# Radar Imaging by Sparse Optimization Incorporating MRF Clustering Prior
Shiyong Li, Moeness Amin, Guoqiang Zhao, and Houjun Sun
Manuscript received, 2018. This work was performed while Dr. S. Li was a Visiting Research Scholar in the Center for Advanced Communications, Villanova University. The work of Shiyong Li, Guoqiang Zhao, and Houjun Sun was supported by the National Natural Science Foundation of China under Grant 61771049. S. Li, G. Zhao, and H. Sun are with the Beijing Key Laboratory of Millimeter Wave and Terahertz Technology, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected]). M. Amin is with the Center for Advanced Communications, Villanova University, Villanova, PA 19085 USA (e-mail: [email protected]).
## I Introduction
The theory of compressive sensing (CS) [1, 2] has generated great interest in radar imaging [3, 4, 5, 6, 7, 8, 9, 10]. Compressive sensing-based radar imaging methods can achieve better image quality than traditional Fourier transform-based techniques under the image sparsity condition and with incomplete observations, as viewed from Nyquist sampling requirements.
However, in many real-world scenarios, there exists other important signal structure information, besides sparsity, which can be exploited for enhanced imaging [11, 12, 13, 14]. For instance, most of the wavelet coefficients of a natural image are both sparse and exhibit a tree structure [11]. Clustering is a structure property in which the image assumes high values in neighboring pixels. It may be exhibited in the image canonical basis or under a sparsifying basis. From a sparse reconstruction perspective, this property is known as _block sparse_, or _clustered sparse_ [12]. In essence, the support of the image spatial pixels assumes high correlations among closely separated pixels, which conforms to the block sparse model.
Image reconstruction approaches incorporating block sparse models were developed in [15, 16]. These approaches combine the \\(\\ell_{1}\\) and \\(\\ell_{2}\\) norms for feature selection, and assume knowledge of group partition information. This assumption is not mandated in the block sparse Bayesian framework proposed in [17], though performance could still be affected by the block size.
Using graph theory, the radar image scene can be cast as an undirected graphical model, also referred to as Markov random field (MRF). MRF provides an effective way for modeling spatial context dependent entities, such as image pixels and correlated features. It has been widely employed in a variety of areas including computer vision, image segmentation [18], and sparse signal recovery [12, 13, 14].
The use of the MRF prior in CS-based magnetic resonance imaging was recently introduced in [19]. The authors extended a constrained split augmented Lagrangian shrinkage algorithm (C-SALSA) [20] with an MRF prior, and solved the problem using the alternating direction method of multipliers (ADMM) [21]. The analysis of the ADMM reveals the sensitivity of this class of algorithms to the penalty parameter underpinning the augmented Lagrangian, where the primal and dual residuals exhibit conflicting requirements.
In this letter, we incorporate the MRF prior with the fast iterative shrinkage/thresholding algorithm (FISTA) [22]. Unlike ADMM, the penalty parameter of FISTA is used to balance the twin objectives of minimizing both the \\(\\ell_{2}\\) error and the signal sparsity, without significantly affecting the algorithm convergence. FISTA also preserves the computational simplicity of the ISTA family, and achieves an \\(O(1/k^{2})\\) convergence rate. To demonstrate performance, we apply the proposed MRF-based FISTA algorithms to inverse synthetic aperture radar (ISAR) imaging, in which scatterers are typically located contiguously, satisfying the clustering property.
The remainder of this letter is as follows. Section II presents the formulation of the MRF-based FISTA. In Section III, we apply the MRF-FISTA to ISAR imaging. Numerical simulations and experimental results are shown in Section IV. Finally, concluding remarks are presented in Section V.
## II MRF-based FISTA
In compressive sensing, we aim to recover a signal \\(\\mathbf{x}\\in\\mathbb{C}^{N}\\) from \\(M\\leq N\\) noisy linear measurements \\(\\mathbf{y}\\in\\mathbb{C}^{M}\\),
\\[\\mathbf{y}=\\mathbf{\\Phi}\\mathbf{x}+\\epsilon \\tag{1}\\]
where \\(\\mathbf{\\Phi}\\in\\mathbb{C}^{M\\times N}\\) denotes a known sensing matrix, and \\(\\epsilon\\in\\mathbb{C}^{M}\\) is white Gaussian noise. In the following analysis, we assume for convenience that \\(\\mathbf{x}\\) is sparse in its canonical basis. If not, we can apply a sparsifying transform \\(\\mathbf{\\Psi}\\), such that \\(\\mathbf{x}\\!=\\!\\mathbf{\\Psi}\\boldsymbol{\\alpha}\\), where \\(\\boldsymbol{\\alpha}\\) is a sparse vector.
The estimation of \\(\\mathbf{x}\\) from \\(\\mathbf{y}\\) in (1) is an ill-posed linear inverse problem, since the sensing matrix \\(\\mathbf{\\Phi}\\) is singular or ill-conditioned. However, if \\(\\mathbf{x}\\) is sparse or has sparse coefficients in some basis \\(\\mathbf{\\Psi}\\), then an accurate estimate of \\(\\mathbf{x}\\) can be obtained by solving the following relaxed convex optimization problem,
\\[\\mathbf{\\hat{x}}=\\arg\\min_{\\mathbf{x}}\\bigg{\\{}\\frac{1}{2}\\|\\mathbf{y}-\\mathbf{\\Phi} \\mathbf{x}\\|_{2}^{2}+\\lambda\\|\\mathbf{x}\\|_{1}\\bigg{\\}} \\tag{2}\\]
where \\(\\|\\mathbf{x}\\|_{1}=\\sum_{i}|x_{i}|\\) denotes the \\(\\ell_{1}\\) norm of \\(\\mathbf{x}\\), and \\(\\|\\mathbf{y}\\|_{2}=(\\sum_{i}|y_{i}|^{2})^{1/2}\\) represents the \\(\\ell_{2}\\) norm of \\(\\mathbf{y}\\).
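As a concrete illustration, the following is a minimal NumPy sketch of FISTA (introduced in Section I) applied to (2). The complex-valued soft-thresholding operator, the iteration count, and the use of \\(\\|\\mathbf{\\Phi}\\|_{2}^{2}\\) as the gradient Lipschitz constant are illustrative assumptions, not the exact configuration used in this letter.

```python
import numpy as np

def soft_threshold(x, tau):
    # Complex-aware soft-thresholding: shrink magnitudes, preserve phases.
    mag = np.abs(x)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * x, 0.0)

def fista(Phi, y, lam, n_iter=200):
    """Solve (2): min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 via FISTA."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1], dtype=complex)
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = Phi.conj().T @ (Phi @ z - y)   # gradient of the l2 data-fidelity term
        x_next = soft_threshold(z - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x
```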
To invoke the clustering property, we model the prior structural information by the probability distribution \\(p(\\mathbf{s})\\), with \\(\\mathbf{s}\\) representing the signal support. The relationship among the signal \\(\\mathbf{x}\\), the signal support \\(\\mathbf{s}\\), and the measurement \\(\\mathbf{y}\\) can be illustrated by an undirected graphical model, shown in Fig. 1. Independence between two variables is displayed as a lack of connection between their corresponding vertices in the graph; conversely, dependent variables correspond to connected vertices. Further, the absence of a direct link between two variables reflects an interaction that is conditional. In this regard, and for the underlying problem, given the signal \\(\\mathbf{x}\\), the signal support \\(\\mathbf{s}\\) and the measurement \\(\\mathbf{y}\\) are independent. Accordingly, the maximum _a posteriori_ (MAP) estimate of the signal support \\(\\mathbf{s}\\) becomes
\\[\\mathbf{\\hat{s}} =\\arg\\max_{\\mathbf{s}}p(\\mathbf{s}|\\mathbf{x},\\mathbf{y})=\\arg \\max_{\\mathbf{s}}p(\\mathbf{s}|\\mathbf{x})\\] \\[=\\arg\\max_{\\mathbf{s}}p(\\mathbf{x}|\\mathbf{s})p(\\mathbf{s}). \\tag{3}\\]
The aforementioned undirected graph, referred to as Markov random field, can be described by an Ising model [19],
\\[p(\\mathbf{s};\\alpha,\\beta)\\!=\\!\\frac{1}{Z}e^{-\\frac{1}{T}\\left[\\,\\sum_{i\\in V}V_{1}(s_{i})\\!+\\!\\sum_{(i,j)\\in E}V_{2}(s_{i},s_{j})\\right]} \\tag{4}\\]
where each vertex has two possible states \\(\\mathbf{s}\\in\\{0,1\\}^{N}\\), and \\(Z\\) and \\(T\\) are constants. The content in the "[]" is the _energy function_, which is a sum of _potentials_ over the single-site cliques \\(V\\) and the pair-site cliques \\(E\\), defined as \\(V_{1}(s)\\!=\\!\\pm\\alpha\\) corresponding to \\(s\\!=\\!0\\) and \\(1\\), respectively, and \\(V_{2}(s_{i},s_{j})\\!=\\!\\pm\\beta\\) corresponding to \\(s_{i}\\!\\neq\\!s_{j}\\) and \\(s_{i}\\!=\\!s_{j}\\), respectively. It is noted that a higher value of \\(\\alpha\\) enforces a sparser signal activity, and a higher value of \\(\\beta\\) implies a stronger spatial correlation.
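To make the prior concrete, the following NumPy sketch evaluates the energy function in (4) on a 2-D support map with a 4-neighborhood. The sign convention for \\(V_{1}\\) (positive potential on active sites) is an assumption chosen to be consistent with the sparsity remark above; the default \\(\\alpha\\) and \\(\\beta\\) values are those used in the simulations of Section IV.

```python
import numpy as np

def ising_energy(s, alpha, beta):
    """Energy in (4): single-site potentials V1 plus pair-site potentials V2
    over a 4-neighborhood, for a support map s in {0,1}^(H x W).
    Assumed sign conventions (see lead-in text):
      V1(si) = +alpha if si = 1 else -alpha    (larger alpha -> sparser support)
      V2(si, sj) = +beta if si != sj else -beta (larger beta -> stronger clustering)
    """
    s = np.asarray(s, dtype=np.int8)
    v1 = alpha * np.where(s == 1, 1.0, -1.0).sum()
    diff_h = s[:, 1:] != s[:, :-1]            # horizontal neighbor pairs
    diff_v = s[1:, :] != s[:-1, :]            # vertical neighbor pairs
    v2 = beta * (np.where(diff_h, 1.0, -1.0).sum()
                 + np.where(diff_v, 1.0, -1.0).sum())
    return v1 + v2

def log_prior(s, alpha=0.01, beta=0.3, T=1.0):
    # Unnormalized log-prior: log p(s) = -energy/T + const (Z is intractable).
    return -ising_energy(s, alpha, beta) / T
```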
Substituting (4) into (3) yields,
\\[\\mathbf{\\hat{s}}=\\arg\\max_{\\mathbf{s}}\\;p(\\mathbf{x}|\\mathbf{s})\\,\\frac{1}{Z}e^{-\\frac{1}{T}\\left[\\,\\sum_{i\\in V}V_{1}(s_{i})\\!+\\!\\sum_{(i,j)\\in E}V_{2}(s_{i},s_{j})\\right]}. \\tag{5}\\]
## III MRF-FISTA Applied to ISAR Imaging
The ISAR geometry, after compensating for the translational motion, is shown in Fig. 2, in which the target rotates around its centroid with a constant angular velocity \\(\\omega\\). The scattered electromagnetic (EM) wave after demodulation is,
\\[Y(f_{n},t_{m})=\\sum_{i=1}^{I}\\sigma_{i}\\exp\\left[-\\mathrm{j}2\\pi f_{n}\\frac{2R _{i}(t_{m})}{c}\\right], \\tag{12}\\]
where \\(\\sigma_{i}\\) denotes the backscattering coefficient of the \\(i\\)th point scatterer, \\(I\\) is the total number of pixels in the target scene, \\(f_{n}\\) represents the operating frequency: \\(f_{n}\\!=\\!f_{0}\\!+\\!n\\Delta f\\), where \\(f_{0}\\) and \\(\\Delta f\\) denote the start frequency and the frequency step of the transmitted signal, respectively, and \\(n\\) is an integer from the set \\([0,N-1]\\). The variable \\(t_{m}\\) represents the discrete slow time in the cross-range dimension, with \\(m\\) an integer from the set \\([0,M-1]\\), \\(c\\) is the speed of EM-wave propagation in free space, and \\(R_{i}(t_{m})\\) is given by,
\\[R_{i}(t_{m})\\!=\\!\\sqrt{R_{0}^{2}+x_{i}^{2}+y_{i}^{2}+2R_{0}[-x_{i}\\sin\\omega t_ {m}+y_{i}\\cos\\omega t_{m}]}, \\tag{13}\\]
representing the instantaneous range between the \\(i\\)th scatterer and the radar, where \\(R_{0}\\) denotes the distance from the target centroid to the radar.
According to (12), we can formulate the elements of the sensing matrix \\(\\mathbf{\\Phi}\\) as \\(\\phi_{l,i}=\\exp\\left[-\\mathrm{j}2\\pi f_{n}\\frac{2R_{i}(t_{m})}{c}\\right]\\), where \\(n=(l\\mod N)\\) and \\(m=\\lfloor\\frac{l}{N}\\rfloor\\). Here, "mod" denotes the modulo operation, "\\(\\lfloor\\cdot\\rfloor\\)" denotes rounding toward negative infinity, \\(l=0,1,\\cdots,MN-1\\), and \\(i=0,1,\\cdots,I-1\\).
The target scene can be viewed as a matrix \\(\\mathbf{X}\\) of size \\(N\\times M\\) (assuming \\(I=N\\times M\\)). Thus, the forward model of radar imaging can be expressed as \\(\\text{vec}(\\mathbf{Y})=\\mathbf{\\Phi}\\text{vec}(\\mathbf{X})\\), where \\(\\mathbf{Y}\\) denotes the matrix form of the received data (with zero padding at the positions where there are no samples), and \\(\\text{vec}(\\cdot)\\) represents vectorization of a matrix. Then, the image \\(\\mathbf{X}\\) can be reconstructed by the proposed MRF-FISTA.
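A NumPy sketch of assembling \\(\\mathbf{\\Phi}\\) from (12) and (13) is given below. The function signature and the flat coordinate arrays are illustrative assumptions, and forming the full matrix is practical only for small scenes, which motivates the Fourier-based alternatives discussed next.

```python
import numpy as np

C0 = 299792458.0  # speed of EM-wave propagation in free space (m/s)

def isar_sensing_matrix(f0, df, N, tm, R0, xs, ys, omega):
    """Build Phi per (12)-(13). tm: slow-time samples, shape (M,);
    xs, ys: pixel coordinates of the target scene, shape (I,)."""
    fn = f0 + df * np.arange(N)                       # stepped frequencies, shape (N,)
    x, y, t = xs[None, :], ys[None, :], tm[:, None]
    # Instantaneous range R_i(t_m) per (13), shape (M, I)
    R = np.sqrt(R0**2 + x**2 + y**2
                + 2.0 * R0 * (-x * np.sin(omega * t) + y * np.cos(omega * t)))
    # phi[l, i] with l = m*N + n, so n = l mod N and m = floor(l / N)
    phase = -1j * 2.0 * np.pi * fn[None, :, None] * 2.0 * R[:, None, :] / C0
    return np.exp(phase).reshape(-1, R.shape[1])      # shape (M*N, I)
```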
If the size of the sensing matrix is too large, we can use the method in [25] to change \\(\\mathbf{\\Phi}\\) into a 2-D Fourier transform with a compensation for the spherical wavefront. In addition, if the distance \\(R_{0}\\) is sufficiently large, (13) can be approximated as \\(R_{i}(t_{m})\\!\\approx\\!R_{0}\\!\\!-\\!x_{i}\\sin\\omega t_{m}\\!+\\!y_{i}\\cos\\omega t_ {m}\\). In this case, the sensing matrix can also be replaced by a Fourier transform.
## IV Simulations and Experimental Results
This section shows the performance of the MRF-FISTA based ISAR imaging method using both simulation and real data.
In order to show the performance sensitivity of the proposed method and LaSAL2 with respect to their corresponding penalty parameters, we compare the convergence of the algorithms with different penalties. Here, we employ the root mean squared error (RMSE) to assess convergence, given by \\(\\text{RMSE}=\\sqrt{\\frac{1}{NM}\\sum_{n=0}^{N-1}\\sum_{m=0}^{M-1}\\left[\\mathbf{X}(n,m)-\\widehat{\\mathbf{X}}(n,m)\\right]^{2}}\\), where \\(\\mathbf{X}\\) and \\(\\widehat{\\mathbf{X}}\\) denote the target model and the recovered image, respectively.
Fig. 5 shows the RMSEs of LaSAL2 with different starting values of the penalty parameter \\(\\mu\\), which underlies the augmented Lagrangian, for different data sets. The SNR is 20 dB for this simulation. Note that smaller penalty parameters can result in divergence for larger data sets. A larger penalty yields a better solution, albeit with a slower convergence rate. Thus, for such methods, it becomes critical to choose a proper starting penalty parameter.
Unlike the ADMM-based algorithms, the penalty parameter \\(\\lambda\\) of FISTA is only employed to trade off the \\(\\ell_{2}\\) error against the signal sparsity. Fig. 6 shows the RMSEs of MRF-FISTA with different values of \\(\\lambda\\). Clearly, the convergence rate remains almost invariant across different penalty parameters.
Next, we include another performance metric, _entropy_, for quantitative comparisons. The Shannon entropy of an image \\(\\mathbf{X}\\) is defined as \\(H(\\mathbf{X})=-\\sum_{k=1}^{K}p_{k}(\\mathbf{X}_{k})\\log_{2}p_{k}(\\mathbf{X}_{k})\\), where \\((p_{1},p_{2},\\cdots,p_{K})\\) is a finite discrete probability distribution, and \\(p_{k}\\) contains the normalized histogram counts within a fixed bin \\(\\mathbf{X}_{k}\\), i.e., it is a function of pixel intensity in an image. The distribution of pixel intensity is commensurate with the degree of image focus.
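Both metrics are straightforward to compute; a NumPy sketch follows, in which the number of histogram bins \\(K=256\\) is an assumed value.

```python
import numpy as np

def rmse(X, X_hat):
    # Root mean squared error between the target model and the recovered image.
    return np.sqrt(np.mean(np.abs(X - X_hat) ** 2))

def image_entropy(X, K=256):
    # Shannon entropy from a K-bin normalized histogram of pixel intensities.
    counts, _ = np.histogram(np.abs(X), bins=K)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 * log2(0) = 0
    return -np.sum(p * np.log2(p))
```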
Fig. 7 shows the RMSEs and entropy of the imaging results obtained by MRF-FISTA, FISTA, and LaSAL2, respectively, for different data ratios. The SNR is fixed to 10 dB. We perform 50 independent runs for each data ratio to obtain the mean values of the RMSEs and entropy. It is noted from Fig. 7(a) that MRF-FISTA clearly improves over FISTA and LaSAL2 by at least 3 dB and 4 dB, respectively, when the data ratios are above 20%. Also, the entropy of the results obtained by MRF-FISTA is much smaller than those of FISTA and LaSAL2, indicating that the imaging results of MRF-FISTA are the best focused among the three algorithms, as demonstrated in Fig. 7(b).
The parameters of the Ising model were chosen as: \\(\\alpha=0.01\\) and \\(\\beta=0.3\\) for the above simulations.
### _Experimental results_
Here, we use measurements from an unmanned aerial vehicle. The radar frequencies in the experiments vary from 34.5 GHz to 35.3 GHz with 128 steps. The total observation angle is 2 degrees, also with 128 steps.
Fig. 8 depicts the imaging results of MRF-FISTA, FISTA, LaSAL2 and BP, using 30% of the fully sampled data. It is seen that the result of MRF-FISTA exhibits the best quality, with much less noise-like clutter, among all the algorithms.
Fig. 4: Imaging results by (a) MRF-FISTA, (b) FISTA, (c) LaSAL2, and (d) BP, respectively, using 30% of the full data with SNR=20dB.
Fig. 5: RMSEs of LaSAL2 with respect to different penalty parameters \\(\\mu\\) by using (a) 20%, (b) 30% of the full data set.
Fig. 6: RMSEs of MRF-FISTA with respect to different penalty parameters \\(\\lambda\\) by using (a) 20%, (b) 30% of the full data set.
Fig. 7: (a) RMSE, and (b) entropy of the reconstructed images versus different data ratios.
Since we do not have a benchmark for the real-data imaging, we present the entropy of the images to assess the quantitative performance of the algorithms when processing the real data. This is shown in Fig. 9. Clearly, the precision of MRF-FISTA is still higher than that of FISTA and LaSAL2, with much smaller entropy.
## V Conclusion
This letter proposed an MRF-based FISTA as an effective image reconstruction technique for cluster-type images, including those of ISAR. The MRF is a powerful tool for modeling spatial context dependent entities. It was used in this letter as a prior to describe the high spatial correlations in an image. The MRF prior was incorporated into a sparsity-based optimization algorithm, and shown to yield higher image reconstruction precision compared to existing techniques. Simulations and experimental results demonstrated that the proposed algorithm improves over the original FISTA and a recently introduced MRF-based ADMM algorithm when considering imaging errors as well as convergence rate.
## References
* [1] E. J. Candes, J. Romberg, and T. Tao, \"Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,\" _IEEE Trans. Inf. Theory_, vol. 52, pp. 489-509, Feb 2006.
* [2] D. L. Donoho, \"Compressed sensing,\" _IEEE Trans. Inf. Theory_, vol. 52, pp. 1289-1306, April 2006.
* [3] R. Baraniuk and P. Steeghs, \"Compressive radar imaging,\" in _2007 IEEE Radar Conference_, pp. 128-133, April 2007.
* [4] M. A. Herman and T. Strohmer, \"High-resolution radar via compressed sensing,\" _IEEE Trans. Sig. Proc._, vol. 57, pp. 2275-2284, June 2009.
* [5] L. C. Potter, E. Ertin, J. T. Parker, and M. Cetin, \"Sparsity and compressed sensing in radar imaging,\" Proc. IEEE, vol. 98, pp. 1006-1020, June 2010.
* [6] M. Amin, _Through-the-Wall Radar Imaging_. Taylor & Francis, 2010.
* [7] Y. Yoon and M. G. Amin, \"Through-the-wall radar imaging using compressive sensing along temporal frequency domain,\" in _2010 IEEE International Conference on Acoustics, Speech and Signal Processing_, pp. 2806-2809, March 2010.
* [8] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou, \"A tutorial on synthetic aperture radar,\" _IEEE Geosci. Remote Sens. Mag._, vol. 1, pp. 6-43, March 2013.
* [9] M. Amin, _Compressive Sensing for Urban Radar_. Boca Raton, FL, USA: CRC Press, 2014.
* [10] M. Cetin, I. Stojanovic, O. Onhon, K. Varshney, S. Samadi, W. C. Karl, and A. S. Willsky, "Sparsity-driven synthetic aperture radar imaging: Reconstruction, autofocusing, moving targets, and compressed sensing," _IEEE Signal Process. Mag._, vol. 31, pp. 27-40, July 2014.
* [11] S. Som and P. Schniter, "Compressive imaging using approximate message passing and a Markov-tree prior," _IEEE Trans. Signal Process._, vol. 60, pp. 3439-3448, July 2012.
* [12] V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk, "Sparse signal recovery using Markov random fields," in _Advances in Neural Information Processing Systems 21_, pp. 257-264, 2009.
* [13] L. Wang, L. Zhao, G. Bi, and C. Wan, \"Sparse representation-based ISAR imaging using Markov random fields,\" _IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens._, vol. 8, pp. 3941-3953, Aug 2015.
* [14] X. Wang, G. Li, Y. Liu, and M. G. Amin, \"Two-level block matching pursuit for polarimetric through-wall radar imaging,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, pp. 1533-1545, March 2018.
* [15] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, \"Sparse reconstruction by separable approximation,\" _IEEE Trans. Signal Process._, vol. 57, pp. 2479-2493, July 2009.
* [16] L. Yuan, J. Liu, and J. Ye, "Efficient methods for overlapping group Lasso," _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 35, pp. 2104-2116, Sept 2013.
* [17] Z. Zhang and B. D. Rao, \"Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning,\" _IEEE J. Sel. Topics Signal Process._, vol. 5, pp. 912-926, Sept 2011.
* [18] S. Z. Li, _Markov Random Field Modeling in Image Analysis_. Springer Publishing Company, Incorporated, 3rd ed., 2009.
* [19] M. Panić, J. Aelterman, V. Crnojević, and A. Pižurica, "Sparse recovery in magnetic resonance imaging with a Markov random field prior," _IEEE Trans. Med. Imag._, vol. 36, pp. 2104-2115, Oct 2017.
* [20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, \"An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,\" _IEEE Trans. Image Process._, vol. 20, pp. 681-695, March 2011.
* [21] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, \"Distributed optimization and statistical learning via the alternating direction method of multipliers,\" _Found. Trends Mach. Learn._, vol. 3, pp. 1-122, Jan. 2011.
* [22] A. Beck and M. Teboulle, \"Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,\" _IEEE Trans. Image Process._, vol. 18, pp. 2419-2434, Nov 2009.
* [23] V. Cevher, P. Indyk, L. Carin, and R. G. Baraniuk, \"Sparse signal recovery and acquisition with graphical models,\" _IEEE Signal Process. Mag._, vol. 27, pp. 92-103, Nov 2010.
* [24] Q. Xu, D. Yang, J. Tan, A. Sawatzky, and M. A. Anastasio, "Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction," _Med. Phys._, vol. 43, pp. 1849-1872, April 2016.
* [25] S. Li, G. Zhao, H. Li, et al., "Near-field radar imaging via compressive sensing," _IEEE Trans. Antennas Propag._, vol. 63, pp. 828-833, Feb 2015.
Fig. 8: Imaging results by (a) MRF-FISTA, (b) FISTA, (c) LaSAL2, and (d) BP, respectively, using 30% of the full data.
Fig. 9: Entropy of the reconstructed images versus different data ratios.
# Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction
Yuanhao Cai \\({}^{1,2,*}\\), Jing Lin \\({}^{1,2,*}\\), Xiaowan Hu \\({}^{1,2}\\), Haoqian Wang \\({}^{1,2,\\dagger}\\),
Xin Yuan \\({}^{3}\\), Yulun Zhang \\({}^{4}\\), Radu Timofte \\({}^{4}\\), and Luc Van Gool \\({}^{4}\\)
\\({}^{1}\\) Shenzhen International Graduate School, Tsinghua University,
\\({}^{2}\\) Shenzhen Institute of Future Media Technology, \\({}^{3}\\) Westlake University, \\({}^{4}\\) ETH Zurich
\\({}^{*}\\) Equal Contribution, \\({}^{\\dagger}\\) Corresponding Author
## 1 Introduction
Hyperspectral imaging refers to multi-channel imaging where each channel captures the information at a specific spectral wavelength for a real-world scene. Generally, hyperspectral images (HSIs) have more spectral bands than normal RGB images to store richer information and delineate more detailed characteristics of the imaged scene. Relying on this property, HSIs have been widely applied to many computer vision related tasks, _e.g._, remote sensing [5, 34, 59], object tracking [21, 40], medical image processing [3, 33, 37], _etc._ To collect HSIs, conventional imaging systems with spectrometers scan the scenes along the spatial or spectral dimension, usually requiring a long time. Therefore, these traditional imaging systems are unsuitable for capturing and measuring dynamic scenes. Recently, researchers have used snapshot compressive imaging (SCI) systems to capture HSIs. These SCI systems compress information of snapshots along the spectral dimension into one single 2D measurement [58]. Among current existing SCI systems [10, 16, 32, 45, 47], the coded aperture snapshot spectral imaging (CASSI) [36, 45] stands out and forms one promising mainstream research direction.
Based on CASSI, a large number of reconstruction algorithms have been proposed to recover the 3D HSI cube from the 2D measurement. Conventional model-based methods adopt hand-crafted priors such as sparsity [29, 46, 23], total variation [24, 51, 57], and non-local similarity [52, 30, 51] to regularize the reconstruction procedure. However, these methods need to tweak parameters manually, resulting in poor generalization ability, unsatisfactory reconstruction quality, and slow restoration speed. With the development of deep learning, HSI reconstruction has witnessed significant progress. Deep convolutional neural networks (CNNs) apply powerful models to learn the end-to-end mapping function from the 2D measurement to the 3D HSI cube. Although impressive results have been achieved, CNN-based methods [20, 35, 36, 39] show limitations in modeling the inter-spectra similarity and long-range dependencies. Besides, the HSIs are modulated by a physical mask in CASSI. Nonetheless, previous CNN-based methods [35, 36, 38, 48] mainly adopt the inner product between the mask and the shifted measurement as the input. This scheme corrupts the input HSI information and does not fully explore the _guidance effect of the mask_, leading to limited improvement.

Figure 1: PSNR-Params-FLOPS comparisons with CNN-based HSI reconstruction methods. The vertical axis is PSNR (in dB), the horizontal axis is FLOPS (computational cost), and the circle radius is Params (memory cost). Our proposed Mask-guided Spectral-wise Transformers (MSTs) outperform previous methods while requiring significantly cheaper FLOPS and Params.
In recent years, the natural language processing (NLP) model, Transformer [44], has been introduced into computer vision and outperformed CNN methods in many tasks. The Multi-head Self-Attention (MSA) module in Transformer excels at capturing non-local similarity and long-range dependencies. This advantage provides a possibility to address the aforementioned limitations of CNN-based methods in HSI reconstruction. However, directly applying the original Transformer may be unsuitable for HSI restoration due to the following reasons. **Firstly**, original Transformers learn to capture the long-range dependencies in spatial wise but the representations of HSIs are spectrally highly self-similar. In this case, the inter-spectra similarity and correlations are not well modeled. Meanwhile, the spectral information is spatially sparse. Capturing spatial interactions may be less cost-effective than modeling spectral correlations with the same resources. **Secondly**, the HSI representations are modulated by the mask in the CASSI system. The original Transformer without sufficient guidance may easily attend to many low-fidelity and less informative image regions when calculating self-attention. This may degrade the model efficiency. **Thirdly**, when using the original global Transformer [15], the computational complexity is quadratic to the spatial size. This burden is non-trivial and sometimes unaffordable. When using the local window-based Transformer [31], the receptive fields of the MSA module are limited within the position-specific windows and some highly related tokens may be neglected.
To cope with the above problems, we propose a novel method, Mask-guided Spectral-wise Transformer (MST), for HSI reconstruction. **Firstly**, in Fig. 2 (a), we observe that each spectral channel of HSIs captures an incomplete part of the same scene due to the constraints of the specific wavelength. This indicates that the HSI representations are similar and complementary along the spectral dimension. Hence, we propose a Spectral-wise MSA (S-MSA) to capture the long-range inter-spectra dependencies. Specifically, S-MSA treats each spectral channel feature as a token and calculates the self-attention along the spectral dimension. **Secondly**, in Fig. 2 (b), a mask is used in the CASSI system to modulate HSIs. The light transmittance of different positions on the mask varies significantly. This indicates that the fidelity of the modulated spectral information is position-sensitive. Therefore, we exploit the mask as a key clue and present a novel Mask-guided Mechanism (MM) that directs the S-MSA module to pay attention to the regions with high-fidelity spectral representations. Meanwhile, MM also alleviates the limitation of S-MSA in modeling the spatial correlations of HSI representations. **Finally**, with our proposed techniques, we establish a series of extremely efficient MST models that surpass state-of-the-art (SOTA) methods by a large margin, as illustrated in Fig. 1.
Our contributions can be summarized as follows:
* We propose a new method, MST, for HSI reconstruction. To the best of our knowledge, it is the first attempt to explore the potential of Transformer in this task.
* We present a novel self-attention, S-MSA, to capture the inter-spectra similarity and dependencies of HSIs.
* We customize an MM that directs S-MSA to pay attention to regions with high-fidelity HSI representations.
* Our MST dramatically outperforms SOTA methods on all scenes in simulation while requiring much cheaper Params and FLOPS. Besides, MST yields more visually pleasant results in real-world HSI reconstruction.
## 2 Related Work
### HSI Reconstruction
Traditional HSI reconstruction methods [18, 23, 29, 30, 43, 52, 61, 46, 57] are mainly based on hand-crafted priors. For example, GAP-TV [57] introduces the total variation prior. DeSCI [30] exploits the low-rank property and non-local self-similarity. However, these model-based methods achieve unsatisfactory performance and generality due to the poor representing capacity. Recently, deep CNNs have been applied to learn the end-to-end mapping function of HSI reconstruction [19, 20, 36, 39, 49] to achieve promising performance. TSA-Net [36] uses three spatial-spectral self-attention modules to capture the dependencies in compressed spatial or spectral dimensions. The additional costs are nontrivial while the improvement is limited. DGSMP [20] suggests an interpretable HSI restoration method with learned Gaussian Scale Mixture (GSM) prior. These CNN-based methods yield impressive performance but show limitations in modeling inter-spectra similarity and correlations. Besides, the _guidance effect of the mask_ is under-studied.
### Vision Transformer
Transformer was first proposed in [44] for machine translation. Recently, Transformer has achieved great success in many high-level vision tasks, such as image classification [2, 15, 31, 17], object detection [1, 64, 13, 60], segmentation [8, 55, 63], human pose estimation [25, 26, 56, 7], _etc_. Due to its promising performance, Transformer has also been introduced into low-level vision [6, 9, 11, 14, 27, 28, 54]. SwinIR [27] uses Swin Transformer [31] blocks to build up a residual network and achieves SOTA results in image restoration. However, these Transformers mainly aim to capture long-range dependencies of spatial regions. As for spectrally self-similar and mask-modulated HSIs, directly applying previous Transformers may be less effective in capturing spectral-wise correlations. In addition, the MSA may pay attention to less informative spatial regions.
## 3 CASSI System
A concise CASSI principle is shown in Fig. 2 (b). Given a 3D HSI cube, denoted by \\(\\mathbf{F}\\in\\mathbb{R}^{H\\times W\\times N_{\\lambda}}\\), where \\(H\\), \\(W\\), and \\(N_{\\lambda}\\) represent the HSI's height, width, and number of wavelengths, respectively. \\(\\mathbf{F}\\) is firstly modulated by the coded aperture (physical mask) \\(\\mathbf{M}^{*}\\in\\mathbb{R}^{H\\times W}\\) as
\\[\\mathbf{F}^{\\prime}(:,:,n_{\\lambda})=\\mathbf{F}(:,:,n_{\\lambda})\\odot\\mathbf{ M}^{*}, \\tag{1}\\]
where \\(\\mathbf{F}^{\\prime}\\) denotes the modulated HSIs, \\(n_{\\lambda}\\in[1,\\ldots,N_{\\lambda}]\\) indexes the spectral channels, and \\(\\odot\\) denotes the element-wise multiplication. After passing through the disperser, \\(\\mathbf{F}^{\\prime}\\) becomes tilted and is considered to be sheared along the \\(y\\)-axis. We use \\(\\mathbf{F}^{\\prime\\prime}\\in\\mathbb{R}^{H\\times(W+d(N_{\\lambda}-1))\\times N_{ \\lambda}}\\) to denote the tilted HSI cube, where \\(d\\) represents the shifting step. We assume \\(\\lambda_{c}\\) to be the reference wavelength, _i.e_., \\(\\mathbf{F}^{\\prime\\prime}(:,:,n_{\\lambda_{c}})\\) is not sheared along the \\(y\\)-axis. Then we have
\\[\\mathbf{F}^{\\prime\\prime}(u,v,n_{\\lambda})=\\mathbf{F}^{\\prime}(x,y+d(\\lambda_ {n}-\\lambda_{c}),n_{\\lambda}), \\tag{2}\\]
where \\((u,v)\\) represents the coordinate system on the detector plane, \\(\\lambda_{n}\\) denotes the wavelength of the \\(n_{\\lambda}\\)-th channel, and \\(d(\\lambda_{n}-\\lambda_{c})\\) indicates the spatial shifting for the \\(n_{\\lambda}\\)-th channel on \\(\\mathbf{F}^{\\prime\\prime}\\). Finally, the captured 2D compressed measurement \\(\\mathbf{Y}\\in\\mathbb{R}^{H\\times(W+d(N_{\\lambda}-1))}\\) can be obtained by
\\[\\mathbf{Y}=\\sum_{n_{\\lambda}=1}^{N_{\\lambda}}\\mathbf{F}^{\\prime\\prime}(:,:,n_ {\\lambda})+\\mathbf{G}, \\tag{3}\\]
where \\(\\mathbf{G}\\in\\mathbb{R}^{H\\times(W+d(N_{\\lambda}-1))}\\) is the imaging noise on the measurement, generated by the photon sensing detector.
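For reference, a NumPy sketch of the forward model in (1)-(3) is given below, together with the shift-back that reverses the dispersion (used to initialize the reconstruction in Sec. 4.1). Discretizing the shift \\(d(\\lambda_{n}-\\lambda_{c})\\) as \\(d\\cdot n\\) pixels for the \\(n\\)-th channel is an illustrative simplification.

```python
import numpy as np

def cassi_forward(F, mask, d=2):
    """Eqs. (1)-(3): modulate the HSI cube F (H, W, N_lambda) with the coded
    aperture mask (H, W), shear each channel along y, and sum to a 2D measurement."""
    H, W, Nl = F.shape
    Fp = F * mask[:, :, None]                    # Eq. (1): element-wise modulation
    Y = np.zeros((H, W + d * (Nl - 1)))          # detector plane
    for n in range(Nl):
        Y[:, d * n : d * n + W] += Fp[:, :, n]   # Eqs. (2)-(3): shear and sum
    return Y                                     # imaging noise G omitted

def shift_back(Y, Nl, W, d=2):
    """Reverse the dispersion to obtain the initialized input cube (cf. Sec. 4.1)."""
    return np.stack([Y[:, d * n : d * n + W] for n in range(Nl)], axis=-1)
```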
## 4 Method
### Overall Architecture
The overall architecture of MST is shown in Fig. 3 (a). We adopt a U-shaped structure that consists of an encoder, a bottleneck, and a decoder. MST is built up by Mask-guided Spectral-wise Attention Blocks (MSAB). Firstly, we reverse the dispersion process (Eq. (2)) and shift back the measurement to obtain the initialized signal \\(\\mathbf{H}\\in\\mathbb{R}^{H\\times W\\times N_{\\lambda}}\\) as
\\[\\mathbf{H}(x,y,n_{\\lambda})=\\mathbf{Y}(x,y-d(\\lambda_{n}-\\lambda_{c})). \\tag{4}\\]
Then we feed \\(\\mathbf{H}\\) into the model. **Firstly**, MST exploits a _conv3\\(\\times\\)3_ (convolution with kernel size = 3) layer to map \\(\\mathbf{H}\\) into feature \\(\\mathbf{X}_{0}\\in\\mathbb{R}^{H\\times W\\times C}\\). **Secondly**, \\(\\mathbf{X}_{0}\\) undergoes \\(N_{1}\\) MSABs, a downsample module, \\(N_{2}\\) MSABs, and a downsample module to generate hierarchical features. The downsample module is a strided _conv4\\(\\times\\)4_ layer that downscales the feature maps and doubles the channels. Therefore, the feature of the \\(i\\)-th stage of the encoder is denoted as \\(\\mathbf{X}_{i}\\in\\mathbb{R}^{\\frac{H}{2^{i}}\\times\\frac{W}{2^{i}}\\times 2^{i}C}\\). **Thirdly**, \\(\\mathbf{X}_{2}\\) passes through the bottleneck that consists of \\(N_{3}\\) MSABs. **Subsequently**, we follow the spirit of U-Net [42] and design a symmetrical structure as the decoder. In particular, the upsample module is a strided _deconv2\\(\\times\\)2_ layer. The skip connections are exploited for feature aggregation between the encoder and decoder to alleviate the information loss caused by the downsample operations. Similarly, the feature of the \\(i\\)-th stage of the decoder is denoted as \\(\\mathbf{X}_{i}^{\\prime}\\in\\mathbb{R}^{\\frac{H}{2^{i}}\\times\\frac{W}{2^{i}}\\times 2^{i}C}\\). After passing through the decoder, the feature maps undergo a _conv3\\(\\times\\)3_ layer to generate the residual HSIs \\(\\mathbf{R}\\in\\mathbb{R}^{H\\times W\\times N_{\\lambda}}\\). **Finally**, the reconstructed HSIs \\(\\mathbf{H}^{\\prime}\\in\\mathbb{R}^{H\\times W\\times N_{\\lambda}}\\) can be obtained by the sum of \\(\\mathbf{R}\\) and \\(\\mathbf{H}\\), _i.e._, \\(\\mathbf{H}^{\\prime}=\\mathbf{H}+\\mathbf{R}\\).

Figure 2: Illustration of the proposed method. Our Mask-guided Spectral-wise Multi-head Self-Attention (MS-MSA) is motivated by the HSI characteristics and CASSI system. (a) The representations of HSIs are spatially sparse while spectrally correlated. (b) The CASSI system uses a mask to modulate the HSIs. (c) Our MS-MSA in stage 0 of MST. (c1) S-MSA treats each spectral feature as a token and calculates self-attention along the spectral dimension. (c2) Mask-guided Mechanism directs the Spectral-wise MSA to pay attention to spatial regions with high-fidelity HSI representations. Some components are omitted for simplification. Please refer to the text for details.
In implementation, we set \\(C\\) to 28 and change the combination \\((N_{1},N_{2},N_{3})\\) to establish a series of MST models with small, medium, and large model sizes and computation costs: MST-S (2,2,2), MST-M (2,4,4), and MST-L (4,7,5).
The basic unit of MST is MSAB. As shown in Fig. 3 (b), MSAB consists of two layer normalization layers, a Mask-guided Spectral-wise MSA (MS-MSA), and a Feed-Forward Network (FFN). The details of FFN are depicted in Fig. 3 (c).
### Spectral-wise Multi-head Self-Attention
The non-local self-similarity is often exploited in HSI reconstruction but is usually not well modeled by CNN-based methods. Due to the effectiveness of Transformer in capturing non-local long-range dependencies and its impressive performance in other vision tasks, we aim to explore the potential of Transformer in HSI reconstruction. However, there are two main issues when directly applying Transformer to HSI restoration. The first problem is that original Transformers model long-range dependencies in spatial dimensions. But the HSI representations are spatially sparse and spectrally correlated, as shown in Fig. 2 (a). Capturing spatial-wise interactions may be less cost-effective than modeling spectral-wise correlations. Hence, we propose S-MSA that treats each spectral feature map as a token and calculates self-attention along the spectral dimension. Fig. 2 (c1) shows the S-MSA used in stage 0 of MST. The input \\(\\mathbf{X}_{in}\\in\\mathbb{R}^{H\\times W\\times C}\\) is reshaped into tokens \\(\\mathbf{X}\\in\\mathbb{R}^{HW\\times C}\\). Then \\(\\mathbf{X}\\) is linearly projected into _query_\\(\\mathbf{Q}\\in\\mathbb{R}^{HW\\times C}\\), _key_\\(\\mathbf{K}\\in\\mathbb{R}^{HW\\times C}\\), and _value_\\(\\mathbf{V}\\in\\mathbb{R}^{HW\\times C}\\):
\\[\\mathbf{Q}=\\mathbf{X}\\mathbf{W}^{\\mathbf{Q}},\\mathbf{K}=\\mathbf{X}\\mathbf{W}^{ \\mathbf{K}},\\mathbf{V}=\\mathbf{X}\\mathbf{W}^{\\mathbf{V}}, \\tag{5}\\]
where \\(\\mathbf{W}^{\\mathbf{Q}}\\), \\(\\mathbf{W}^{\\mathbf{K}}\\), and \\(\\mathbf{W}^{\\mathbf{V}}\\in\\mathbb{R}^{C\\times C}\\) are learnable parameters; \\(biases\\) are omitted for simplification. Subsequently, we respectively split \\(\\mathbf{Q}\\), \\(\\mathbf{K}\\), and \\(\\mathbf{V}\\) into \\(N\\)_heads_ along the spectral channel dimension: \\(\\mathbf{Q}=[\\mathbf{Q}_{1},\\dots,\\mathbf{Q}_{N}]\\), \\(\\mathbf{K}=[\\mathbf{K}_{1},\\dots,\\mathbf{K}_{N}]\\), and \\(\\mathbf{V}=[\\mathbf{V}_{1},\\dots,\\mathbf{V}_{N}]\\). The dimension of each head is \\(d_{h}=\\frac{C}{N}\\). Please note that Fig. 2 (c1) depicts the situation with \\(N\\) = 1 and some details are omitted for simplification. Different from original MSAs, our S-MSA treats each spectral representation as a token and calculates the self-attention for each \\(head_{j}\\):
\\[\\mathbf{A}_{j}=\\text{softmax}(\\sigma_{j}\\mathbf{K}_{j}^{\\mathsf{T}}\\mathbf{Q} _{j}),\\;\\;head_{j}=\\mathbf{V}_{j}\\mathbf{A}_{j}, \\tag{6}\\]
where \\(\\mathbf{K}_{j}^{\\mathsf{T}}\\) denotes the transposed matrix of \\(\\mathbf{K}_{j}\\). Because the spectral density varies significantly with respect to the wavelengths, we use a learnable parameter \\(\\sigma_{j}\\in\\mathbb{R}^{1}\\) to adapt the self-attention \\(\\mathbf{A}_{j}\\) by re-weighting the matrix multiplication \\(\\mathbf{K}_{j}^{\\mathsf{T}}\\mathbf{Q}_{j}\\) inside \\(head_{j}\\). Subsequently, the outputs of the \\(N\\)_heads_ are concatenated along the spectral dimension, undergo a linear projection, and are then added to a position embedding:
\\[\\text{S-MSA}(\\mathbf{X})=\\big{(}\\operatorname*{\\textsl{Concat}}_{j=1}^{N} (head_{j})\\big{)}\\mathbf{W}+f_{p}(\\mathbf{V}), \\tag{7}\\]
where \\(\\mathbf{W}\\in\\mathbb{R}^{C\\times C}\\) are learnable parameters, and \\(f_{p}(\\cdot)\\) is the function that generates the position embedding. It consists of two depth-wise _conv3\\(\\times\\)3_ layers, a GELU activation, and reshape operations. The HSIs are sorted by wavelength along the spectral dimension. Therefore, we exploit this embedding to encode the position information of the different spectral channels. Finally, we reshape the result of Eq. (7) to obtain the output feature maps \\(\\mathbf{X}_{out}\\in\\mathbb{R}^{H\\times W\\times C}\\).

Figure 3: The overall architecture of MST. (a) MST adopts a U-shaped structure that consists of an encoder, a bottleneck, and a decoder. (b) MSAB is composed of a Feed-Forward Network (FFN), an MS-MSA, and two layer normalization layers. (c) The components of FFN.
We analyze the computational complexity of S-MSA and compare it with other MSAs. We only compare the main difference, _i.e._, the self-attention mechanism in Eq. (6):
\\[\\begin{split} O(\\text{S-MSA})=\\frac{2HWC^{2}}{N},\\;O(\\text{G- MSA})=2(HW)^{2}C,\\\\ O(\\text{W-MSA})=2(M^{2})^{2}(\\frac{HW}{M^{2}})C=2M^{2}HWC,\\end{split} \\tag{8}\\]
where G-MSA denotes the original global MSA [15], W-MSA denotes the local window-based MSA [31], and \\(M\\) represents the window size. The computational complexity of S-MSA and W-MSA is linear in the spatial size \\(HW\\). This cost is much cheaper than that of G-MSA (quadratic in \\(HW\\)). Meanwhile, S-MSA treats a whole spectral feature map as a token. Thus, the receptive field of our S-MSA is global and not limited to position-specific windows.
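To make the mechanism concrete, a PyTorch sketch of S-MSA following Eqs. (5)-(7) is given below. It is a minimal rendering of the equations, not the released implementation; details such as biases, feature normalization, or initialization may differ.

```python
import torch
import torch.nn as nn

class SpectralMSA(nn.Module):
    """S-MSA sketch: tokens are spectral channels, so each head computes only a
    (d_h x d_h) attention map and the cost is linear in the spatial size HW."""
    def __init__(self, dim, heads):
        super().__init__()
        self.heads, self.dh = heads, dim // heads
        self.to_q = nn.Linear(dim, dim, bias=False)          # W^Q
        self.to_k = nn.Linear(dim, dim, bias=False)          # W^K
        self.to_v = nn.Linear(dim, dim, bias=False)          # W^V
        self.sigma = nn.Parameter(torch.ones(heads, 1, 1))   # per-head re-weighting
        self.proj = nn.Linear(dim, dim)                      # W in Eq. (7)
        self.pos_emb = nn.Sequential(                        # f_p: depth-wise convs + GELU
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim))

    def forward(self, x):                                    # x: (B, H, W, C)
        B, H, W, C = x.shape
        tok = x.reshape(B, H * W, C)
        q, k, v = self.to_q(tok), self.to_k(tok), self.to_v(tok)
        # Split into heads along the spectral dimension: (B, heads, HW, dh).
        q, k, v = (t.reshape(B, H * W, self.heads, self.dh).transpose(1, 2)
                   for t in (q, k, v))
        # Eq. (6): A_j = softmax(sigma_j * K_j^T Q_j), then head_j = V_j A_j.
        attn = torch.softmax(self.sigma * (k.transpose(-2, -1) @ q), dim=-1)
        out = (v @ attn).transpose(1, 2).reshape(B, H * W, C)  # concat heads
        out = self.proj(out)
        # Eq. (7): add the position embedding f_p(V).
        v_img = v.transpose(1, 2).reshape(B, H, W, C).permute(0, 3, 1, 2)
        pos = self.pos_emb(v_img).permute(0, 2, 3, 1).reshape(B, H * W, C)
        return (out + pos).reshape(B, H, W, C)
```

Because the tokens run along the spectral dimension, each head's attention matrix is only \\(d_{h}\\times d_{h}\\), which is precisely the source of the \\(\\frac{2HWC^{2}}{N}\\) term in Eq. (8).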
### Mask-guided Mechanism
The second problem of directly using Transformer for HSI restoration is that original Transformers may attend to some less informative spatial regions with low-fidelity HSI representations. In CASSI, a physical mask is used to modulate the HSIs. Thus, the light transmittance of different positions on the mask varies. As a result, the fidelity of the modulated spectral information is position-sensitive. This observation motivates us to use the mask as a clue to direct the model to attend to regions with high-fidelity HSI representations. In this part, we first analyze the usage of the mask in previous CNN-based methods, and then introduce our Mask-guided Mechanism (MM).
**Previous Mask Usage Scheme.** Previous CNN-based methods [35, 36, 48, 38] mainly conduct an inner product between the initialized HSIs \\(\\mathbf{H}\\) and the mask \\(\\mathbf{M}^{*}\\) to generate a modulated input. This scheme introduces spatial fidelity information but suffers from the following limitations: **(i)** This operation corrupts the input HSI representations, causes information loss, and leads to spatial discontinuity. **(ii)** This scheme only operates at the input. The _guidance effect of the mask_ in directing the network to pay attention to regions with high-fidelity HSI representations is not fully explored. **(iii)** This scheme does not exploit learnable parameters to model the spatial-wise correlations.
**Our MM.** Different from previous methods, our MM preserves all the input HSI representations and learns to direct S-MSA to pay attention to the spatial regions with high-fidelity spectral representations. To be specific, given the mask \\(\\mathbf{M}^{*}\\in\\mathbb{R}^{H\\times W}\\) shown in Fig. 2 (c2), since the modulated HSIs are shifted by the disperser of the CASSI system, we first shift \\(\\mathbf{M}^{*}\\) like the dispersion process:
\\[\\mathbf{M}_{s}(x,y,n_{\\lambda})=\\mathbf{M}^{*}(x,y+d(\\lambda_{n}-\\lambda_{c})), \\tag{9}\\]
where \\(\\mathbf{M}_{s}\\in\\mathbb{R}^{H\\times(W+d(N_{\\lambda}-1))\\times N_{\\lambda}}\\) denotes the shifted version of \\(\\mathbf{M}^{*}\\). The shifted regions that fall outside the range of \\(\\mathbf{M}^{*}\\) along the \\(y\\)-axis are set to 0. Please note that Fig. 2 (c2) shows the MM used in stage 0 of MST. To match the scale of the feature maps in stage \\(i\\) of MST, \\(\\mathbf{M}_{s}\\) needs to pass through the same downsample operations in Fig. 3 (a). Subsequently, \\(\\mathbf{M}_{s}\\) undergoes a _conv1\\(\\times\\)1_ layer and then is input to two paths. The upper path is an identity mapping to retain the original fidelity information. The lower path undergoes a _conv1\\(\\times\\)1_ layer, a depth-wise _conv5\\(\\times\\)5_ layer, a sigmoid activation, and an inner product with the upper path. S-MSA is effective in capturing inter-spectra dependencies but shows limitations in modeling spatial interactions of HSI representations. Thus, the lower path is designed to capture the spatial-wise correlations. Then we have
\\[\\mathbf{M}_{s}^{\\prime}=(\\mathbf{W}_{1}\\mathbf{M}_{s})\\odot(1+\\delta(f_{dw}(\\mathbf{W}_{2}\\mathbf{W}_{1}\\mathbf{M}_{s}))), \\tag{10}\\]
where \\(\\mathbf{W}_{1}\\) and \\(\\mathbf{W}_{2}\\) are the learnable parameters of the two _conv1\\(\\times\\)1_ layers, \\(f_{dw}(\\cdot)\\) denotes the mapping function of the depth-wise _conv5\\(\\times\\)5_ layer, \\(\\delta(\\cdot)\\) represents the sigmoid activation, and \\(\\mathbf{M}_{s}^{\\prime}\\in\\mathbb{R}^{H\\times(W+d(N_{\\lambda}-1))\\times C}\\) denotes the intermediate feature maps. To spatially align the mask attention map with the modulated HSIs \\(\\mathbf{F}^{\\prime}\\) in the CASSI system (Fig. 2 (b)) and the initialized input \\(\\mathbf{H}\\) of MST (Fig. 3 (a)), we reverse the dispersion process and shift back \\(\\mathbf{M}_{s}^{\\prime}\\) to obtain the mask attention map \\(\\mathbf{M}^{\\prime}\\in\\mathbb{R}^{H\\times W\\times C}\\) as
\\[\\mathbf{M}^{\\prime}(x,y,n_{\\lambda})=\\mathbf{M}_{s}^{\\prime}(x,y-d(\\lambda_{n} -\\lambda_{c}),n_{\\lambda}), \\tag{11}\\]
where \\(n_{\\lambda}\\in[1,\\ldots,C]\\) indexes the spectral channels to match the dimensions of \\(\\mathbf{M}_{s}^{\\prime}\\). We reshape \\(\\mathbf{M}^{\\prime}\\) into \\(\\mathbf{M}\\in\\mathbb{R}^{HW\\times C}\\) to match the dimensions of \\(\\mathbf{V}\\). Then we split \\(\\mathbf{M}\\) into \\(N\\)_heads_ in spectral wise: \\(\\mathbf{M}=[\\mathbf{M}_{1},\\ldots,\\mathbf{M}_{N}]\\). For each \\(head_{j}\\), MM conducts its guidance by re-weighting \\(\\mathbf{V}_{j}\\) using \\(\\mathbf{M}_{j}\\in\\mathbb{R}^{HW\\times d_{h}}\\). Hence, when using MM to direct S-MSA, the S-MSA module just needs to make a simple modification by re-formulating \\(head_{j}\\) in Eq. (6):
\\[head_{j}=(\\mathbf{M}_{j}\\odot\\mathbf{V}_{j})\\mathbf{A}_{j}. \\tag{12}\\]
The subsequent steps of S-MSA remain unchanged. By using MM, S-MSA can extract non-corrupted HSI representations, enjoy the guidance of position-sensitive fidelity information, and adaptively model the spatial-wise interactions.
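The mask-attention computation of Eq. (10) can be sketched in PyTorch as follows. The per-channel shift and shift-back of Eqs. (9) and (11) are the same rolling operations as in the CASSI sketch above and are omitted, and the initial \\(N_{\\lambda}\\!\\rightarrow\\!C\\) channel mapping is folded into \\(\\mathbf{W}_{1}\\) for brevity; both are simplifying assumptions of this sketch.

```python
import torch
import torch.nn as nn

class MaskGuidance(nn.Module):
    """Sketch of Eq. (10): turn the shifted mask M_s into an attention map M'
    that re-weights V inside S-MSA per Eq. (12)."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Conv2d(dim, dim, 1, bias=False)             # first conv1x1 (W_1)
        self.w2 = nn.Conv2d(dim, dim, 1, bias=False)             # lower-path conv1x1 (W_2)
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)  # depth-wise conv5x5 (f_dw)

    def forward(self, mask_s):               # mask_s: (B, C, H, W'), shifted per Eq. (9)
        upper = self.w1(mask_s)              # identity path: retains fidelity information
        gate = torch.sigmoid(self.dw(self.w2(upper)))  # lower path: spatial correlations
        return upper * (1.0 + gate)          # Eq. (10)

# After the shift-back of Eq. (11) and splitting into heads, the guidance is a
# simple re-weighting of V inside Eq. (6): head_j = (M_j * V_j) @ A_j, Eq. (12).
```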
## 5 Experiments
### Experimental Settings
Following the settings of TSA-Net [36], we adopt 28 wavelengths from 450 nm to 650 nm derived by spectral interpolation manipulation for HSIs. We perform experiments on both simulation and real HSI datasets.
**Simulation HSI Data.** We use two simulation hyperspectral image datasets, CAVE [41] and KAIST [12]. The CAVE dataset is composed of 32 hyperspectral images at a spatial size of 512\\(\\times\\)512. The KAIST dataset consists of 30 hyperspectral images at a spatial size of 2704\\(\\times\\)3376. Following the schedule of TSA-Net [36], we adopt CAVE as the training set. 10 scenes from KAIST are selected for testing.
**Real HSI Data.** We use the real HSI dataset collected by the CASSI system developed in TSA-Net [36].
**Evaluation Metrics.** We adopt peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [53] as the metrics to evaluate the HSI reconstruction performance.
**Implementation Details.** We implement MST in PyTorch. All the models are trained with the Adam [22] optimizer (\\(\\beta_{1}\\) = 0.9 and \\(\\beta_{2}\\) = 0.999) for 300 epochs. The learning rate is initially set to 4\\(\\times\\)10\\({}^{-4}\\) and is halved every 50 epochs during the training procedure. When conducting experiments on simulation data, patches at a spatial size of 256\\(\\times\\)256 cropped from the 3D cubes are fed into the networks. As for real hyperspectral image reconstruction, the patch size is set to 660\\(\\times\\)660 to match the real-world measurement. The shifting step \\(d\\) in dispersion is set to 2. Thus, the measurement sizes are 256\\(\\times\\)310 and 660\\(\\times\\)714 for the simulation and real HSI datasets. The shifting step in reversed dispersion is \\(d/4^{i},i=0,1,2\\) for the \\(i\\)-th stage of MST. The batch size is 5. Random flipping and rotation are used for data augmentation. The models are trained on one RTX 8000 GPU. The training objective is to minimize the Root Mean Square Error (RMSE) and Spectrum Constancy Loss [62] between the reconstructed and ground-truth HSIs.
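A schematic PyTorch training loop matching the schedule above is sketched below; `MST`, `loader`, `rmse_loss`, and `spectrum_constancy_loss` are placeholders for the model, the data pipeline, and the two training objectives.

```python
import torch

model = MST()                                   # placeholder constructor
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(300):
    for hsi_patches, measurements in loader:    # 256x256 crops, batch size 5
        pred = model(measurements)
        loss = rmse_loss(pred, hsi_patches) + spectrum_constancy_loss(pred, hsi_patches)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                            # halve the learning rate every 50 epochs
```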
### Quantitative Results
We compare our MST with several SOTA HSI reconstruction algorithms, including three model-based methods (TwIST [4], GAP-TV [57], and DeSCI [30]) and six CNN-based methods (\\(\\lambda\\)-net [39], HSSP [49], DNU [50], PnP-DIP-HSI [38], TSA-Net [36], and DGSMP [20]). For fair comparisons, all methods are tested with the same settings as DGSMP [20]. The PSNR and SSIM results of different methods on 10 scenes in the simulation datasets are listed in Tab. 1. The Params and FLOPS (test size = 256\\(\\times\\)256) of open-source CNN-based algorithms are reported in Tab. 2c. It can be observed from these two tables that our MSTs surpass previous methods by a large margin on all 10 scenes while requiring much cheaper memory and computational costs. More specifically, our best model, MST-L, surpasses DGSMP, TSA-Net, and \\(\\lambda\\)-net by 2.55, 3.72, and 6.65 dB while costing 54.0% (2.03 / 3.76), 4.6%, and 3.2% of their Params and 4.4% (28.15 / 646.65), 25.6%, and 23.9% of their FLOPS. Surprisingly, even our smallest model, MST-S, outperforms DGSMP, TSA-Net, PnP-DIP-HSI, DNU, and \\(\\lambda\\)-net by 1.63, 2.80, 3.00, 3.52, and 5.73
dB while requiring 24.7%, 2.1%, 2.7%, 78.2%, and 1.5% Params and 2.0%, 11.8%, 20.1%, 7.9%, and 11.0% FLOPS.
To intuitively show the superiority of our MST, we provide PSNR-Params-FLOPS comparisons of different reconstruction algorithms in Fig. 1. The vertical axis is PSNR (performance), the horizontal axis is FLOPS (computational cost), and the circle radius is Params (memory cost). It can be seen that our MSTs take up the upper-left corner, exhibiting the extreme efficiency advantages of our method.
### Qualitative Results
**Simulation HSI Reconstruction.** Fig. 4 visualizes the reconstructed simulation HSIs of _Scene_ 5 with 4 out of 28 spectral channels, using seven SOTA methods and our MST-L. Please zoom in for a better view. As can be seen from the reconstructed HSIs (right) and the zoomed-in patches of the selected yellow boxes, previous methods are less effective at restoring HSI details. They either yield over-smooth results, sacrificing fine-grained structural contents and textural details, or introduce undesirable chromatic artifacts and blotchy textures. In contrast, our MST-L is more capable of producing perceptually pleasing and sharp images, and preserving the spatial smoothness of the homogeneous regions. This is mainly because our MST-L enjoys the guidance of modulation information and captures the long-range dependencies of different spectral channels. In addition, we plot the spectral density curves (bottom-left) corresponding to the picked region of the green box in the RGB image (top-left). The highest correlation and coincidence between our curve and the ground truth demonstrate the spectral-wise consistency of the restoration achieved by our MST.
**Real HSI Reconstruction.** We further apply our proposed approach to real HSI reconstruction. Similar to [20, 36], we re-train our model (MST-L) on all scenes of CAVE [41] and KAIST [12] datasets. To simulate real imaging situations, we inject 11-bit shot noise into the measurements during training. Visual comparisons are shown in Fig. 5. Our MST-L surpasses previous algorithms in terms of high-frequency structural detail reconstruction and real noise suppression.
### Ablation Study
In this part, we adopt the simulation HSI datasets [12, 41] to conduct ablation studies. The baseline model is derived by removing our S-MSA and MM from MST-S.
**Break-down Ablation.** We firstly conduct a break-down ablation experiment to investigate the effect of each component towards higher performance. The results are listed in Tab. 2a. The baseline model yields 32.29 dB. When we successively apply our S-MSA and MM, the model continuously achieves 0.89 and 1.08 dB improvements. These results suggest the effectiveness of S-MSA and MM.
**Self-Attention Scheme Comparison.** We compare S-MSA with other self-attentions and report the results in Tab. 2b. For fair comparisons, the Params of models using different self-attention schemes are set to the same value (0.70 M). Please note that we downscale the input feature of global MSA [15] to \\(\\frac{1}{4}\\) of its size to avoid running out of memory. The baseline model yields 32.29 dB while costing 0.53 M Params and 7.43 G FLOPS. We respectively apply the global MSA [15], local window-based MSA [31], Swin W-MSA [31], and our S-MSA. The model gains by 0.38, 0.46, 0.57, and 0.89 dB while adding 4.45, 3.64, 3.64, and 2.93 G FLOPS. Our S-MSA yields the most significant improvement but requires the least computational cost. We attribute these results to the HSI characteristics: the spectral representations are spatially sparse and spectrally highly self-similar. Hence, capturing spatial interactions may be less cost-effective than modeling inter-spectra dependencies. This evidence clearly verifies the efficiency superiority of our S-MSA.

Figure 5: Reconstructed real HSI comparisons of _Scene_ 3 with 4 out of 28 spectral channels. Seven SOTA algorithms and our MST-L are included. MST-L reconstructs more detailed contents and suppresses more noise. Please zoom in for better visualization.
In addition, we further conduct visual analysis about different MSAs in Fig. 6. Specifically, we visualize the correlation coefficients between each spectral pair of HSIs reconstructed by models equipped with different MSAs. It can be observed that the correlation coefficient map of the model using our proposed S-MSA is the most similar one to that of the ground truth. These results demonstrate the promising effectiveness of our S-MSA in modeling the inter-spectra similarity and long-range spectral-wise dependencies.
**Mask-guided Mechanism.** We conduct ablation studies to investigate the effect of the previous mask usage scheme described in Sec. 4.3, our MM, and their interaction. The adopted network is the baseline model using S-MSA. The results are reported in Tab. 2d. Method A uses our input setting. Method B exploits the previous scheme that adopts \\(\\mathbf{H}\\odot\\mathbf{M}^{*}\\) as the input. B achieves a limited improvement due to the HSI representation corruption and under-utilization of the mask. Method C uses our MM. C yields the most significant improvement, 1.08 dB, showing the guidance advantage of MM for HSI reconstruction. D exploits both the previous scheme and our MM but degrades by 0.19 dB when compared to method C. This degradation may stem from the loss of some input spectral information.
Additionally, to intuitively show the advantages of MM, we visualize the feature maps of the last MSAB in MST-S. As depicted in Fig. 7, the top row shows the original RGB images. The middle and bottom rows respectively exhibit the feature maps without and with MM. It can be clearly observed that the model without MM generates blurred, distorted, and incomplete feature maps while sacrificing some details, neglecting some scene patches, or introducing unpleasant artifacts. In contrast, the model using our MM pays more accurate and high-fidelity attention to the detailed contents and structural textures of the desired scenes.
## 6 Conclusion
In this paper, we propose an efficient Transformer-based framework, MST, for accurate HSI reconstruction. Motivated by the HSI characteristics, we develop an S-MSA to capture inter-spectra similarity and dependencies. Moreover, we customize an MM module to direct S-MSA to pay attention to spatial regions with high-fidelity HSI representations. With these novel techniques, we establish a series of extremely efficient MST models. Quantitative experiments demonstrate that our method surpasses SOTA algorithms by a large margin, even requiring significantly cheaper Params and FLOPS. Qualitative comparisons show that our MST achieves more visually pleasant reconstructed HSIs.
**Acknowledgements:** This work is partially supported by the NSFC fund (61831014), the Shenzhen Science and Technology Project under Grant (ZDYBH201900000002, CJGJZD20200617102601004), the Westlake Foundation (2021B1501-2), and the funding from Lochn Optics.
Figure 6: Visualization of the correlation coefficients among spectral channels of HSIs reconstructed by models using different MSAs. The correlation coefficient map of the model equipped with our S-MSA is the most similar one to that of the ground truth.
Figure 7: Visual analysis of the feature maps of the last MSAB in MST-S. The top row shows the original RGB images. The middle and bottom rows exhibit the feature maps without and with MM. The model using MM pays more high-fidelity attention to details.
## References
* [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In _ECCV_, 2020.
* [2] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. Vivit: A video vision transformer. _arXiv preprint arXiv:2103.15691_, 2021.
* [3] V. Backman, M. B. Wallace, L. Perelman, J. Arendt, R. Gurjar, M. Muller, Q. Zhang, G. Zonios, E. Kline, and T. McGillican. Detection of preinvasive cancer cells. _Nature_, 2000.
* [4] J.M. Bioucas-Dias and M.A.T. Figueiredo. A new twist: Two-step iterative shrinkage/thresholding algorithms for image restoration. _TIP_, 2007.
* [5] M. Borengasser, W. S. Hungate, and R. Watkins. Hyperspectral remote sensing: principles and applications. _CRC press_, 2007.
* [6] Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, and Donglai Wei. Learning to generate realistic noisy images via pixel-level noise-aware adversarial training. In _NeurIPS_, 2021.
* [7] Yuanhao Cai, Zhicheng Wang, Zhengxiong Luo, Binyi Yin, Angang Du, Haoqian Wang, Xinyu Zhou, Erjin Zhou, Xiangyu Zhang, and Jian Sun. Learning delicate local representations for multi-person pose estimation. _arXiv preprint arXiv:2003.04030_, 2020.
* [8] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-unet: Unet-like pure transformer for medical image segmentation. _arXiv preprint arXiv:2105.05537_, 2021.
* [9] Jiezhang Cao, Yawei Li, Kai Zhang, and Luc Van Gool. Video super-resolution transformer. _arXiv preprint arXiv:2106.06847_, 2021.
* [10] Xun Cao, Tao Yue, Xing Lin, Stephen Lin, Xin Yuan, Qionghai Dai, Lawrence Carin, and David J. Brady. Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world. _IEEE Signal Processing Magazine_, 2016.
* [11] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In _CVPR_, 2021.
* [12] Inchang Choi, MH Kim, D Gutierrez, DS Jeon, and G Nam. High-quality hyperspectral reconstruction using a spectral prior. In _Technical report_, 2017.
* [13] Xiyang Dai, Yinpeng Chen, Jianwei Yang, Pengchuan Zhang, Lu Yuan, and Lei Zhang. Dynamic detr: End-to-end object detection with dynamic attention. In _ICCV_, 2021.
* [14] Zhuo Deng, Yuanhao Cai, Lu Chen, Zheng Gong, Qiqi Bao, Xue Yao, Dong Fang, Shaochong Zhang, and Lan Ma. Rformer: Transformer-based generative adversarial network for real fundus image restoration on a new clinical benchmark. _arXiv preprint arXiv:2201.00466_, 2022.
* [15] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2021.
* [16] Hao Du, Xin Tong, Xun Cao, and Stephen Lin. A prism-based system for multispectral video acquisition. In _ICCV_, 2009.
* [17] Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. Xcit: Cross-covariance image transformers. In _NeurIPS_, 2021.
* [18] Ying Fu, Yinqiang Zheng, Imari Sato, and Yoichi Sato. Exploiting spectral-spatial correlation for coded hyperspectral image restoration. In _CVPR_, 2016.
* [19] Xiaowan Hu, Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Hd-net: High-resolution dual-domain learning for spectral compressive imaging. In _CVPR_, 2022.
* [20] Tao Huang, Weisheng Dong, Xin Yuan, Jinjian Wu, and Guangming Shi. Deep gaussian scale mixture prior for spectral compressive imaging. In _CVPR_, 2021.
* [21] M. H. Kim, T. A. Harvey, D. S. Kittle, H. Rushmeier, R. O. Prum J. Dorsey, and D. J. Brady. 3d imaging spectroscopy for measuring hyperspectral patterns on solid objects. _ACM Transactions on on Graphics_, 2012.
* [22] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In _ICLR_, 2015.
* [23] David Kittle, Kerkil Choi, Ashwin Wagadarikar, and David J Brady. Multiframe image estimation for coded aperture snapshot spectral imagers. _Applied optics_, 2010.
* [24] David Kittle, Kerkil Choi, Ashwin Wagadarikar, and David J. Brady. Multiframe image estimation for coded aperture snapshot spectral imagers. _OSA Applied Optics_, 2010.
* [25] Ke Li, Shijie Wang, Xiang Zhang, Yifan Xu, Weijian Xu, and Zhuowen Tu. Pose recognition with cascade transformers. In _CVPR_, 2021.
* [26] Yanjie Li, Shoukui Zhang, Zhicheng Wang, Sen Yang, Wankou Yang, Shu-Tao Xia, and Erjin Zhou. Tokenpose: Learning keypoint tokens for human pose estimation. In _ICCV_, 2021.
* [27] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In _ICCVW_, 2021.
* [28] Jing Lin, Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Youliang Yan, Xueyi Zou, Henghui Ding, Yulun Zhang, Radu Timofte, and Luc Van Gool. Flow-guided sparse transformer for video deblurring. _arXiv preprint arXiv:2201.01893_, 2022.
* [29] Xing Lin, Yebin Liu, Jiamin Wu, and Qionghai Dai. Spatial-spectral encoded compressive hyperspectral imaging. _TOG_, 2014.
* [30] Yang Liu, Xin Yuan, Jinli Suo, David Brady, and Qionghai Dai. Rank minimization for snapshot compressive imaging. _TPAMI_, 2019.
* [31] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _ICCV_, 2021.
* [32] Patrick Llull, Xuejun Liao, Xin Yuan, Jianbo Yang, David Kittle, Lawrence Carin, Guillermo Sapiro, and David J Brady. Coded aperture compressive temporal imaging. _Optics Express_, 2013.
* [33] Guolan Lu and Baowei Fei. Medical hyperspectral imaging: a review. _Journal of Biomedical Optics_, 2014.
* [34] Farid Melgani and Lorenzo Bruzzone. Classification of hyperspectral remote sensing images with support vector machines. _IEEE Transactions on Geoscience and Remote Sensing_, 2004.
* [35] Ziyi Meng, Shirin Jalali, and Xin Yuan. Gap-net for snapshot compressive imaging. _arXiv preprint arXiv:2012.08364_, 2020.
* [36] Ziyi Meng, Jiawei Ma, and Xin Yuan. End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In _ECCV_, 2020.
* [37] Ziyi Meng, Mu Qiao, Jiawei Ma, Zhenming Yu, Kun Xu, and Xin Yuan. Snapshot multispectral endomicroscopy. _Optics Letters_, 2020.
* [38] Ziyi Meng, Zhenming Yu, Kun Xu, and Xin Yuan. Self-supervised neural networks for spectral snapshot compressive imaging. In _ICCV_, 2021.
* [39] Xin Miao, Xin Yuan, Yunchen Pu, and Vassilis Athitsos. l-net: Reconstruct hyperspectral images from a snapshot measurement. In _ICCV_, 2019.
* [40] Z. Pan, G. Healey, M. Prasad, and B. Tromberg. Face recognition in hyperspectral images. _TPAMI_, 2003.
* [41] Jong-Il Park, Moon-Hyun Lee, Michael D. Grossberg, and Shree K. Nayar. Multispectral imaging using multiplexed illumination. In _ICCV_, 2007.
* [42] Olaf Ronneberger, Philipp Fischer, Thomas Brox, a, and b. U-net: Convolutional networks for biomedical image segmentation. In _MICCAI_, 2015.
* [43] Jin Tan, Yanting Ma, Hoover Rueda, Dror Baron, and Gonzalo R. Arce. Compressive hyperspectral imaging via approximate message passing. _IEEE Journal of Selected Topics in Signal Processing_, 2016.
* [44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, 2017.
* [45] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady. Single disperser design for coded aperture snapshot spectral imaging. _Applied Optics_, 2008.
* [46] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady. Single disperser design for coded aperture snapshot spectral imaging. _Applied optics_, 2008.
* [47] Ashwin A Wagadarikar, Nikos P Pitsianis, Xiaobai Sun, and David J Brady. Video rate spectral imaging using a coded aperture snapshot spectral imager. _Optics Express_, 2009.
* [48] Jiamian Wang, Yulun Zhang, Xin Yuan, Yun Fu, and Zhiqiang Tao. A new backbone for hyperspectral image reconstruction. _arXiv preprint arXiv:2108.07739_, 2021.
* [49] Lizhi Wang, Chen Sun, Ying Fu, Min H. Kim, and Hua Huang. Hyperspectral image reconstruction using a deep spatial-spectral prior. In _CVPR_, 2019.
* [50] Lizhi Wang, Chen Sun, Maoqing Zhang, Ying Fu, and Hua Huang. Dnu: Deep non-local unrolling for computational spectral imaging. In _CVPR_, 2020.
* [51] Lizhi Wang, Zhiwei Xiong, Dahua Gao, Guangming Shi, and Feng Wu. Dual-camera design for coded aperture snapshot spectral imaging. _OSA Applied Optics_, 2015.
* [52] Lizhi Wang, Zhiwei Xiong, Guangming Shi, Feng Wu, and Wenjun Zeng. Adaptive nonlocal sparse representation for dual-camera compressive hyperspectral imaging. _TPAMI_, 2016.
* [53] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncell. Image quality assessment: from error visibility to structural similarity. _TIP_, 2004.
* [54] Zhendong Wang, Xiaodong Cun, Jianmin Bao, and Jianzhuang Liu. Uformer: A general u-shaped transformer for image restoration. _arXiv preprint 2106.03106_, 2021.
* [55] Bichen Wu, Chenfeng Xu, Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Zhicheng Yan, Masayoshi Tomizuka, Joseph Gonzalez, Kurt Keutzer, and Peter Vajda. Visual transformers: Token-based image representation and processing for computer vision. _arXiv preprint arXiv:2006.03677_, 2020.
* [56] Sen Yang, Zhibin Quan, Mu Nie, and Wankou Yang. Transpose: Keypoint localization via transformer. In _ICCV_, 2021.
* [57] Xin Yuan. Generalized alternating projection based total variation minimization for compressive sensing. In _ICIP_, 2016.
* [58] Xin Yuan, David J Brady, and Aggelos K Katsaggelos. Snapshot compressive imaging: Theory, algorithms, and applications. _IEEE Signal Processing Magazine_, 2021.
* [59] Yuan Yuan, Xiangtao Zheng, and Xiaoqiang Lu. Hyperspectral image superresolution by transfer learning. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2017.
* [60] Nicolas ZCarion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. Rend-to-end object detection with transformers. In _ECCV_, 2020.
* [61] Shipeng Zhang, Lizhi Wang, Ying Fu, Xiaoming Zhong, and Hua Huang. Computational hyperspectral imaging based on dimension-discriminative low-rank tensor recovery. In _ICCV_, 2019.
* [62] Yuanyuan Zhao, Hui Guo, Zhan Ma, Xun Cao, Tao Yue, and Xuemei Hu. Hyperspectral imaging with random printed mask. In _CVPR_, 2019.
* [63] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, and Li Zhang. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In _CVPR_, 2021.
* [64] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. In _ICLR_, 2021. | Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement in the coded aperture snapshot spectral imaging (CASSI) system. The HSI representations are highly similar and correlated across the spectral dimension. Modeling the inter-spectra interactions is beneficial for HSI reconstruction. However, existing CNN-based methods show limitations in capturing spectral-wise similarity and long-range dependencies. Besides, the HSI information is modulated by a coded aperture (physical mask) in CASSI. Nonetheless, current algorithms have not fully explored the guidance effect of the mask for HSI restoration. In this paper, we propose a novel framework, Mask-guided Spectral-wise Transformer (MST), for HSI reconstruction. Specifically, we present a Spectral-wise Multi-head Self-Attention (S-MSA) that treats each spectral feature as a token and calculates self-attention along the spectral dimension. In addition, we customize a Mask-guided Mechanism (MM) that directs S-MSA to pay attention to spatial regions with high-fidelity spectral representations. Extensive experiments show that our MST significantly outperforms state-of-the-art (SOTA) methods on simulation and real HSI datasets while requiring dramatically cheaper computational and memory costs. | Provide a brief summary of the text. | 261 |
arxiv-format/1302_1220v1.md | # A Dark Energy Model interacting with Dark Matter described by an effective EoS.
Martiros Khurshudyan
CNR NANO Research Center S3, Via Campi 213a, 41125 Modena MO, Italy
and
Dipartimento di Scienze Fisiche, Informatiche e Matematiche,
Universita degli Studi di Modena e Reggio Emilia, Modena, Italy
email:[email protected]
## Introduction
Statefinder diagnostics for the Emergent, Intermediate and Logamediate scenarios showed that an interaction of the form \\(Q=3Hb\\rho_{m}\\) between a barotropic fluid and a dark energy model based on the Generalized Uncertainty Principle (GUP, whose origins lie in string theory) makes it possible to cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM [7]1. In this article we consider the interaction of GUP dark energy with a dark matter component described by the effective EoS
Footnote 1: Readers are kindly requested to consult the references of that article for self-consistent background information
\\[P=(\\gamma-1)\\rho+p_{0}+\\omega_{H}H+\\omega_{H2}H^{2}+\\omega_{dH}\\dot{H} \\tag{1}\\]
proposed in [1]-[3]. We suppose that such a modification could arise as a result of an interaction between the \\(P=(\\gamma-1)\\rho\\) fluid and the dark sector of the Universe. Given this modification, we further suppose that there is an interaction between the GUP dark energy and the modified fluid. Two different forms of interaction are taken into account: the first, called sign-changeable, includes the deceleration parameter in order to allow the sign of the interaction term to change during the evolution of the Universe; the second is an interaction of the ordinary form, which keeps the same sign from the early epoch to the late stages. The first type of interaction is described by \\(Q=q(3Hb\\rho_{m}+\\beta\\dot{\\rho}_{m})\\) [4], [5], and the second type reads \\(Q=3Hb\\rho_{m}+\\beta\\dot{\\rho}_{m}\\). In both cases we assume that the interaction depends only on the matter energy density and its first-order derivative. Questions concerning the nature of the interaction term, which appears in cosmology purely on mathematical grounds, remain open. The dark energy density based on GUP, described by the parameters \\((n,\\xi)\\), takes the form
\\[\\rho_{G}=\\frac{3n^{2}m_{p}^{2}}{\\eta^{2}}+\\frac{3\\xi^{2}}{\\eta^{4}}, \\tag{2}\\]
where \\(\\eta\\) is the conformal time and reads as
\\[\\eta=\\int\\frac{dt}{a(t)}. \\tag{3}\\]
Hereafter, in the main body of the article we present results for the Logamediate scenario: \\(a(t)=\\exp{(\\mu(\\ln{t})^{\\alpha})}\\), with \\(\\mu\\alpha>0,\\alpha>1\\). In the appendix we present results for \\(a(t)=a_{0}(B+e^{At})^{m}\\), with \\(a_{0}>0,A>0,B>0,m>1\\), known as the Emergent scenario, and for \\(a(t)=\\exp(\\lambda t^{\\beta})\\), with \\(\\lambda>0,0<\\beta<1\\), called the Intermediate scenario. We also perform the analysis for a scale factor taken from [6], for which the evolution of \\(\\ddot{a}\\) over time is presented in Fig. 1; with this scale factor the conformal time behaves as shown in Fig. 2.
The parameters of interest are the statefinder pair, where \\(r\\) is also known as the jerk parameter,
\\[r=\\frac{1}{H^{3}}\\frac{\\dddot{a}}{a},\\qquad s=\\frac{r-1}{3\\left(q-\\frac{1}{2}\\right)}. \\tag{4}\\]
and the EoS parameter \\(\\omega_{tot}\\) of the mixture.
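As an illustration of Eq. (4), the statefinder pair can be evaluated symbolically for the Logamediate scale factor. This is a minimal sketch; the values of \\(\\mu\\), \\(\\alpha\\) and the sample times are illustrative choices of ours.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
mu, alpha = sp.Rational(11, 5), 2             # example values; mu*alpha > 0, alpha > 1

a = sp.exp(mu * sp.log(t)**alpha)             # Logamediate scale factor
H = sp.diff(a, t) / a                         # Hubble parameter H = adot/a
q = -sp.diff(a, t, 2) * a / sp.diff(a, t)**2  # deceleration parameter
r = sp.diff(a, t, 3) / (a * H**3)             # statefinder r (jerk), Eq. (4)
s = (r - 1) / (3 * (q - sp.Rational(1, 2)))   # statefinder s, Eq. (4)

for T in (2, 10, 100):
    print(f"t={T:>4}: r={float(r.subs(t, T)):+.4f}, s={float(s.subs(t, T)):+.4f}")
```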
The paper is organized as follows: in the next section we introduce the equations governing our model. We then obtain and investigate the parameters of the model for each type of interaction. In the last section we summarize the results concerning the model. In the appendix, the results for the Emergent and Intermediate scenarios are presented in order.
### The field equations and the Model
The field equations that govern our model read
\\[R^{\\mu\
u}-\\frac{1}{2}g^{\\mu\
u}R_{\\alpha}^{\\alpha}=T^{\\mu\
u}, \\tag{5}\\]
which, by means of the FRW metric
\\[ds^{2}=dt^{2}-a(t)^{2}\\left(dr^{2}+r^{2}d\\theta^{2}+r^{2}\\sin^{2}\\theta d\\phi^{2}\\right), \\tag{6}\\]
is reduced to
\\[H^{2}=\\frac{\\dot{a}^{2}}{a^{2}}=\\frac{\\rho_{tot}}{3}, \\tag{7}\\]
\\[-\\frac{\\ddot{a}}{a}=\\frac{1}{6}(\\rho_{tot}+P_{tot}), \\tag{8}\\]
with the Bianchi identities implying that
\\[\\dot{\\rho}_{tot}+3\\frac{\\dot{a}}{a}(\\rho_{tot}+P_{tot})=0. \\tag{9}\\]
The mixture under consideration is described by
\\[\\rho_{tot}=\\rho_{m}+\\rho_{G}. \\tag{10}\\]
\\[P_{tot}=P_{m}+P_{G}. \\tag{11}\\]
In the case of interaction between the components, (9) splits into the following two equations
\\[\\dot{\\rho}_{m}+3H(\\rho_{m}+P_{m})=-Q \\tag{12}\\]
Figure 2: Conformal time \\(\\eta\\) against t

\\[\\dot{\\rho}_{G}+3H(\\rho_{G}+P_{G})=Q, \\tag{13}\\]
where Q is the interaction term introduced above. Combining (12) with (1) allows us to obtain \\(\\rho_{m}\\). With \\(\\rho_{m}\\) and \\(\\rho_{G}\\) at hand, the Hubble parameter as a function of \\(t\\), accounting for the interaction under consideration, is obtained from (7). Having \\(\\rho_{m}\\), \\(\\rho_{G}\\) and \\(H(t)\\), the pressure \\(P_{G}\\) in the interacting case follows immediately from (8):
\\[P_{G}=-2\\dot{H}-(P_{m}+\\rho_{m})-\\rho_{G}. \\tag{14}\\]
Having calculated all these quantities, we are able to investigate the parameters discussed above; a numerical sketch of this pipeline is given below.
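The following is a minimal numerical sketch of the pipeline for the Logamediate scale factor and the ordinary interaction \\(Q=3bH\\rho_{m}+\\beta\\dot{\\rho}_{m}\\). The parameter values, the initial condition for \\(\\rho_{m}\\), and the lower limit of the conformal-time integral are illustrative choices of ours, not values fixed by the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Illustrative parameters (ours), matching the ranges used in the figures.
mu, alpha = 2.2, 2.0                       # Logamediate scale factor
b, beta = 0.5, 0.2                         # interaction constants
gamma, p0 = 1.5, -0.5                      # EoS (1) parameters
wH, wH2, wdH = -1.5, -0.6, -0.3
n, xi, mp = 1.0, 1.0, 1.0                  # GUP parameters (units with m_p = 1)

a  = lambda t: np.exp(mu * np.log(t)**alpha)
H  = lambda t: mu * alpha * np.log(t)**(alpha - 1) / t
Hd = lambda t: (mu*alpha*(alpha-1)*np.log(t)**(alpha-2)
                - mu*alpha*np.log(t)**(alpha-1)) / t**2    # dH/dt

def eta(t):                                # conformal time, Eq. (3); lower limit is a choice
    return quad(lambda u: 1.0 / a(u), 1.0, t)[0]

def rho_G(t):                              # GUP dark energy density, Eq. (2)
    e = eta(t)
    return 3*n**2*mp**2/e**2 + 3*xi**2/e**4

def P_m(rho_m, t):                         # effective matter EoS, Eq. (1)
    return (gamma-1)*rho_m + p0 + wH*H(t) + wH2*H(t)**2 + wdH*Hd(t)

def drho_m(t, y):                          # Eq. (12) with Q = 3bH rho_m + beta drho_m
    rho_m = y[0]
    return [(-3*H(t)*(rho_m + P_m(rho_m, t)) - 3*b*H(t)*rho_m) / (1 + beta)]

ts = np.linspace(2.0, 20.0, 200)
sol = solve_ivp(drho_m, (ts[0], ts[-1]), [1.0], t_eval=ts, rtol=1e-8)
rm = sol.y[0]
rg = np.array([rho_G(t) for t in ts])
pg = -2*Hd(ts) - (P_m(rm, ts) + rm) - rg   # Eq. (14)
w_tot = (P_m(rm, ts) + pg) / (rm + rg)     # EoS parameter of the mixture
print(f"omega_tot at t={ts[-1]:.0f}: {w_tot[-1]:.4f}")
```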
### Interaction \\(Q=3bH\\rho_{m}+\\beta\\dot{\\rho}_{m}\\)
In the Logamediate scenario the conformal time reads
\\[\\eta=\\int\\frac{dt}{\\exp(\\mu(\\ln t)^{\\alpha})}. \\tag{15}\\]
For \\(\\alpha=2\\), (15) evaluates to
\\[\\eta=\\frac{e^{\\frac{1}{4\\mu}}\\sqrt{\\pi}Erf[\\frac{-1+2\\mu\\ln t}{2\\sqrt{\\mu}}]} {2\\sqrt{\\mu}}, \\tag{16}\\]
where
\\[Erf(z)=\\frac{2}{\\sqrt{\\pi}}\\int_{0}^{z}e^{-t^{2}}\\ dt. \\tag{17}\\]
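As a quick consistency check, the closed form (16) can be compared against a direct numerical quadrature of (15). This is a minimal sketch with an illustrative value of \\(\\mu\\); the additive constant of (16) cancels in the difference.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

mu = 2.2  # illustrative value

def eta_closed(t):
    """Closed form (16), defined up to an additive constant."""
    return (np.exp(1/(4*mu)) * np.sqrt(np.pi)
            * erf((2*mu*np.log(t) - 1) / (2*np.sqrt(mu))) / (2*np.sqrt(mu)))

t1, t2 = 2.0, 50.0
numeric = quad(lambda t: np.exp(-mu * np.log(t)**2), t1, t2)[0]  # integral of dt / a(t)
analytic = eta_closed(t2) - eta_closed(t1)                        # constant cancels
print(f"quadrature: {numeric:.10f}, closed form: {analytic:.10f}")
```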
For simplicity we omit the explicit expressions for the dark matter energy density, the pressure, etc., and proceed directly to the graphical analysis of the model. We consider the \\(\\alpha=2\\) case in order to deal with analytical solutions; otherwise, a numerical analysis can be performed easily. We start with \\(\\omega_{tot}\\) (Fig. 3). We observe that in the early epoch \\(\\omega_{tot}<-1\\), indicating phantom-like behavior; then, during the evolution, a transition occurs to quintessence-like behavior with \\(\\omega_{tot}>-1\\), which in turn, at the late stages of the evolution, settles to a cosmological constant with \\(\\omega_{tot}=-1\\). Statefinder diagnostics reveal that we cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM, twice. Departing from the section with positive \\(s\\) and positive \\(r\\), corresponding to the radiation phase of the Universe, we cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM, and return to the radiation phase at the late stage of the Universe (Fig. 4). This behavior depends on the value of the \\(\\beta\\) parameter present in the interaction \\(Q\\), which quantifies the impact of \\(\\dot{\\rho}_{m}\\) on the interaction 2.
Footnote 2: \\(\\mu=2.2\\), \\(b=0.5\\), \\(\\beta=0.2\\), \\(\\gamma=1.5\\), \\(p_{0}=-0.5\\), \\(\\omega_{H}=-1.5\\), \\(\\omega_{H2}=-0.6\\), \\(\\omega_{dH}=-0.3\\)
### Interaction \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\)
In this section we consider the sign-changeable interaction between the components, which involves the deceleration parameter \\(q\\). We present a graphical analysis of some parameters of the model; the quantities of interest were found in the same mathematical way as in the previous case. For the EoS parameter \\(\\omega_{tot}\\) we found that in the early stages of the evolution it shows quintessence-like behavior with \\(\\omega_{tot}>-1\\); then there is a transition to phantom-like behavior with \\(\\omega_{tot}<-1\\), which is preserved up to the late stages of the evolution (Fig. 5). The statefinder diagnostics are presented in Fig. 6 3.
Footnote 3: \\(\\mu=2.2\\), \\(b=0.5\\), \\(\\beta=0.2\\), \\(\\gamma=1.5\\), \\(p_{0}=-0.5\\), \\(\\omega_{H}=-1.5\\), \\(\\omega_{H2}=-0.6\\), \\(\\omega_{dH}=-0.3\\)
Figure 4: Logamediate Scenario: \\(r\\times 10^{-1}\\) against s, \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\)
### Scale factor 2
Given the scale factor profile whose \\(\\ddot{a}\\) evolution over time is presented in Fig. 1, we perform a numerical analysis in order to reveal the behavior of the parameters under consideration. With the \\(Q=3bH\\rho_{m}+\\beta\\dot{\\rho}_{m}\\) interaction we observe that when \\(p_{0}=\\omega_{H}=\\omega_{H2}=\\omega_{dH}=0\\), \\(\\omega_{tot}=-1\\) during the whole evolution of the Universe. Then, for instance, when \\(p_{0}=-1.5,\\ \\omega_{H}=-1,\\ \\omega_{dH}=-0.5\\) and \\(\\omega_{H2}=-0.65\\), we observe that \\(\\omega_{tot}\\leq-1\\) throughout the whole evolution (Fig. 7). In the case of positive \\(p_{0}=1.5\\) we observe that in the early epoch \\(\\omega_{tot}>0\\); then there is a stage with \\(\\omega_{tot}>-1\\), indicating quintessence-like behavior; finally, at late times \\(\\omega_{tot}\\leq-1\\) indicates phantom-like behavior (Fig. 8). Statefinder diagnostics show that a crossing of \\(\\{r=1,s=0\\}\\) is possible (Fig. 9). Taking into account the sign-changeable interaction between the components of the mixture, we observe that in the case \\(p_{0}=\\omega_{H}=\\omega_{H2}=\\omega_{dH}=0\\) the EoS parameter satisfies \\(\\omega_{tot}\\geq-1\\) during the evolution (Fig. 10); the statefinder diagnostics for this case are presented in Fig. 11. When the parameters differ from \\(0\\), we observe that with \\(p_{0}=1.5\\) the Universe starts with \\(\\omega_{tot}>-1\\), then continues its evolution into \\(\\omega_{tot}<-1\\); finally, at late times \\(\\omega_{tot}=-1\\) (Fig. 12). With \\(p_{0}=-1.5\\) the EoS parameter behaves as in Fig. 13.
Figure 8: Scale factor 2. \\(\\omega_{tot}\\) against t. Interaction: \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\), \\(p_{0}=1.5\\)
Figure 10: Scale factor 2. \\(\\omega_{tot}\\) against t. Interaction: \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\)
Figure 9: Scale factor 2. \\(r\\times 10^{-1}\\) against s, Interaction: \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\)
Figure 12: Scale factor 2. \\(\\omega_{tot}\\) against t. Interaction: \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\), \\(p_{0}=1.5\\)
Figure 13: Scale factor 2. \\(\\omega_{tot}\\) against t. Interaction: \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\), \\(p_{0}=-1.5\\)
## Discussion
Two different types of interaction, \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\) and \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\), between GUP dark energy and a fluid with a modified EoS were considered. In the case of \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\) we observe a phantom-quintessence-cosmological constant evolution for \\(\\omega_{tot}\\). Statefinder diagnostics reveal that we cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM, twice, depending on the \\(\\beta\\) parameter present in the interaction \\(Q\\), which quantifies the impact of \\(\\dot{\\rho}_{m}\\) on the interaction. Departing from the section with positive \\(s\\) and positive \\(r\\), corresponding to the radiation phase of the Universe, we cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM, and return to the radiation phase at the late stage of the Universe (Fig. 4). For the interaction \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\), the EoS parameter \\(\\omega_{tot}\\) shows a quintessence-phantom transition (Fig. 5); here we are also able to cross \\(\\{r=1,s=0\\}\\), corresponding to \\(\\Lambda\\)CDM. In the second part of the article we considered a scale factor for which the profile of \\(\\ddot{a}\\) is given. We observe interesting behavior of the model. For instance, the \\(Q=3bH\\rho_{m}+\\beta\\dot{\\rho}_{m}\\) interaction shows that when \\(p_{0}=\\omega_{H}=\\omega_{H2}=\\omega_{dH}=0\\), \\(\\omega_{tot}=-1\\) during the whole evolution of the Universe. Then, with the parameters \\(p_{0}=-1.5,\\ \\omega_{H}=-1,\\ \\omega_{dH}=-0.5\\) and \\(\\omega_{H2}=-0.65\\), we saw that \\(\\omega_{tot}\\leq-1\\) throughout the whole evolution (Fig. 7). In the case of positive \\(p_{0}=1.5\\) we observe that in the early epoch \\(\\omega_{tot}>0\\); then there is a stage with \\(\\omega_{tot}>-1\\), indicating quintessence-like behavior; finally, at late times \\(\\omega_{tot}\\leq-1\\) indicates phantom-like behavior (Fig. 8). Statefinder diagnostics show that the crossing of \\(\\{r=1,s=0\\}\\) is possible (Fig. 9). Taking into account the sign-changeable interaction between the components of the mixture, we observe that in the case \\(p_{0}=\\omega_{H}=\\omega_{H2}=\\omega_{dH}=0\\) the EoS parameter satisfies \\(\\omega_{tot}\\geq-1\\) during the evolution (Fig. 10); the statefinder diagnostics for this case are presented in Fig. 11. When the parameters differ from \\(0\\), we observe that with \\(p_{0}=1.5\\) the Universe starts with \\(\\omega_{tot}>-1\\), then evolves into \\(\\omega_{tot}<-1\\); finally, at late times \\(\\omega_{tot}=-1\\) (Fig. 12). With \\(p_{0}=-1.5\\) the EoS parameter behaves as in Fig. 13. To fix realistic values of the parameters of the considered model, a comparison with experimental data should be performed, which will be done in forthcoming articles. In all cases we see that at the late stages of the evolution the mixture behaves as a cosmological constant. In the appendix we present the results for the Emergent and Intermediate scenarios for both types of interaction.
## Acknowledgments
This research activity has been supported by EU fonds in the frame of the program FP7-Marie Curie Initial Training Network INDEX NO.289968.
## References
* [1] M. Khurshudyan, A Matter with an effective EoS interacting with a tachyonic field in an accelerating Universe, arXiv:1301.0005v2.
* [2] S. Nojiri and S. D. Odintsov, Inhomogeneous equation of state of the universe: Phantom era, future singularity and crossing the phantom barrier, Phys. Rev. D 72 (2005) 023003, e-Print: hep-th/0505215; S. Capozziello, V. F. Cardone, E. Elizalde, S. Nojiri and S. D. Odintsov, Observational constraints on dark energy with generalized equations of state, Phys. Rev. D 73 (2006) 043512, e-Print: astro-ph/0508350.
* [3] J. Ren and X.-H. Meng, Modified equation of state, scalar field, and bulk viscosity in Friedmann Universe, arXiv:astro-ph/0602462v2.
* [4] H. Wei, Cosmological Constraints on the Sign-Changeable Interactions, Commun. Theor. Phys. 56 (2011) 972-980; H. Wei, Nucl. Phys. B 845 (2011) 381.
* [5] M. Khurshudyan, Interaction between Generalized Varying Chaplygin gas and Tachyonic Fluid, arXiv:1301.1021v2.
* [6] G. M. Kremer, Cosmological models described by a mixture of van der Waals fluid and dark energy, Phys. Rev. D 68, 123507, 2003.
* [7] R. Ghosh, S. Chattopadhyay and U. Debnath, A dark energy model with generalized uncertainty principle in an emergent, intermediate and logamediate scenarios of the universe, Int. J. Theor. Phys. (2012) 51:589-603.
Figure 21: Emergent Scenario: \\(\\omega_{tot}\\) against t, Interaction: \\(Q=q(3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m})\\)
Figure 19: Emergent Scenario: \\(\\omega_{tot}\\) against t, Interaction: \\(Q=3bH\\rho_{m}+\\gamma\\dot{\\rho}_{m}\\) | In this latter author would like to consider interaction between a dark energy based on Generalized Uncertainty Principle (GUP) and a Dark Matter described by effective EoS: \\(P=(\\gamma-1)\\rho+p_{0}+\\omega_{H}H+\\omega_{H2}H^{2}+\\omega_{dH}\\dot{H}\\)[1]-[3], which could be interpreted as a modification concerning to the some interaction between fluid \\(P=(\\gamma-1)\\rho\\) with different components of the Darkness of the Universe. Two types of interaction, called sign-changeable, \\(Q=q(3Hb\\rho_{m}+\\beta\\dot{\\rho}_{m})\\)[4], [5] and \\(Q=3Hb\\rho_{m}+\\beta\\dot{\\rho}_{m}\\) are considered. EoS parameter of the mixture \\(\\omega_{tot}\\) are investigated. Statefinder diagnostics provided also. | Condense the content of the following passage. | 214 |
arxiv-format/2009_08337v2.md | # Histopathology for Mohs Micrographic Surgery with Photoacoustic Remote Sensing Microscopy
Benjamin R. Ecclestone
1_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 1_
Kevan Bell
1_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 1_
Saad Abbasi
1_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 1_
Deepak Dinakaran
2_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 2_
Muba Taher
3_Department of Oncology, University of Alberta, 8440 112 St. NW, T6G 2R7, Edmonton, Alberta, Canada 3_
John R. Mackey
2_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 3_
Parsin Haji Reza
1_PhotoMedicine Labs, Department of Systems Design Engineering, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada 1_
## 1 Introduction
Mohs micrographic surgery (MMS) is the gold standard precision surgical technique for treating contiguous invading skin cancers in cosmetically and functionally important areas [1]. MMS excision of nonmelanoma skin cancers (NMSC) represents one of the most common procedures in the United States. Around 25% of the 3.5 million NMSC cases diagnosed each year are treated with this procedure [2,3]. In recent years NMSC incidence has risen dramatically, straining the global capacity to provide MMS [2]. For the two most common NMSCs, basal cell carcinomas (BCC) and squamous cell carcinomas (SCC), MMS achieves a five-year cure rate of nearly 99% [4,5]. For high risk nonmelanoma lesions, MMS achieves higher cure rates than wide local excision [6-8].
During MMS the surgeon repeatedly excises thin tissue layers which then undergo intraoperative histopathological analysis to identify regions of invasion at the margins. Each layer of tissue will be about 5 mm thick and will aim to capture a 2 to 3 mm margin around the tumor [9]. These excised tissue samples then undergo frozen histopathology. Standard frozen histology consists of embedding the sample into a cutting substrate, then cooling the sample and substrate to approximately -25\\({}^{\\circ}\\)C in a cryostat. Once frozen, the inner surface of the sample is sectioned via cryotome into 5-10-micron slices and placed onto a microscope slide. These slides are then dyed with histochemical stains to provide contrast for microscopic assessment. In contrast to other surgical techniques, in MMS the entire deep and peripheral margin of the excised tissue undergoes pathological analysis. By assessing the entire surgical margin in this manner, the surgeon is able to identify specific invasive regions of malignant cells, which are then targeted during the next excision. This process of layer-by-layer excision with interim histopathological analysis is repeated until the entire invasive tumor has been removed [9].
The use of intraoperative frozen histology means operating times for MMS can exceed several hours, depending on the number of excisions required [10]. Most of this time is consumed by tissue processing as described, which requires 20 to 60 minutes per individual tissue layer, while the excision and pathologic assessment are relatively rapid [9]. Consequently, the rate limiting step in the MMS process is generating the tissue suitable for transmission light microscopic interpretation. The MMS process could be accelerated greatly if tissue processing could be circumvented, and unstained tissues visualized directly. Ideally, images so obtained would be visually similar to standard histochemical staining images. Furthermore, this would also preserve sample integrity permitting re-examination and additional immunochemistry analysis.
Direct histological imaging of unstained tissue sections presents unique challenges compared to pathological analysis of stained frozen preparations. Nonetheless, a variety of imaging methods have demonstrated histology-like imaging in MMS excisions. Prominent examples include microscopy with ultraviolet surface excitation (MUSE) [11], multiphoton fluorescence microscopy (MPM) [12,13], Raman spectroscopy [14-16], photoacoustic microscopy (PAM) [17,18], and optical coherence tomography (OCT) [19-21], each of which has been explored for intraoperative histology during MMS. While MUSE and MPM have shown promising histology-like images, they require exogenous dyes for contrast. Staining tissues prior to imaging reintroduces many of the tissue preparation issues experienced by frozen histology as staining can be resource intensive and introduces potential for variability.
Only Raman spectroscopy, PAM and OCT have demonstrated label-free histology-like imaging of MMS specimens [14-21]. Unfortunately, recent works on MMS histology with PAM [17,18] and Raman spectroscopy [14-16] feature inferior resolutions compared to conventional optical microscopy. Though PAM [22,23] and Raman spectroscopy [24] may provide subcellular resolution, this has not been applied in MMS. Hence, the current embodiments [14-18] are inadequate for precisely locating small and subtle regions of malignant cells. Of the presented methods, OCT is the only one which provides subcellular resolution and label-free contrast. However, the optical scattering contrast in OCT does not provide the specificity necessary to match current pathology standards [19-21]. To provide visualizations reminiscent of current H&E staining, OCT systems must use external image processing techniques. Therefore, there remains a pressing need for an accurate intraoperative label-free histopathological microscopy technique capable of imaging large areas of tissue while also providing subcellular resolution to expedite MMS procedures.
Photoacoustic Remote Sensing (PARS\\({}^{TM}\\)) microscopy has recently emerged as an all-optical non-contact label-free reflection-mode imaging modality [25-27]. Like other photoacoustic techniques, PARS captures endogenous optical absorption contrast visualizing a wide range of biological chromophores including hemoglobin, lipids, and DNA. A pulsed excitation laser is used to deposit optical energy into the sample. As the target chromophore absorbs the excitation pulse, it undergoes thermo-elastic expansion proportional to the absorbed excitation energy. The expansion induces nanosecond scale modulations in the local refractive index of the sample. In PARS, this effect is observed as back-reflected intensity variations of a second co-focused continuous-wave detection laser. In this way, PARS microscopy visualizes endogenous absorption contrast in an all-optical label-free reflection-mode architecture [25-27]. Previously, our group has shown promising histology-like imaging capabilities by utilizing ultraviolet (UV) excitation to primarily target the absorption contrast of DNA [26,27]. Accentuating this contrast generates visualizations reminiscent of immunohistochemical staining of cell nuclei. In recent works, PARS histological imaging has been applied in thick tissue samples (>2 mm) including freshly resected tissues and formalin fixed paraffin embedded tissue preparations [26,27,28].
In this work, we present a PARS system for rapid label-free histological imaging of unprocessed MMS sections. By leveraging recent technical improvements of the PARS system, we have expanded the histological imaging capabilities to match the scanning area, resolution and imaging speed required for MMS. We show the first non-contact photoacoustic microscopy of Mohs excisions. Unstained MMS excisions are imaged with the PARS microscope, then stained and imaged with a brightfield microscope. Using this strategy, we provide the first true one-to-one comparison between PARS microscopy and normal histopathological imaging. Wide field of view grossing scans capture entire MMS sections (>1 cm\\({}^{2}\\) area) with sufficient resolution to recover subcellular diagnostic characteristics. Concurrently, smaller high-resolution images give close-ups of clinically relevant regions. These small fields provide ~300 nm optical resolution, enabling morphological assessment of single nuclei. Thus, the proposed PARS system provides both the grossing scan capabilities and the high spatial resolution required to assess tissues during MMS. Compared to frozen histology, the presented PARS microscope, even without optimized scanning hardware, can image an entire MMS excision in under 12 minutes, about 60% of the time required to prepare a slide for brightfield histological assessment. Applied in a clinical setting, this device may circumvent the need for histopathological processing of tissue. Ideally, the PARS system could be implemented into standard histopathological workflows without affecting current techniques. Thus, PARS is well positioned to supplement existing intraoperative tissue analysis techniques, potentially reducing the time for each MMS operative cycle, streamlining the MMS process, and thereby increasing MMS capacity.
## 2 Methods
The proposed imaging system is shown in Figure 1. A 400 ps pulsed laser (WEDGE XF, Bright Solutions) was selected to provide 266 nm UV excitation. Residual 532 nm output is removed from the excitation beam with a CaF2 prism (PS862, Thorlabs). Following separation, the UV beam is expanded (BE05-266, Thorlabs) and combined with the detection beam via dichroic mirror (HBSY234, Thorlabs). Detection of PARS signals is performed with a 1310 nm continuous wave super-luminescent diode (S5FC1018P, Thorlabs). Collimated, horizontally polarized detection light passes through a polarizing beam splitter (PBS254, Thorlabs) and quarter wave plate (WPQ10M-1310, Thorlabs) into the imaging system. Both excitation and detection are then co-focused onto the sample with a 0.5 NA reflective objective (LMM-15X-UVV, Thorlabs). The back-reflected detection beam from the sample, which encodes the PARS modulations, is returned along the detection pathway. Passing through the quarter wave plate for a second time, the detection beam becomes horizontally polarized. The horizontally polarized light is then directed towards the photodetector by the polarizing beam splitter, where it is filtered (FELH1000, Thorlabs) before being focused onto the photodiode (PDB425C-AC, Thorlabs).
Images were collected by mechanically scanning samples over the fixed imaging head in a continued raster pattern. While scanning, the excitation laser pulses continually, capturing evenly spaced PARS interrogation points. The lateral spacing between PARS collection points is then tuned by adjusting the stage speed and excitation laser repetition rate. Depending on the desired resolution, the lateral spacing ranged from 0.1 to 5 um. Each time the excitation laser is pulsed, a short segment of the photodetector signal is recorded, capturing the PARS modulations. To ensure accurate recovery of the PARS interrogation, around 250 samples of the photodiode signal are captured with a 14-bit digitizer (RZE-004-300, Gage Applied). This time-domain data is streamed via PCI-E channel from the digitizer to the computer memory. Here, an algorithm was applied to extract the characteristic amplitude of each PARS signal in real time.
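As an illustration of this real-time compression step, a minimal sketch of the amplitude extraction is given below. The peak-to-peak projection is one plausible choice of characteristic amplitude; the function name, segment handling, and synthetic data are ours rather than the instrument software's.

```python
import numpy as np

SAMPLES_PER_PULSE = 250  # digitizer samples recorded around each excitation pulse

def characteristic_amplitudes(stream: np.ndarray) -> np.ndarray:
    """Collapse the raw time-domain stream to one amplitude per interrogation.

    stream: int16 array of length N * SAMPLES_PER_PULSE from the digitizer.
    Returns one peak-to-peak amplitude per excitation event, which is what
    gets stored, reducing memory by roughly the segment length (~256x).
    """
    segs = stream.reshape(-1, SAMPLES_PER_PULSE).astype(np.float32)
    segs -= segs.mean(axis=1, keepdims=True)    # remove per-segment DC offset
    return segs.max(axis=1) - segs.min(axis=1)  # peak-to-peak amplitude

# Usage with a synthetic stream of 1000 interrogations:
rng = np.random.default_rng(1)
raw = rng.integers(-2000, 2000, size=1000 * SAMPLES_PER_PULSE, dtype=np.int16)
amps = characteristic_amplitudes(raw)
print(amps.shape, float(amps.mean()))
```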
### Image Reconstruction and Processing
Figure 1: Simplified schematic of the PARS system. Component labels are defined as follows: collimator (COL), polarizing beam splitter (PBS), quarter wave plate (QWP), dichroic mirror (DM), variable beam expander (VBE), beam dump (BD), objective lens (OL), long pass filter (LP), aspheric focal lens (AL), photodiode (PD), analogue high-pass filter (A-HPF), mirrors (M).

At each interrogation, a positional signal from the stage is collected along with the characteristic PARS amplitude. The positional signal is then used to remove data with irregular spatial sampling characteristics (i.e. data collected while the stage was accelerating). Once the sections with irregular spatial sampling are removed, the remaining data forms a perfect cartesian grid of PARS signals. This grid data is essentially a raw image ready for further processing [26, 27]. Once a raw frame has been formed, some standard processing steps are performed to generate a PARS image. First, to reduce measurement noise, the data is gaussian filtered. Then, to enable consistent processing between images, the raw data is normalized. The absorption data is then rescaled logarithmically, enabling visualization of absorption contrast across different orders of magnitude. Logarithmic rescaling is performed by converting the normalized linear PARS signal into decibels. Once converted to log space, the data scaling is adjusted based on the histogram distribution to reduce background noise around the tissue. This is the complete processing for the small high-resolution frames. Large field of view frames undergo one further refinement: flat field correction is applied, using a gaussian smoothing approach, to reduce contrast artifacts. Finalized images are then exported as image files which can be easily viewed and analyzed.
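The processing chain described above can be sketched in a few lines. This is a minimal illustration assuming a 2D grid of non-negative PARS amplitudes; the filter widths and percentile cut-offs are illustrative values of ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def process_pars_frame(grid: np.ndarray, large_fov: bool = False) -> np.ndarray:
    """Turn a raw cartesian grid of PARS amplitudes into a display image."""
    img = gaussian_filter(grid.astype(np.float64), sigma=1.0)  # denoise
    img /= img.max()                                           # normalize to [0, 1]
    img_db = 20.0 * np.log10(np.maximum(img, 1e-6))            # linear -> decibels
    # Histogram-based rescaling: clip to the bulk of the distribution
    lo, hi = np.percentile(img_db, (5, 99.9))
    img_db = np.clip((img_db - lo) / (hi - lo), 0.0, 1.0)
    if large_fov:
        # Flat-field correction via a heavily smoothed copy of the frame
        flat = gaussian_filter(img_db, sigma=100.0)
        img_db = np.clip(img_db - (flat - flat.mean()), 0.0, 1.0)
    return img_db

# Usage on a synthetic frame:
frame = np.random.default_rng(2).random((512, 512))
print(process_pars_frame(frame, large_fov=True).shape)
```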
### Sample Preparation
In this study, a variety of MMS excisions with BCC were selected for imaging. Frozen sections of tissue specimens with BCC were obtained from Mohs surgeries. These specimens underwent standard Mohs sample preparation as follows. The Mohs excisions are embedded within a cutting substrate and placed into a cryostat where they are cooled to approximately -25\\({}^{\\circ}\\)C over a 1 to 10-minute period (depending on sample shape and composition). The frozen samples were then sectioned via cryotome into 5-10-micron slices and placed onto a microscope slide. The slide was then dried and fixed at 55\\({}^{\\circ}\\)C for 1 minute. The unstained tissue slices were then imaged at room temperature with the PARS microscope. Following PARS imaging, the slides were returned to the clinicians to undergo the remaining standard processing. Hematoxylin and eosin (H&E) staining was performed, then the slides were covered with mounting media and a cover slip. Following processing, the now stained slices were imaged with a standard brightfield microscope (Zeiss Axioscope 2 with Zeiss Axiocom HR). The tissue samples were collected under protocols approved by the Research Ethics Board of Alberta (Protocol ID: HREBA.CC-18-0277) and the University of Waterloo Health Research Ethics Committee (Humans: #40275). The ethics committees waived the requirement for patient consent as the selected samples were excess tissues no longer required for diagnostic purposes, and no patient identifiers were provided to the researchers. All experiments were performed in accordance with the requirements of the Government of Canada Panel on Research Ethics Guidelines.
## 3 Results and Discussion
Prior to imaging human tissue samples, the microscope was characterized using gold nanoparticles and polystyrene microspheres. Gold nanoparticles were used to measure the lateral optical resolution of the system, while the polystyrene microspheres were used to investigate the relationship between resolution and spatial sampling. The system's axial resolution was not characterized for this study. Applied in thick tissues, optical sectioning can be used to recover signals at different depths within the sample [26, 27, 28]. However, the thin samples used in this study do not provide a depth-resolvable phantom, since PARS signal is generated over the entire thickness with each excitation. The lateral optical resolution was determined from the full width at half maximum (FWHM) of images of the 200 nm nanoparticles. To fully capture the nanoparticles, the minimum accurate lateral step sizes of 25 nm and 50 nm for the x and y directions, respectively, were used (Figure 2a). Based on the average FWHM of 50 gold nanoparticles, the resolution was determined to be \\(\\sim\\)300 nm (Figure 2a). While the optical resolution provides an ideal metric, in point-scanning microscopy the spatial sampling, in addition to the optical resolution, will affect the final image resolution.
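A typical way to extract the FWHM from such nanoparticle line profiles is a Gaussian fit, using the relation FWHM \\(=2\\sqrt{2\\ln 2}\\,\\sigma\\) to convert the fitted width. Below is a minimal sketch using scipy, with synthetic data standing in for a measured profile.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic line profile across a nanoparticle (positions in nm)
x = np.arange(0, 1000, 25.0)
true_sigma = 300.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # sigma for a 300 nm FWHM
y = gaussian(x, 1.0, 500.0, true_sigma, 0.02)
y += 0.01 * np.random.default_rng(3).standard_normal(x.size)

popt, _ = curve_fit(gaussian, x, y, p0=(1.0, 500.0, 100.0, 0.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])    # FWHM = 2*sqrt(2 ln 2)*sigma
print(f"fitted FWHM: {fwhm:.1f} nm")
```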
To determine the effects of the spatial sampling interval on image formation, polystyrene microspheres were imaged with varying lateral step sizes. The 0.95 \\(\\mu m\\) microspheres were selected as their size is representative of cell nuclei. While imaging these samples, the spatial sampling rate was varied from 25 nm to 250 nm. The effects of varying the lateral step size can be seen in Figure 2b. As the sampling rate is decreased the resolving power is correspondingly decreased. The spatial resolution is ultimately determined by a combination of the optical resolution and the spatial sampling (pixel resolution). If the optical resolution is finer than the pixel size, the pixel size determines the image resolution. Alternatively, if the optical resolution is larger than the pixel size, the optical resolution will be dominant. In the large grossing scans with 4 \\(\\mu m\\) pixel size, the spatial sampling rate determines the resolution. In the smaller high-fidelity frames with 250 \\(nm\\) pixel size (\\(<300\\) nm optical resolution), the optical resolution is dominant. For this reason, 250 \\(nm\\) spatial sampling rate was selected for the high-fidelity imaging. Using these steps, micron scale structures can be resolved with near optical resolution while maintaining lower scanning times and smaller data volumes.
Moving towards clinical applications, it would be ideal to incorporate the PARS system into the current tissue processing scheme. PARS could potentially be used to image tissues directly after excision, prior to histological analysis. Thus, nuclear morphology could be recovered from bulk tissues immediately during surgery, while still allowing further analysis and immunochemistry. To be implemented in this fashion, PARS should not interfere with current processing techniques. Imaging artifacts such as modification or degradation of tissues must be avoided. Therefore, the PARS optical system was refined to maximize imaging sensitivity and minimize the required excitation energy. Two avenues of optimization were pursued in this refinement: the efficiency of photoacoustic generation and the efficacy of PARS modulation recovery. To increase the localized photoacoustic pressure without increasing pulse energies, the previously utilized 0.3 NA objective lens [26, 27] was exchanged for a 0.5 NA lens. Concurrently, the beam paths were condensed to the shortest viable path lengths. Reducing path lengths decreases vibration and thermal sensitivity. Shortening the beam paths also reduces the relative lateral displacement of the beam at the objective per microradian rotation in each alignment mirror. This allows for more precise manual positioning and co-alignment of the detection and excitation foci.

Figure 2: Resolution characterization of the PARS system. a) (1) Image of 200 nm diameter gold nanoparticles acquired with the PARS system using a 25 nm lateral step size in the x, and a 50 nm lateral step size in the y. (Scale bar: 100 nm) (Dynamic range presented in decibels) (2) Lateral point spread function of the PARS system from gold nanoparticles, averaged across 50 nanoparticles. (FWHM resolution: \\(\\sim\\)300 nm) b) PARS image of a 0.95 μm polystyrene bead used to test the spatial sampling rates. (1) acquired using a 25 nm lateral step size. (2) acquired using a 75 nm lateral step size. (3) acquired using a 175 nm lateral step size. (4) acquired using a 250 nm lateral step size. (Scale bar: 500 nm) (Dynamic range presented in decibels).
Additionally, further refinements were made to the detection pathway. Since PARS measures photoacoustic pressures as a modulation in the back-reflected intensity of the detection beam, the detection sensitivity inherently depends on the efficiency of back reflection to the photodiode. Each percentage increase in back reflection efficiency is accompanied by a corresponding increase in the imaging sensitivity. Therefore, the optical system was refined to optimize the return of the detection beam from the sample to the photodiode. There were two goals of the detection path refinements: first, to remove all non-essential optical components; second, to reduce the detection power losses at each essential optical element. To this end, the galvo-mirrors and a dichroic mirror used in previous PARS embodiments [26, 27] were removed, resulting in a \\(>\\)15% increase in returned power through the detection path. Cumulatively, these refinements have enhanced contrast and reduced the excitation energy required for imaging. We have accentuated subtle contrast within tissue specimens, while reducing the risk of damaging tissues. This corresponded to a decrease in excitation pulse energy to about 750 pJ, compared with other reports of \\(\\sim\\)5 nJ [29].
Applying this system to unstained tissue samples, we obtain the large grossing scans shown in Figure 3. Figure 3a and 3b feature large field of view scans of a human MMS excision with BCC (13 mm and 10 mm side lengths, respectively). The larger 13 mm scan was collected in \\(\\sim\\)12 minutes, while the smaller 10 mm scan was collected in \\(\\sim\\)8 minutes. In each case, dense regions of tumor tissues are observed within the excised tissues. Moreover, the bulk tissue morphology and the surgical margins can be assessed readily. A region of abnormal tissue in the excision sample, consisting of hypercellularity and architectural distortion, is seen extending and infiltrating into normal skin tissue (red outline, Figure 3a). A further resection layer shows only normal skin tissue (Figure 3b), with hair follicles being visible (green circles).
Assessing the entire excision specimen is crucial to the high success of MMS compared to other techniques. Observing the entire surgical margin means areas of contiguous invasion can be identified for excision. In this specimen, the initial resection (Figure 3a) showed areas suspicious for tumor invasion at the deep and inferior margin, which can be seen due to the increased cellularity, morphologically different cell structure, and region of architectural distortion surrounding the tumor. By necessity, tissue sections exceeding 1.0 cm\\({}^{2}\\) in area, such as the scans presented in Figure 3, must be imaged with subcellular resolution. Moreover, to improve on frozen sectioning, imaging must be performed in under 20 minutes. Ideally the imaging time should be further reduced, to under 10 minutes. Several technical improvements have been made over previous reports of PARS devices to match the scanning area, resolution and imaging speed required for MMS. These have included improvements to scanning accuracy, imaging speed, image reconstruction techniques, and data management. By the nature of the point scanning mechanism, these improvements have enhanced both the grossing scans and small field scans. At each scale, (large or small scan window) the number of interrogation points may remain the same, however, the excitation rate and/or mechanical scanning speed may be scaled up or down to provide the desired lateral step size and resolution.
Increasing the scan area is usually accompanied by a corresponding reduction in resolution or an increase in acquisition time. Fortunately, in PARS, the impacts on resolution and imaging speed may be mostly mitigated by selecting a sufficiently high excitation laser repetition rate. However, increasing the repetition rate while maintaining the same spatial sampling rate also necessitates an increase in the movement speed of the scanning stages. In the proposed system, these stages are limited to a maximum velocity of 200 mm/s. Thus, for the grossing scans displayed in Figure 3 a and b, maintaining 4 um spatial sampling requires a repetition rate of 50 kHz. Operating at the maximum mechanical velocity and 50 kHz excitation, a 16-million-point scan similar to Figure 4, or Figure 5, could be captured in around 12 minutes. Moving forwards, the imaging time will be drastically reduced by exchanging the mechanical stages and excitation laser. Utilizing a commercially available 600 mm/s mechanical stage in conjunction with a 1 MHz excitation would reduce the imaging time from 12, to under 2 minutes. However, another issue arises as capturing images such as Figure 3 through Figure 5 (16-million-points/image) would potentially result in nearly 30 GB of data for a single capture. To circumvent this issue, an algorithm was applied to extract the characteristic amplitude of each PARS signal in real time. Thus, the memory requirement is reduced by around 256 times. As a result, the number of PARS signals which can be reasonably recovered in a single scan is increased by the same factor, enabling practical acquisition of the presented larger and/or higher resolution scans.
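A back-of-the-envelope check of these scan-time figures is given below. The sketch counts only the laser- or stage-limited line time and ignores turnarounds and acceleration dead time, which is why the measured ~12 minutes exceeds the idealized estimate; the 16 mm side length is an illustrative value of ours.

```python
def scan_minutes(side_mm: float, step_um: float,
                 rep_rate_hz: float, max_stage_mm_s: float) -> float:
    """Idealized raster-scan time, limited by laser rate or stage speed."""
    points_per_line = int(side_mm * 1e3 / step_um)
    t_laser = points_per_line / rep_rate_hz   # s per line if laser-limited
    t_stage = side_mm / max_stage_mm_s        # s per line if stage-limited
    return points_per_line * max(t_laser, t_stage) / 60.0

# Current system: 50 kHz excitation with 200 mm/s stages (4 um sampling)
print(f"current:  {scan_minutes(16, 4, 50e3, 200):.1f} min")
# Proposed upgrade: 1 MHz excitation with 600 mm/s stages
print(f"proposed: {scan_minutes(16, 4, 1e6, 600):.1f} min")
```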
Compared to the wide field images in Figure 3, the tighter spatial sampling and smaller field of view in Figure 4 reveal more intricate tissue morphology. By reducing the lateral spatial sampling rate to 250 nm, we maintain our optically limited resolution of \\(\\sim\\)300 nm. Presented in Figure 4 is a series of high-resolution PARS images of a tissue sample with BCC (bottom: a.2, b.2, c.2). Brightfield images of the same tissue following preparation with H&E staining are shown across the top row (a.1, b.1, c.1) of Figure 4. The H&E staining in Figure 4 is perhaps the most common histochemical stain set and is regularly used in MMS for assessing NMSC. Within the images, the margin of invasive disease is identified due to high cellularity (lower right of red boundary) and corresponding nuclear content on PARS imaging, which corresponds to the light microscopy images. The nuclear atypia, high nuclear-to-cytoplasm ratio, and disorganized cellular organization are indicative of the presence of cancerous tissue (c.2) and compare favorably to the findings under light microscopy (c.1). We emphasize that the H&E images (Figure 4a.1, b.1, c.1) are of the exact same tissue imaged with the PARS system (Figure 4a.2, b.2, c.2). By imaging the unstained tissues with the PARS system, then staining the tissues with H&E and imaging with a brightfield microscope, we are able to assess both the accuracy of PARS and the effects PARS imaging might have on normal pathological analysis. This is the first time such a comparison has been done in human tissues. Observing the brightfield images (Figure 4a.1, b.1, c.1), there is no visible degradation in tissue or nuclear structures following PARS imaging. This shows that the PARS imaging has not affected the ability to perform staining and pathological analysis on the tissues, implying that the PARS system could be used to augment the current histopathology workflow with little to no impact.

Figure 3: Wide field of view PARS images of entire Mohs excisions. a) 13 mm by 13 mm PARS image of human skin tissue with basal cell carcinoma (red outline) shown by the increased cellularity in the middle (deep) margin causing invasion and architectural distortion of the normal skin (scale Bar: 2 mm). b) 10 mm by 10 mm PARS image of human skin tissue, where two examples of hair follicles are circled in green (scale Bar: 1 mm). Both a & b feature a 4 μm lateral step size. The notch in the tissue signifies the superior margin and aids in orientation. (Dynamic range presented in decibels).
Fig.4 True one to one comparison of PARS and bright-field images of hematoxylin and eosin (H&E) stained human skin tissue with basal cell carcinoma (BCC). a) (1) 5x bright field image of tissue with BCC demonstrating the border of invasive cancer (bottom of red border) versus normal tissue (top of red border). (2) PARS image of the same unstained sample with BCC, the same red border denotes the cancer boundary. Scale Bar: 200 \\(\\mu\\)m b) (1) 20x bright field image demonstrating the same cancer margin as in (a.1, a.2). (2) Enlarged section (green box) of the PARS image (a.2) compares the disorganized cellular architecture seen in the light microscopy (b.1) and PARS images (the same red border separates cancerous and normal tissues). Scale Bar: 100 \\(\\mu\\)m c) (1) 20x bright field of tissue with clearly identifiable atypical nuclear morphology, size and distribution. (2) Enlarged section (red box) of the PARS image (b.2), providing a near perfect match to the brightfield histology image (c.1). Scale Bar: 40 \\(\\mu\\)m. (Dynamic range presented in decibels).
Presented in Figure 5a is a series of four smaller 1 mm\\({}^{2}\\) images co-registered to form a 4 mm x 1 mm image with resolution equivalent to Figure 4. Each individual frame is a 28-megapixel, 4000 by 7000-point image. Shown in this image is the transition (demarcated by the dashed red line) from cancerous tissue to normal tissue at the tumor margin, which is clearly visible through the different tissue architecture detectable by PARS imaging. The cancer cells' atypical cellular and nuclear features are seen in the higher magnification images of Figure 5b to d. The ability to identify this tumor margin will guide the MMS surgeon in resecting the next layer of tissue.
To capture absorption contrast, PARS leverages the photoacoustic effect. In PARS, a pulsed excitation laser is co-focused onto a sample with a continuous wave detection laser. The pulsed excitation laser is then used to deposit optical energy into the sample. As the target absorbs the excitation energy, it undergoes thermoelastic expansion proportional to the absorbed energy. The thermoelastic expansion causes corresponding modulations in the back-reflection of the detection beam via the elasto-optic effect. The modulations in the back-reflected detection beam are then proportional to the optical absorption [25-27]. In this application, the 266 nm excitation is selected mainly to target the optical absorption of DNA. This is an appropriate selection since DNA's UV absorption is orders of magnitude higher than that of most biological tissues. However, most common biomolecules have non-zero optical absorption in the UV. This means PARS contrast could be captured from a wide variety of biomolecules if the detection were sensitive enough to capture the signals. With the refinements to improve detection sensitivity in this work, we are able to recover absorption contrast from a variety of chromophores in addition to DNA. However, since the PARS signal is directly proportional to the optical absorption, the signals recovered from DNA are orders of magnitude higher than signals recovered from other biomolecules. Therefore, if the raw data is presented with linear scaling, only DNA is visible. In order to target extranuclear contrast, the raw data is transformed logarithmically during the image formation process.
Logarithmic scaling enables non-nuclear chromophores with absorption across many orders of magnitude to be visualized. With this method, the subtle contrast attributed to biomolecules such as cytochrome, hemoglobin and collagen can be recovered from the absorption data. Such extranuclear details are observed in Figure 5. While these system refinements and processing techniques have enhanced non-nuclear contrast, the extranuclear chromophores cannot be individually identified, as their UV absorptions are relatively similar. However, recovering bulk tissue contrast from extranuclear structures still imparts a visualization benefit, as bulk tissue morphology may be captured in addition to nuclear contrast with a single wavelength. Moving forward, additional chromophore specific excitation wavelengths will be explored. Rather than using UV excitation, the infra-red absorption of DNA may be targeted to provide similar histology-like contrast while improving in-situ compatibility. Additionally, further excitation wavelengths will be added, providing selective contrast for biomolecules such as lipids and collagen.

Fig. 5: Large area high resolution PARS image of human skin tissue with basal cell carcinoma. (a) A series of four PARS images stitched together (112-megapixel image, 16000 by 7000-point scan, 250 nm step size) with hypercellularity and nuclear content. Evident disorganized cellular architecture denoting cancerous tissue is enclosed in the red border on the left side with normal tissue in the top and right. Scale bar: 400 μm (b) Cropped and enlarged section (red box) of the PARS image shown in (a) Scale bar: 100 μm (c) Cropped and enlarged section (blue box) of the PARS image shown in (a) Scale bar: 100 μm (d) Cropped and enlarged section (green box) of the PARS image shown in (a) Scale bar: 100 μm. (Dynamic range presented in decibels).
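To make the image formation step concrete, the snippet below is a minimal, hypothetical sketch of such a logarithmic transformation applied to a raw PARS amplitude frame; the normalization scheme and the -40 dB display floor are illustrative assumptions, not parameters reported for the actual system.

```python
# Minimal sketch of logarithmic (decibel) scaling of a raw PARS frame.
import numpy as np

def log_scale(frame, db_floor=-40.0):
    amp = np.abs(frame) / np.abs(frame).max()        # normalize amplitudes to [0, 1]
    db = 20.0 * np.log10(np.clip(amp, 1e-12, None))  # convert to decibels
    db = np.clip(db, db_floor, 0.0)                  # limit displayed dynamic range
    return (db - db_floor) / -db_floor               # rescale to [0, 1] for display

image = log_scale(np.random.rayleigh(size=(512, 512)))  # placeholder raw frame
```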
In addition to adding more chromophore specificity, future works will focus on increasing imaging speed. Considering the current system, capturing a single grossing or small region scan requires around 12 minutes and provides a 16-megapixel image. This is approximately 60% of the time required for frozen histology preparation. However, to recover wider swaths such as Figure 5a, the scanning time increases linearly, with 4 frames requiring 48 minutes. While still comparable to frozen sectioning, the PARS imaging time may exceed that of frozen pathology depending on how many scans are required. Moving forward, the imaging speed may be improved by increasing the point capture/excitation rate. Previous works have reported excitation rates of nearly 1 MHz, which, if employed, would reduce imaging time to less than 30 seconds per scan [30]. Concurrently, to avoid mechanical limitations, more efficient scanning methods will be implemented. Future work will focus on incorporating these improvements, aiming to image entire MMS excisions in one capture, with the same area as the presented grossing scans (Figure 3) and the same spatial sampling as the presented high-resolution captures (Figure 4 and Figure 5).
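As a rough sanity check of these timing figures, the short calculation below infers an effective point rate from the quoted 16-megapixel, roughly 12-minute scan and projects the per-scan time at the roughly 1 MHz excitation rate of ref. [30]; the numbers are back-of-envelope estimates, not system specifications.

```python
# Back-of-envelope check of the scan-time figures quoted above.
points_per_scan = 16e6                                         # 16-megapixel scan
current_rate = points_per_scan / (12 * 60)                     # ~22 kHz effective point rate
four_frame_minutes = 4 * points_per_scan / current_rate / 60   # 48 minutes (Fig. 5a)
fast_scan_seconds = points_per_scan / 1e6                      # 16 s per scan at 1 MHz
print(round(current_rate), four_frame_minutes, fast_scan_seconds)
```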
## 4 Conclusion
The presented results demonstrate the first visualization of nuclear morphology in human tissue samples exhibiting BCC, using a non-contact photoacoustic microscopy technique. By imaging unstained tissues, then staining and analyzing the same tissues, we provide the first true one to one comparison between PARS microscopy and brightfield histopathological imaging. Shown here, subcellular structures are recovered from entire MMS sections, enabling full tissue margin analysis. Concurrently, small regions are captured providing close-ups of clinically relevant features such as nuclear organization, density and morphology. The one to one results presented here show the efficacy of the optimized PARS system, but do not provide a thorough clinical comparison. Moving forwards, the system and method developed here will soon be used to conduct a randomized controlled trial with clinicians to fully explore the pathological accuracy.
Notably, PARS emulates the contrast provided by common histochemical stains, suggesting that, if adopted, there may be little requirement to retrain pathologists to interpret a new image type. Ideally, the PARS system may facilitate recovery of diagnostic details more rapidly than conventional techniques by eliminating tissue processing steps. Moving forwards, the non-contact label-free reflection-mode PARS microscope presented here could potentially be applied directly to unprocessed MMS excisions. As shown, PARS does not degrade or modify tissue samples in any way, causing no detriment to histopathological processing. This suggests bulk tissues could be imaged immediately after excision, while still being preserved in their entirety, enabling re-examination, further histopathological processing and immunochemistry. Thus, this device may circumvent the need for frozen pathology, or could act as an adjunct to the current processing stream. Overall, the PARS modality is well suited to intraoperative guidance of MMS and could reduce MMS cycle time, increase patient flow, and free up histopathology staff to perform other tasks. Moving forwards, the performance of this system will be examined in freshly resected bulk MMS excisions.
## Funding
Natural Sciences and Engineering Research Council of Canada (DGECR-2019-00143, RGPIN2019-06134); Canada Foundation for Innovation (JELF #38000); Mitacs Accelerate (IT13594); University of Waterloo Startup funds; Centre for Bioengineering and Biotechnology (CBB Seed fund); illumiSonics Inc (SRA #083181); New frontiers in research fund - exploration (NFRFE-2019-01012).
## Acknowledgments
N/A
## Disclosures
KB: illumiSonics Inc. (F, I, E, P); DD: illumiSonics Inc. (I); JRM: illumiSonics Inc. (I); PHR: illumiSonics Inc. (F, I, P, S).
## References
* [1] E.M. Finley, \"The principles of mohs micrographic surgery for cutaneous neoplasia,\" Ochsner. J. **5**(2), 22-33 (2003).
* [2] E. Perera and R. Sinclair, \"An estimation of the prevalence of nonmelanoma skin cancer in the U.S. F1000Res,\" **2**,107 (2013).
* [3] J.T. Chen, S.J. Kempton, and V.K. Rao, \"The Economics of Skin Cancer: An Analysis of Medicare Payment Data,\" Plast. Reconstr. Surg. Glob. **4**, e868. (2016).
* [4] D.E. Rowe, R.J. Carroll and C.L. Day, \"Long-term recurrence rates in previously untreated (primary) basal cell carcinoma: Implications for patient follow-up,\" J. Dermatol. Surg. Oncol. **15**(3), 315-328 (1989).
* [5] D.E. Rowe, R.J. Carroll, C.L. Day, \"Prognostic factors for local recurrence, metastasis, and survival rates in squamous cell carcinoma of the skin, ear, and lip. Implications for treatment modality selection,\" J. Am. Acad. Dermatol. **26**(6), 976-990 (1992).
* [6] V. Samarasinghe and V. Madan, \"Nonmelanoma skin cancer,\" J. Cutan. Aesthet. Surg. **5**(1), 3-10 (2012).
* [7] D. Beaulieu, R. Fathi, D. Srivastava, and R.I. Nijhawan, \"Current perspectives on Mohs micrographic surgery for melanoma,\" Clin. Cosmet. Investig. Dermatol. **11**, 309-20 (2018).
* [8] M. Rajadhyaksha, G. Menaker, T. Flotte, P.J. Dwyer and S. Gonzalez, \"Confocal examination of nonmelanoma cancers in thick skin excisions to potentially guide mohs micrographic surgery without frozen histopathology,\" J. Invest. Dermatol. **117**(5), 1137-1143 (2001).
* a new approach with a mould and glass discs: review of the literature and comparative study,\" J. Otolaryngol. **35**(5), 292-304 (2006).
* [10] J. Cook and J. Zitelli, \"Mohs micrographic surgery: A cost analysis,\" J. Am. Acad. Dermatol. **39,** 698-703 (1998).
* [11] T. Yoshitake, M.G. Giacomelli, L.M. Quintana, H. Verdeh, L.C. Cahill, B.E. Faulkner-Jones, J.L. Connolly, D. Do and J.G. Fujimoto, \"Rapid histopathological imaging of skin and breast cancer surgical specimens using immersion microscopy with ultraviolet surface excitation,\" Sci. Rep. **8,** 4476 (2018).
* [12] C. Longo, M. Ragazzi, M. Rajadhyaksha, K. Nehal, A. Bennassar, G. Pellacani and J.M. Guilera, "In Vivo and Ex Vivo Confocal Microscopy for Dermatologic and Mohs Surgeons," Dermatol. Clin. **34**(4), 497-504 (2016).
* [13] J. Paoli, M. Smedh, A. Wennberg and M.B. Ericson, \"Multiphoton Laser Scanning Microscopy on Non-Melanoma Skin Cancer: Morphologic Features for Future Non-Invasive Diagnostics,\" J. Invest. Dermatol. **128** (5), 1248-1255 (2008).
* [14] M. Larraona-Puy, A. Ghita, A. Zoladek, W. Perkins, S. Varma, I.H. Leach, A.A. Koloydenko, H. Williams and I. Notinger, \"Discrimination between basal cell carcinoma and hair follicles in skin tissue sections by Raman micro-spectroscopy,\" J. Mol. Struct. **993**(3), 57-61 (2011).
* [15] K. Kong, C.J. Rowlands, S. Varma, W. Perkins, I.H. Leach, A.A. Koloydenko, A. Pritiot, H.C. Williams, and I. Notinger, \"Increasing the speed of tumour diagnosis during surgery with selective scanning Raman microscopy,\" J. Mol. Struct. **1073**, 58-65 (2014).
* [16] C.A. Lieber, S.K. Majumder, D.L. Ellis, D.D. Billheimer and A. Mahadevan-Jansen, \"In vivo nonmelanoma skin cancer diagnosis using Raman microspectroscopy,\" Lasers. Surg. Med. **40**(7), 461-467 (2008).
* [17] U. Dahlstrand, R. Sheikh, A. Merdasa, R. Chakari, B. Persson, M. Cinthio, T. Erlov, B. Gesslein, and M. Malmjo, \"Photoacoustic imaging for three-dimensional visualization and delineation of basal cell carcinoma in patients,\" Photoacoustics. **18,** 100187 (2020).
* [18] A.B.E. Attia, S.Y. Chuah, D. Razansky, C. Jun Hui Ho, P. Malempati, U.S. Dinish, R. Bi, C. Yaw Fu, S.J. Ford, J. Siong See Lee, M.W. Ping Tan, M. Olivo, and S. Tien Guan Thng, \"Noninvasive real-time characterization of non-melanoma skin cancers with handheld optoacoustic probes,\" Photoacoustics. **7**, 20-26 (2017).
* a practical approach,\" Exp. Dermatol. **22**(8), 547-551 (2003).
* [20] D. Rashed, D. Shah, A. Freeman, R.J. Cook, C. Hopper and C.M. Perrett, "Rapid ex vivo examination of Mohs specimens using optical coherence tomography," Photodiagnosis. Photodyn. Ther. **19**, 243-248 (2017).
* [21] C.S. Chan, T.E. Rohrer. \"Optical Coherence Tomography and Its Role in Mohs Micrographic Surgery: A Case Report,\" Case. Rep. Dermatol. **4**(3), 269-274 (2012).
* [22] A. Danielli, K.L. Maslov, A. Garcia-Uribe, A.M. Winkler, C. Li, L. Wang, Y. Cheng, G.W. Dorn II, L.V. Wang, "Label-free photoacoustic nanoscopy," J. Biomed. Opt. **19**, 086006 (2014).
* [23] E.M. Strohm, M.J. Moore, M.C. Kolios, \"High resolution ultrasound and photoacoustic imaging of single cells,\" Photoacoustics. **4**, 36 (2016).
* [24] Y. Bi, C. Yang, Y. Chen, S. Yan, G. Yang, Y. Wu, G. Zhang, P. Wang, \"Near-resonance enhanced label-free stimulated Raman scattering microscopy with spatial resolution near 130 nm,\" Light Sci. Appl. **7**, 81 (2018).
* [25] P. Haji Reza, W. Shi, K. Bell, P. Paproski, and R.J. Zemp, \"Non-interferometric photoacoustic remote sensing microscopy,\" Light. Sci. Appl., **6**, e16278 (2017).
* [26] S. Abbasi, M. Le, B. Sonier, K. Bell, D. Dinakaran, G. Bigras, J.R. Mackey and P. Haji Reza, \"Chromophore selective multi-wavelength photoacoustic remote sensing of unstained human tissues,\" Biomed. Opt. Express. **10**(11), 5461-5469 (2019).
* [27] S. Abbasi, M. Le, B. Sonier, D. Dinakaran, G. Bigras, K. Bell, J.R. Mackey and P. Haji Reza, \"All-optical Reflection-mode Microscopic Histology of Unstained Human Tissues,\" Sci Rep. **9**, 13392 (2019).
* [28] B.R. Ecclestone, S. Abbasi, K. Bell, D. Dinakaran, G. Bigras, J.R. Mackey, P. Haji Reza, \"Towards virtual biopsies of gastrointestinal tissues using photoacoustic remote sensing microscopy,\" Quant. Imaging Med. Surg. **0**, QIMS51162 (2020).
* [29] N.J.M. Haven, P. Kedarisetti, B.S. Restall, R.J. Zemp, \"Reflective objective-based ultraviolet photoacoustic remote sensing virtual histopathology,\" Opt. Lett. **42**, 535 (2020).
* [30] S. Abbasi, K. Bell, B.R. Ecclestone, P. Haji Reza, \"Real Time & 3D Photoacoustic Remote Sensing\" in Biophotonics Congress: Biomedical Optics 2020 (Translational, Microscopy, OCT, OTS, BRAIN), OSA Technical Digest (Optical Society of America, 2020) (2020), Poster JTh2A.13.
## Abstract

Mohs micrographic surgery (MMS) is a precise oncological technique where layers of tissue are resected and examined with intraoperative histopathology to minimize the removal of normal tissue while completely excising the cancer. To achieve intraoperative pathology, the tissue is frozen, sectioned and stained over a 20- to 60-minute period, then analyzed by the MMS surgeon. Surgery is continued one layer at a time until no cancerous cells remain, meaning MMS can take several hours to complete. Ideally, it would be desirable to circumvent or augment frozen sectioning methods and directly visualize subcellular morphology on the unprocessed excised tissues. Employing photoacoustic remote sensing (PARS™) microscopy, we present a non-contact label-free reflection-mode method of performing such visualizations in frozen sections of human skin. PARS leverages endogenous optical absorption contrast within cell nuclei to provide visualizations reminiscent of histochemical staining techniques. Presented here is the first true one to one comparison between PARS microscopy and standard histopathological imaging in human tissues. We demonstrate the ability of PARS microscopy to provide large grossing scans (\(>\)1 cm\({}^{2}\), sufficient to visualize entire MMS sections) and regional scans with subcellular lateral resolution (300 nm).
**A Comparative Genomic Analysis of Coronavirus Families Using Chaos Game Representation and Fisher-Shannon Complexity**
S. K. Laha
CSIR-Central Mechanical Engineering Research Institute
Durgapur, West Bengal, India, PIN-713209
Email: [email protected]
Mobile No. 9749172042
## 1 Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes severe respiratory illness along with various symptoms like fever, cough, headache, loss of smell etc. [1-2]. The novel coronavirus disease, or COVID-19 disease, has caused an unparalleled global health crisis, which has been declared a pandemic by the World Health Organization (WHO). SARS-CoV-2 is a new member of the Coronaviridae family. Coronaviruses have distinct corona-like spike proteins emerging from their surfaces, through which they enter host cells by binding to the Angiotensin-converting enzyme 2 (ACE2) receptor [3]. The Coronaviridae family comprises single-stranded, positive sense RNA (+ssRNA) viruses with an approximate genome size of 27-32 kb [4-5]. The family can be grouped into four genera: alpha, beta, gamma and delta [3].
Genomic signal processing (GSP), a subfield of Bioinformatics, is a cross-disciplinary research topic at the intersection of genomics, computer science, statistics etc. Many biomolecular sequences such as DNA/RNA, proteins etc. can be converted into numeric strings before further downstream analysis. DNA/RNA is one such biomolecular sequence that consists of four nucleotides represented by the four letters A, G, C and T/U, which stand for Adenine, Guanine, Cytosine and Thymine/Uracil respectively. There are various encoding schemes developed for genomic data analysis, such as atomic representation [7], chaos game representation [8], thermodynamic properties [9], Voss representation [10], DNA Walk model [11], H-Curve and Z-curve [12-13], 4D-Dynamic Representation of DNA/RNA Sequences [14] etc. The representation can be both graphical and non-graphical [15]. For a comprehensive review of such encoding schemes, readers can refer to [16].
Chaos Game Representation (CGR) proposed by Jeffrey [8] is one of the most popular encoding schemes. The 2D CGR of DNA sequence provides alignment-free fractal structure in a unit square matrix. CGR has been applied in the analysis of DNA sequence [17, 20-22] as well as protein structure [18-19]. Hoang et al. [17] studied the phylogenetic relationship using 2D CGR and the power spectrum of the corresponding DNA sequence. They obtained the phylogenetic tree of human rhinovirus, influenza virus and HPV using the above method and UPGMA. Sengupta et al. [20] carried out similarity studies of coronaviruses based on k-th order frequency chaos game representation (FCGR). Barbosa et al. [21] obtained the CGR maps of SARS-Cov-2 along with Betacoronavirus RaTG13, bat-SL-CoVZC45, and bat-SL-CoVZXC21. Deng and Luan [22] also used CGR in combination with Hurst exponent to study the similarity/dissimilarity of DNA sequences. They used their method on the first exon of \\(\\beta\\)-globin gene of different species. Once the CGR maps are obtained, the next step is then to characterize the complexity pattern of the 2D signal obtained from the CGR process. Researchers have developed various methods to measure the time series signal complexity such as fractal dimension [23], entropies [24], Lyapunov exponent [25] etc. Fisher Information Measure (FIM), originally introduced by Fisher [26] for statistical estimation problems has been shown by Frieden [27] that FIM can be used to measure the degree of disorder of a physical system. Martin et al. [28] have shown that FIM can effectively capture the change in the dynamic behavior of nonlinear systems such as the logistic map, the Lorenz system and the Rossler system. Dembo et al. [29] in a classical paper studied the information-theoretic inequalities including Shannon Entropy Power (SEP) and Fisher Information Measure (FIM). Vignat and Bercher [30] introduced the concept of Fisher-Shannon Information Plane (FSIP) in which they have shown that simultaneous examination of both FIM and SEP can characterize a complex, nonlinear signal. The product of FIM and SEP called Fisher-Shannon Complexity (FSC) has been used to study high-frequency wind speed data in geosciences [31], the evolution of the daily maximum surface temperature distributions [32] and the time series data of Standardized Precipitation Index (SPI) [33].
In this paper, the Fisher-Shannon information approach has been applied for a comparative genomic analysis of eight coronaviruses, namely Human coronavirus OC43 (HCoV-OC43), Human coronavirus HKU1 (HCoV-HKU1), Human coronavirus 229E (HCoV-229E), Human coronavirus NL63 (HCoV-NL63), Severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome-related coronavirus (MERS-CoV), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and Bat coronavirus RaTG13 (Bat-CoV). Out of these viruses, Bat coronavirus RaTG13 causes infection in the horseshoe bat, _Rhinolophus affinis_, whereas the remaining seven viruses are known to cause infection in humans.
### Chaos Game Representation
Chaos Game Representation (CGR) is an interesting process through which one-dimensional sequences can be converted to a two-dimensional space by an iterated function system (IFS). CGR, an alignment-free method, was proposed by Jeffrey [8] to visualize DNA sequences. The resulting two-dimensional representation is in the form of a scatter plot and many remarkable fractal-like patterns can be observed which are difficult to observe in the initial 1D sequences such as DNA, RNA, or protein sequences.
The 2D planar CGR space is a continuous unit square described by four vertices assigned by the four nucleotides i.e., Adenine (A), Guanine (G), Cytosine (C) and Thymine (T). In other words, the coordinates of these four nucleotides are given by A = (0, 0); T = (1, 0); G = (1, 1) and C = (0, 1). In this Cartesian plane any nucleotide sequence of any length can be uniquely determined. The CGR coordinates are determined iteratively by the following process: the first nucleotide position is halfway between the starting point and the corresponding vertex of the nucleotide where the starting point is at (0.5, 0.5). The successive nucleotides are then plotted halfway between the previous nucleotide position and the vertex representing that nucleotide. For a DNA sequence, the equation for the above iterative function system (IFS) is given by
\\[\\begin{array}{l}X_{i}=0.5\\left(X_{i-1}+g_{ix}\\right)\\\\ Y_{i}=0.5\\left(Y_{i-1}+g_{iy}\\right)\\end{array} \\tag{1}\\]
where \\(\\left(g_{ix},g_{iy}\\right)\\) is the corresponding vertex the current nucleotide whereas \\(\\left(X_{i-1},Y_{i-1}\\right)\\) and \\(\\left(X_{i},Y_{i}\\right)\\) are the previous and current coordinates respectively. The iteration proceeds till the last nucleotide in the DNA sequence. Thus, from the CGR two series along the \\(X\\) and \\(Y\\) coordinates are obtained which are denoted by CGR-\\(X\\) and CGR-\\(Y\\) respectively.
As suggested by Deng and Luan [22], a one dimensional CGR-walk sequence can be obtained as a summation of the \\(X\\) and \\(Y\\) coordinates of the CGR map, i.e.
\\[\\text{CGR-\\emph{Walk}}=\\text{CGR-\\emph{X}}+\\text{CGR-\\emph{Y}} \\tag{2}\\]
where CGR-\\(X\\) and CGR-\\(Y\\) are the \\(X\\) and \\(Y\\) coordinates of the CGR map respectively.
### Fisher-Shannon Analysis
Let \\(X\\) be a continuous univariate random variable with a probability density function denoted by \\(\\emph{P}_{x}\\left(x\\right)\\). The differential entropy of \\(x\\) is given by,
\\[H_{x}=-\\int p_{x}\\left(x\\right)\\log p_{x}\\left(x\\right)dx \\tag{3}\\]
However, as suggested by Dembo et al. [29], it is sometimes convenient to express the above quantity in terms of Shannon Entropy Power (SEP), which is given by,
\\[N_{x}=\\frac{1}{2\\pi e}e^{2H_{x}} \\tag{4}\\]
Further, the quantity Fisher Information Measure (FIM) is expressed as,\\[I_{x}=\\int\\!\\!\\!\\left[\\frac{\\frac{\\partial p_{x}\\left(x\\right)}{\\partial x}}{p_{x} \\left(x\\right)}\\right]^{2}\\!\\!\\!\\!\\!dx \\tag{5}\\]
It should be noted that FIM is conceptually different from Fisher information, which is the information content of the random variable \(X\) about its distribution parameters, \(\theta\).
Both the FIM and SEP have been applied to study signal complexity. The Fisher-Shannon Complexity (FSC) can be defined as the product of FIM and SEP, as given by,
\\[C_{x}=N_{x}I_{x} \\tag{6}\\]
It can be shown that, \\(C_{x}\\geq 1\\), where the equality holds if and only if \\(X\\) has Gaussian distribution [29].
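In practice, the quantities in Eqs. (3)-(6) have to be estimated from data. The sketch below is one possible plug-in estimator based on a Gaussian kernel density estimate; the paper does not state which estimator was used, so the KDE, the evaluation grid, and the numerical integration are assumptions made for illustration only.

```python
# Minimal sketch of plug-in estimates of SEP, FIM and FSC (Eqs. 3-6).
import numpy as np
from scipy.stats import gaussian_kde

def fisher_shannon(samples, grid_size=2048):
    kde = gaussian_kde(samples)                      # density estimate of p_x
    pad = 3.0 * samples.std()
    xs = np.linspace(samples.min() - pad, samples.max() + pad, grid_size)
    p = np.clip(kde(xs), 1e-300, None)               # avoid log(0) / division by 0
    dx = xs[1] - xs[0]
    h = -np.sum(p * np.log(p)) * dx                  # Eq. (3): differential entropy
    sep = np.exp(2.0 * h) / (2.0 * np.pi * np.e)     # Eq. (4): Shannon entropy power
    dp = np.gradient(p, dx)
    fim = np.sum(dp ** 2 / p) * dx                   # Eq. (5): Fisher information measure
    return sep, fim, sep * fim                       # Eq. (6): C_x = N_x * I_x

sep, fim, fsc = fisher_shannon(np.random.normal(size=5000))
print(sep, fim, fsc)   # FSC should be close to 1 for Gaussian samples
```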
## 3 Methods and Data
The whole-genome reference sequences of the viruses in the present study are downloaded from the NCBI Genbank. The downloaded genome sequences of the viruses are in the Fasta format and their Accession IDs are given in the following Table.
| Sl. No. | Virus | Accession ID | Type |
| --- | --- | --- | --- |
| 1 | Human coronavirus OC43 (HCoV-OC43) | NC\_006213.1 | \(\beta\)-CoV |
| 2 | Human coronavirus HKU1 (HCoV-HKU1) | NC\_006577.2 | \(\beta\)-CoV |
| 3 | Human coronavirus NL63 (HCoV-NL63) | NC\_005831.2 | \(\alpha\)-CoV |
| 4 | Human coronavirus 229E (HCoV-229E) | NC\_002645.1 | \(\alpha\)-CoV |

Table 1: Virus details

Initially, a phylogenetic tree analysis has been carried out on the downloaded full-length genome sequences. The tree was constructed using the MEGA X software, based on maximum likelihood estimation (MLE), with Clustal Omega as the sequence alignment method and 1000 bootstrap replicates. The Chaos Game Representation map was constructed using the 'kaos' package [37] in the R software environment [36], along with the 'seqinr' package [35] for biological sequence analysis. The CGR process results in two series along the \(X\) and \(Y\) directions. The signals along the \(X\) and \(Y\) directions are then characterized by Fisher-Shannon information measures: FIM, SEP and FSC are calculated for both the \(X\) and \(Y\) coordinates, and FIM and SEP are calculated for the CGR-_Walk_ sequence as well.
## 4 Results
The phylogenetic tree of the eight coronaviruses considered in the present study, namely HCoV-OC43, HCoV-HKU1, HCoV-229E, HCoV-NL63, SARS-CoV, MERS-CoV, SARS-CoV-2 and RaTG13, is shown in Fig. 1.

From the phylogenetic tree in Fig. 1, it is evident that SARS-CoV-2 is most similar to RaTG13, followed by SARS-CoV.
The CGR maps of the eight coronaviruses are shown in Fig. 2. Although fractal patterns in the whole genome sequences are discernible in these figures and they look very similar, it is difficult to compare the genomic sequences by visual inspection alone. Therefore, it is important to measure their complexity numerically for a comparative analysis. Thus, as mentioned above, the Shannon Entropy Power (SEP), Fisher Information Measure (FIM) and their product, the Fisher-Shannon Complexity (FSC), are adopted for numerical characterization. These values can then be used for a comprehensive similarity/dissimilarity analysis of the whole genome sequences.
The Fisher-Shannon Information Plane for the \\(X\\) and \\(Y\\) coordinates of the above mentioned viruses are shown in Fig. 3 and 4 respectively. From the FSIP along \\(X\\) coordinates (Fig. 3) it can be seen that HCoV-OC43, RaTG13 and SARS-CoV-2 form a close cluster. Thus, it may be inferred that these viruses are very similar for the CGR-\\(X\\) coordinates. The horizontal stripes i.e. the \\(X\\)-coordinates give the dinucleotide similarities of AC, CA, GT and TG.
Figure 2: CGR maps of the coronaviruses
Also, from the _FSIP-Y_ map, it can be seen that MERS-CoV, RaTG13, SARS-CoV-2 and SARS-CoV form a close cluster. Thus, these viruses are very similar for the _CGR-Y_ coordinates.
Figure 3: Fisher-Shannon Information Plane along \(X\) direction (_FSIP-X_)

Figure 4: Fisher-Shannon Information Plane along \(Y\) direction (_FSIP-Y_)
Accordingly, the Fisher-Shannon Complexity (FSC) for both the \(X\) and \(Y\) coordinates of the coronaviruses is plotted in Fig. 5. Here also, RaTG13 and SARS-CoV are very nearby, which indicates that they are genetically very similar. Also, the most distant coronavirus from SARS-CoV-2 is found to be HCoV-NL63, which can be observed in the phylogenetic tree as well. It is also worth mentioning that the FSC values are not equal to unity, which shows the non-Gaussian nature of the signal.
In the previous analysis (i.e., Figs. 3, 4 and 5), the quantities SEP, FIM and FSC are calculated from probability density functions (pdfs) of the _CGR-X_ and _CGR-Y_ values by treating them as independent random variables. Possibly, a more robust quantification of the CGR may be accomplished by assuming a bivariate pdf consisting of random variables along both _CGR-X_ and _CGR-Y_ and then estimating SEP and FIM from that bivariate distribution.
Finally, we also obtain the FSIP of the CGR-walk, which is defined earlier as a summation of the \(X\) and \(Y\) coordinates. This results in a one dimensional time series. The FSIP is shown in Fig. 6, and it can again be seen that SARS-CoV-2, SARS-CoV and RaTG13 are very nearby, which indicates that they are genetically very similar. Thus, from Figs. 3-6 it can be concluded that the virus SARS-CoV-2 bears the closest resemblance to the bat coronavirus RaTG13, followed by SARS-CoV. This confirms the earlier reported results that the SARS-CoV-2 genome bears 79.6% sequence similarity to SARS-CoV and 96% similarity to the bat coronavirus [34].

Figure 5: Fisher-Shannon Complexity (FSC) of coronaviruses

Figure 6: Fisher-Shannon Information Plane for CGR-_Walk_
## 5 Conclusion
In this paper, a comparative genome analysis of eight coronaviruses, namely HCoV-OC43, HCoV-HKU1, HCoV-229E, HCoV-NL63, SARS-CoV, MERS-CoV, SARS-CoV-2 and RaTG13, is carried out using the Chaos Game Representation and Fisher-Shannon Complexity (CGR-FSC) method. The genomic divergence of the viruses can be observed in the Fisher-Shannon Information Plane (FSIP). The results indicate that the coronavirus SARS-CoV-2, causing COVID-19 in humans, is most similar to the horseshoe bat coronavirus RaTG13. The CGR-FSC method is shown to be effective for carrying out a similarity/dissimilarity analysis of DNA sequences.
### Declaration of competing interest
The author declares no conflict of interest.
### Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
### Reference:
[1] Wang, Dawei, et al. \"Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in Wuhan, China.\" _Jama_ 323.11 (2020): 1061-1069.
[2] Guan, Wei-jie, et al. \"Clinical characteristics of coronavirus disease 2019 in China.\" _New England journal of medicine_ 382.18 (2020): 1708-1720.
[3] Hoffmann, Markus, et al. \"SARS-CoV-2 cell entry depends on ACE2 and TMPRSS2 and is blocked by a clinically proven protease inhibitor.\" _cell_ 181.2 (2020): 271-280.
[4] Su, Shuo, et al. \"Epidemiology, genetic recombination, and pathogenesis of coronaviruses.\" _Trends in microbiology_ 24.6 (2016): 490-502.
[5] Chen, Bin, et al. \"Overview of lethal human coronaviruses.\" _Signal transduction and targeted therapy_ 5.1 (2020): 1-16.
[6] Koyama, Takahiko, Daniel Platt, and Laxmi Parida. \"Variant analysis of SARS-CoV-2 genomes.\" _Bulletin of the World Health Organization_ 98.7 (2020): 495.
[7] Holden, Todd, et al. \"ATCG nucleotide fluctuation of Deinococcus radiodurans radiation genes.\" _Instruments, Methods, and Missions for Astrobiology X._ Vol. 6694. International Society for Optics and Photonics, 2007.
* [8] Jeffrey, H. Joel. \"Chaos game representation of gene structure.\" _Nucleic acids research_ 18.8 (1990): 2163-2170.
* [9] Breslauer, Kenneth J., et al. \"Predicting DNA duplex stability from the base sequence.\" _Proceedings of the National Academy of Sciences_ 83.11 (1986): 3746-3750.
* [10] Voss, Richard F. \"Evolution of long-range fractal correlations and 1/f noise in DNA base sequences.\" _Physical review letters_ 68.25 (1992): 3805.
* [11] Peng, C-K., et al. \"Long-range correlations in nucleotide sequences.\" _Nature_ 356.6365 (1992): 168-170.
* [12] Hamori, Eugene, and John Ruskin. \"H curves, a novel method of representation of nucleotide series especially suited for long DNA sequences.\" _Journal of Biological Chemistry_ 258.2 (1983): 1318-1327.
* [13] Zhang, Ren, and Chun-Ting Zhang. "Z curves, an intuitive tool for visualizing and analyzing the DNA sequences." _Journal of Biomolecular Structure and Dynamics_ 11.4 (1994): 767-782.
* [14] Bielinska-Waz, Dorota, and Piotr Waz. \"Non-standard bioinformatics characterization of SARS-CoV-2.\" _Computers in biology and medicine_ 131 (2021): 104247.
* [15] Bielinska-Waz, Dorota. \"Graphical and numerical representations of DNA sequences: statistical aspects of similarity.\" _Journal of mathematical chemistry_ 49.10 (2011): 2345.
* [16] Yu, Ning, Zhihua Li, and Zeng Yu. \"Survey on encoding schemes for genomic data representation and feature learning--from signal processing to machine learning.\" _Big Data Mining and Analytics_ 1.3 (2018): 191-210.
[17] Hoang, Tung, Changchuan Yin, and Stephen S-T. Yau. \"Numerical encoding of DNA sequences by chaos game representation with application in similarity comparison.\" _Genomics_ 108.3-4 (2016): 134-142.
* [18] Basu, Soumalee, et al. \"Chaos game representation of proteins.\" _Journal of Molecular Graphics and Modelling_ 15.5 (1997): 279-289.
* [19] Fiser, Andras, Gabor E. Tusnady, and Istvan Simon. \"Chaos game representation of protein structures.\" _Journal of molecular graphics_ 12.4 (1994): 302-304.
* [20] Sengupta, Dipendra C., et al. \"Similarity Studies of Corona Viruses through Chaos Game Representation.\" _Computational molecular bioscience_ 10.3 (2020): 61.
* [21] Barbosa, Raquel de M., and Marcelo AC Fernandes. \"Chaos game representation dataset of SARS-CoV-2 genome.\" _Data in brief_ 30 (2020): 105618.
* [22] Deng, Wei, and Yihui Luan. \"Analysis of similarity/dissimilarity of DNA sequences based on chaos game representation.\" _Abstract and Applied Analysis_. Vol. 2013. Hindawi, 2013.
* [23] Mandelbrot, Benoit B., and Benoit B. Mandelbrot. _The fractal geometry of nature_. Vol. 1. New York: WH freeman, 1982.
* [24] Cover, Thomas M. _Elements of information theory_. John Wiley & Sons, 1999.
* [25] Lyapunov, Aleksandr Mikhailovich. \"The general problem of the stability of motion.\" _International journal of control_ 55.3 (1992): 531-534.
* [26] Fisher, Ronald Aylmer. \"Theory of statistical estimation.\" _Mathematical Proceedings of the Cambridge Philosophical Society_. Vol. 22. No. 5. Cambridge University Press, 1925.
[27] Frieden, B. Roy. \"Fisher information, disorder, and the equilibrium distributions of physics.\" _Physical Review A_ 41.8 (1990): 4265.
[28] Martin, M. T., J. Perez, and A. Plastino. \"Fisher information and nonlinear dynamics.\" _Physica A: Statistical Mechanics and its Applications_ 291.1-4 (2001): 523-532.
[29] Dembo, Amir, Thomas M. Cover, and Joy A. Thomas. \"Information theoretic inequalities.\" _IEEE Transactions on Information theory_ 37.6 (1991): 1501-1518.
[30] Vignat, Christophe, and J-F. Bercher. \"Analysis of signals in the Fisher-Shannon information plane.\" _Physics Letters A_ 312.1-2 (2003): 27-33.
[31] Guignard, Fabian, et al. \"Advanced analysis of temporal data using Fisher-Shannon information: theoretical development and application in geosciences.\" _arXiv preprint arXiv:1912.02452_ (2019).
[32] Amato, Federico, et al. \"Spatio-temporal evolution of global surface temperature distributions.\" _Proceedings of the 10th International Conference on Climate Informatics_. 2020.
[33] da Silva, Antonio Samuel Alves, et al. \"Fisher Shannon analysis of drought/wetness episodes along a rainfall gradient in Northeast Brazil.\" _International Journal of Climatology_ 41 (2021): E2097-E2110.
[34] Zhou, Peng, et al. \"A pneumonia outbreak associated with a new coronavirus of probable bat origin.\" _Nature_ 579.7798 (2020): 270-273.
[35] Charif D, Lobry J (2007). \"SeqinR 1.0-2: a contributed package to the R project for statistical computing devoted to biological sequences retrieval and analysis.\" In Bastolla U, Porto M, Roman H, Vendruscolo M (eds.), _Structural approaches to sequence evolution:Molecules, networks, populations_, series Biological and Medical Physics, Biomedical Engineering, 207-232. Springer Verlag, New York. ISBN : 978-3-540-35305-8.
* [36] R Core Team (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL [https://www.R-project.org/](https://www.R-project.org/).
* [37] Lochel H, Eger D, Sperlea T, Heider D (2019). \"Deep learning on chaos game representation for proteins.\" Bioinformatics. doi: 10.1093/bioinformatics/btz493. | From its first emergence in Wuhan, China in December, 2019 the COVID-19 pandemic has caused unprecedented health crisis throughout the world. The novel coronavirus disease is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which belongs to the coronavirusifiable family. In this paper, a comparative genomic analysis of eight coronaviruses namely Human coronavirus OC43 (HCoV-OC43), Human coronavirus HKU1 (HCoV-HKU1), Human coronavirus 229E (HCoV-229E), Human coronavirus NL63 (HCoV-NL63), Severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome-related coronavirus (MERS-CoV), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and Bat coronavirus RaTG13 has been carried out using Chaos Game Representation and Fisher-Shannon Complexity (CGR-FSC) measure. Chaos Game Representation (CGR) is a unique alignment-free method to visualize one dimensional DNA sequence in a two-dimensional fractal-like pattern. The two-dimensional CGR pattern is then quantified by Fisher-Shannon Complexity (FSC) measure. The CGR-FSC can effectively identify the viruses uniquely and their similarity/dissimilarity can be revealed in the Fisher-Shannon Information Plane (FSIP).
**Keywords:** Chaos Game Representation; DNA sequence; coronavirus; Fisher Information; Shannon entropy | Give a concise overview of the text below. | 290 |
# Analysis of Land-Use Change between 2012-2018 in Europe in Terms of Sustainable Development
Piotr Gibas and Agnieszka Majorek

Department of Spatial and Environmental Economics, University of Economics in Katowice, 40-287 Katowice, Poland

Correspondence: [email protected]

Received: 18 December 2019; Accepted: 6 February 2020; Published: 8 February 2020
## 1 Introduction
The concept of sustainable development is one of the doctrines of economics and assumes that, \"it meets the needs of the present without compromising the ability of future generations to meet their own needs\" [1]. This term was originally used to describe forest management (i.e., rational logging) so that forests could always be restored. Today, the term sustainable development is well known, much more widely understood, and constitutes an important element of international law (e.g., Action Programme--Agenda 21, Leipzig Charter on Sustainable European Cities). It refers to the balance between economic growth (economic aspect), care for nature (environmental aspect), and quality of life (social aspect).
Initially a narrow term related only to spatial development (issues of logging and forest restoration), sustainable development now covers a much wider spectrum of activities (e.g., reduction of social stratification, reduction of pollutant emissions, and shaping of spatial order). The concept of sustainable development has led to the creation of a number of development models (mainly for cities), including the eco-city [2; 3; 4], green city [5; 6; 7], compact city [8; 9; 10], redesigning a city [11; 12], and smart city [13; 14; 15] or MILU (multi-functional and intensive land use) [16; 17]. Although these models apply only to cities, it should be remembered that according to the European Commission's estimates, approximately 85% of Europeans live in urban areas [18].
In Europe, as well as in other parts of the world, climate change and accelerating urbanization contribute to the introduction of rational management of space, whose finite character is more and more visible. That is why emphasis was placed on the aspect of organizing spatial structures. This study presents the methodology of assessment of land-use changes in terms of sustainable development using the example of most European countries. Analysis of the cited model development concepts allows the authors to better understand the effects of specific spatial transformations and, consequently, to assess them. A sustainable city has an orderly functional and spatial structure and aims to make the most efficient use of its resources, including [19; 20]:
* Housing compaction (urban sprawl prevention) in mixed land use;
* Revitalization of contaminated and dysfunctional areas;
* Development of urban green areas and upgrading the quality of natural areas;
* Minimizing negative impacts on the environment by respecting the local community and taking economics into account.
Given the importance of functional and spatial issues in the definition of sustainable development, it is considered appropriate to develop assessment methods based precisely on a spatial factor. Land use changes tangibly indicate development directions and space management. By interpreting transformations in the time horizon, information about trends of change can be obtained, which is a valuable guideline for further policy development. The aim of this article is to present a new view on land use change analysis. Tracking changes taking place in space is a collection of certain guidelines referring to the directions of development of regions in the long term. This paper classifies all specific transformations into frameworks related to sustainable development.
## 2 Materials and Methods
### Land Use Reference Data
Data from CORINE Land Cover program was used for the analysis of land use changes. The program, established in 1985 by the European Community, aimed at collecting harmonized information on the condition of the geographical environment and at coordinating work at an international level, thereby ensuring the consistency of the information and the compatibility of data collected. Data are now available for all of Europe for the years 1990, 2000, 2006, 2012, and 2018. For some countries (including Poland), CORINE land cover data is the only database on land use that covers the whole country, is regularly updated, and is prepared in line with uniform principles [21].
To present a method of assessing transformations in terms of sustainable development, we used the land cover/use map in 2012 and 2018. These were based on satellite images of a certain resolution. Therefore, these data, although generalized to objects with a minimum area of 25 ha (minimum width of 100 m), are a reliable source of information, used in academia for many analyses [22; 23; 24].
The CORINE land cover classes (CLC) are hierarchically organized in three levels. The first one covers five main types of land use and land cover of the globe: Artificial surfaces (1), agricultural areas (2), forest and semi natural areas (3), wetlands (4), and water bodies (5). The second level comprises fifteen divisions (for example: 11 urban fabric or 21 arable land). The third level covers 44 classes (e.g., 111 continuous urban fabric, 112 discontinuous urban fabric or 242 complex cultivation patterns). It should be noted that the methodological scope of individual Level 3 classes is strictly defined [25], and so, for example, class 242 includes both small, adjacent plots of land used to cultivate various crops (both annual and perennial), as well as small meadows and pastures. It also covers the areas of scattered housing development (including house clusters and entire villages) with homestead adjacent lands and home orchards and gardens [26].
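Because the codes are hierarchical, each level-3 class implies its level-2 division and level-1 type. The toy sketch below illustrates this with the level-1 labels quoted above; it is an illustration only, not the full CLC nomenclature.

```python
# Toy sketch of the three-level CLC code hierarchy described above.
LEVEL1 = {"1": "Artificial surfaces", "2": "Agricultural areas",
          "3": "Forest and semi natural areas", "4": "Wetlands", "5": "Water bodies"}

def clc_hierarchy(code3):
    """Derive the level-1 type and level-2 division from a level-3 CLC code."""
    return {"level1": LEVEL1[code3[0]], "level2": code3[:2], "level3": code3}

print(clc_hierarchy("242"))  # agricultural areas -> division 24 -> class 242
```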
CLC data are used in many analyses, including urban growth monitoring and urban sprawl comparisons between different countries, regions, and cities [27; 28], land use forecasts [29] or modelling of road travel speeds [30]. The land recycling report is also an interesting example [31]. The analysis classifies each land use change, which is then combined into the following indicators: Densification; green land recycling; and grey land recycling. This publication report became an inspiration for the research described above.
### Method--Assessment Matrices, DEGURBA Classification and Methodology for Obtaining Results
To develop the assessment matrices of land use transformation, each of the possible transformations (44 classes) in three dimensions, namely economic (Ec), social (So), and environmental (En) were analyzed. When evaluating a given class change, values from \\(-3\\) (very negative impact on sustainable development) to \\(+3\\) (very positive impact) were assigned. The value 0 was introduced for the transformations in which no impact on sustainable development in a given aspect was found. The assessment of the transformations is presented on the matrices at the end of the article (Appendix A. Figures A1-A3).
When analyzing the economic aspect (Ec), both the classes related to industry and transport (anthropogenic areas, classes 121-133), as well as agriculture, forestry, and salt-works were taken into account. Transformation into more specialized areas was rated higher, while losses in expensive classes (e.g., ports, airports) were rated much lower. The social aspect (So) was interpreted as transformations related to housing (classes 111 and 112), urban greenery, recreation areas (classes 141 and 142), and complex farming and land parcel systems (class 242). Striving for compact urban structures (concentration) and/or increasing recreational areas was assessed positively. On the other hand, changes related to the loss of heavily invested areas (e.g., scattered housing) were assessed negatively. The environmental aspect (En) includes transformations of classes dominated by nature, on which man (compared to others) has a low impact (e.g., meadows, forests, semi-natural ecosystems, wetlands and water bodies). Transformations into areas of higher biodiversity were assessed positively, while the loss of valuable natural areas into desolate and homogeneous areas was assessed negatively.
The matrices developed this way were then used to assess the changes that occurred in Europe in the years 2012-2018 (373,000 study areas). Additionally, the analysis area was trimmed (intersection option in QGIS version 2.14, which gave 412,600 study areas) to the EU classification presenting the degree of urbanization (DEGURBA), i.e., the division of areas into basic units of national administrative systems according to population density and their function, where class 1 is key cities, class 2 is small towns and suburbs, and class 3 is rural areas [32; 33]. The change test areas trimmed this way (in the ETRS89/ETRS-LAEA reference system, EPSG: 3035) were subjected to the process of determining the surface area, and the result of this operation was used to determine the percentage of the surface area of each basic DEGURBA unit with changed CLC. This percentage was then multiplied by change weights standardized to the [-1, 1] range. As a result of these transformations, weighted percentages of surface area changes were obtained for 2839 major cities, 8022 small towns and suburbs, and 39,076 rural areas in the three dimensions: economic (Ec), social (So), and environmental (En).
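The weighting and aggregation steps described above can be sketched programmatically. The snippet below is a minimal, hypothetical Python/GeoPandas version of the intersection and weighted-percentage calculation; the column names, the single example weight entry, and the precomputed unit areas are illustrative assumptions, with the full assessment matrices given in Appendix A.

```python
# Hypothetical sketch: weighted percentage of changed area per DEGURBA unit.
import geopandas as gpd

# weights[(clc_2012, clc_2018)] -> (Ec, So, En), already standardized to [-1, 1];
# a single illustrative entry is shown here.
WEIGHTS = {("242", "112"): (0.33, 0.67, -0.33)}

def weighted_change(changes, degurba, dim):
    """changes: CLC change polygons; degurba: units with unit_id and unit_area."""
    parts = gpd.overlay(changes, degurba, how="intersection")   # clip to units
    parts["area"] = parts.geometry.area                         # m^2 in EPSG:3035
    idx = {"Ec": 0, "So": 1, "En": 2}[dim]
    parts["w"] = [WEIGHTS.get((a, b), (0.0, 0.0, 0.0))[idx]
                  for a, b in zip(parts["clc_2012"], parts["clc_2018"])]
    parts["score"] = 100.0 * parts["w"] * parts["area"] / parts["unit_area"]
    return parts.groupby("unit_id")["score"].sum()              # one value per unit
```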
The obtained results are described and presented in the form of a figure for nearly 50,000 basic units, showing changes assessed as positive (+Ec, +So, +En) and negative (-Ec, -So, -En), together with all intermediate options. This article also presents a commentary on the network graphs showing the average results in 35 European countries (NUTS, Nomenclature of Territorial Units for Statistics, level 0), also divided into the three basic DEGURBA classes. The country abbreviations are in line with NUTS (a slightly different set of result visualizations is presented in Appendix B. Figures A4-A7).
The analysis consisted of four stages: (1) creating transformation matrices, (2) cross classification of areas, (3) calculation of weighted changes in the field of economy, society and the environment, and (4) visualization and interpretation of results. This is schematically shown in Figure 1.

Figure 1: Methodology flow chart of the research.
## 3 Results
### Land Use Transformation Change Assessment on a Local Scale
The analyses performed show that European local and regional government units have a rather poor record of sustainability. Only 136 units scored positively in all three dimensions. Units of this type are scattered throughout Europe and do not form larger clusters. There are also relatively few basic units with a positive change in two dimensions and with no change in the third. There are 195 (+Ec and +So), 1617 (+Ec and +En), and 4 (+So and +En) respectively, while in this group a certain geographical concentration can be observed (especially in the UK, The Netherlands, and Spain). There are also visible areas that are developing in relation to one dimension while the other two remain unchanged. Such a tendency was identified in 1817 units in the scope of economic dimension, 781 in the scope of social dimension, and 372 in the scope of environmental dimension (Figure 2).
The worst score was given to 45 areas, which had negative changes in all three aspects. They are particularly visible in the northern part of Bulgaria, Northern Macedonia, and Lithuania, but such units are also found in France, Spain, and Portugal. A negative assessment for the two dimensions (the third unchanged) was given for the social and environmental dimension in 10 cases, for the economic and environmental dimension in 1749 cases, and for the social and economic dimension in 14 cases. A negative assessment in only one dimension was given to 3296 units for the environmental dimension, 6 units for the social dimension, and 527 units for the economic dimension.
However, the European space is dominated by changes that have been positively evaluated in economic terms, negatively evaluated in environmental terms, and unchanged in social terms. Such change was observed in over 28,000 DEGURBA units. Figure 2 shows a significant concentration of units with positive environmental dimensional changes (central and northern parts of Sweden and the northern part of Finland). This change takes place mainly on a European scale in 1749 units due to the shrinking of divisions assigned to the economic dimension. What can be seen in the Scandinavian space is illusory and is in fact related to the size of basic spatial units and not to the statistical significance of the changes described.
### Land Use Transformation Change Assessment--Cross-Sectional Results
Weighting the percentage of an area that has changed within large cities makes it possible to conclude that the most stable situation occurs within the changes classified in the social dimension (e.g., a relative increase in green urban areas, class 141). The yellow line (Figures 3-5) is generally close to zero and deviates slightly for CY (Cyprus), EL (Greece), and LT (Lithuania). However, the biggest changes mainly concern the economic dimension (e.g., a relative increase of fruit trees and berry plantations, class 222). The highest weighted increment in this area (by approx. 3.5) was observed in PT (Portugal) and LU (Luxembourg). Large cities in these countries are also characterized by a significant decrease in the weighted area in the environmental dimension (e.g., natural grasslands, class 321, by about 2.0), as shown in Figure 3.
However, the largest decreases in the weighted area classified in the environmental dimension were recorded in the large cities of IS (Iceland, a decrease of 5.0), EL (Greece, a decrease of 4.5), and HR (Croatia, a decrease of about 2.0). In addition, all the big cities in these countries were characterized by a negative assessment of economic changes (at the level of approximately 1.0). The smallest average changes were observed in large cities in several countries including AL (Albania), BG (Bulgaria), CZ (Czech Republic), DE (Germany), LV (Latvia), NO (Norway), and the UK (United Kingdom). Large cities (according to the DEGURBA classification) do not exist in some countries, such as LI (Liechtenstein) and MT (Northern Macedonia).
Figure 2: Sustainability assessment of local change directions.
As was the case in large cities, the smallest weighted change values in small towns and suburbs also concerned the social dimension. Generally, the change was around 0. However, small towns located in MT (Northern Macedonia), for which the weighted percentage of land has fallen to almost -1.0, stand out from this standard. There are more separated areas in these locations, which indicates that positive economic changes are occurring. This is most visible in the case of PT (Portugal), which had an increase of almost 3.0, and EE (Estonia), with an increase of almost 2.5. These changes take place mainly at the expense of space predisposed to development in the environmental dimension. Small towns in these countries recorded a decrease in the weighted mean of these areas by about 2.5 (Figure 4).
The situation was slightly different for small towns and suburbs in EL (Greece), IS (Iceland), and MT (Northern Macedonia), which recorded an average economic growth of about 1.5 (EL) and 1.0 (the latter two) respectively, with an almost unchanged weighted percentage for the environmental dimension of 0. This may mean that new economic activities are developing mainly in areas where environmental significance in the survey was assessed as relatively low. The average spreads by country are slightly higher in this cross-section than in the case of large cities, with AL (Albania), DE (Germany), DK (Denmark), and LT (Lithuania) being the most stable.
The changes in rural areas on a pan-European scale in the weighted mean of assessment for the social dimension were very stable (at around 0). However, this does not apply to the changes in the economic and environmental dimensions. The biggest discrepancies in this respect were found in PT, where there was an increase in the economic dimension slightly above 2.0 with a simultaneous decrease in the environmental dimension at the level of nearly 3.0. In EE, there was an increase in the economic dimension of almost 1.5 with a decrease in the weighted mean percentage of the area for the environmental dimension by approximately 1.75. A slightly smaller discrepancy can be observed for CY, IE, LU, and LV (Figure 5).
The highest average stability was found in rural areas of countries, such as CH (Switzerland), DE, DK, IS, LT, and NL (Netherlands). The countries of Central and Eastern Europe and the Balkan countries including BG, CZ, HR, HU (Hungary), PL, RO, SI (Slovenia), and SK (Slovakia) compensate for economic growth with an almost proportional decline in the areas considered important for environmental sustainability.
Figure 3: Sustainability assessment of local change directions in big cities.
## 4 Discussion
Research on sustainable development focuses on defining the scope of this concept and on identifying measures used to assess it. In all works, it is emphasized that sustainable development is multidimensional, and the proposed indicators are an attempt to combine its dimensions into a measurable set of assessments [34; 35; 36; 37]. Publications usually focus on a certain aspect of sustainable development (e.g., economic and industry [38; 39; 40], social and environmental [41; 42; 43], or culture [44; 45]). Research aimed at assessing sustainable development is carried out on different scales, from local [46; 47], to regional, to national [48; 49], and to studies covering international comparisons [50; 51; 52].

Figure 4: Sustainability assessment of local change directions in small towns and suburbs.

Figure 5: Sustainability assessment of local change directions in rural areas.
In the context of rational land development, our research emphasized an ordered functional and spatial structure which effectively uses the existing resources [53; 54]. However, this is impossible without a number of actions aimed at the concentration of housing development (preventing urban sprawl) and mixed land use, as well as using rehabilitated and revitalized areas [55; 56; 57]. Managing changes with respect to economics should minimize the negative impact on the environment by acting with respect for the local community while taking the economy into account. However, this is not a simple task [58; 59]. It is also worth noting that development is not complete without blue-green infrastructure that lays the foundation for biological life in a specific area [60; 61; 62]. The development of urban greenery and the improvement of the quality of natural areas contributes to the preservation of biodiversity and can significantly mitigate climate change [63; 64]. Observing the direction and intensity of changes in this respect [65; 66] can be considered not as an intellectual adventure, but rather as a duty of all actors shaping the future of spatial units, including the local communities and other groups inhabiting them.
Research on sustainable development using digital maps (including CLC) is relatively rare, e.g., [67; 68; 69; 70]. Such studies focus mainly on specific issues such as deforestation [53], assessment of the state of the environment [71; 72], or assessment of the dynamics of spatial transformations [73; 74]. Our approach is different. It is not based on showing the changes themselves or on the search for a quantitative relationship between changes in land use and statistical data, as is the case in the cited works. Our research is based on the methodology developed for land recycling in Europe [31]. Although this approach was developed for a different research area, it is not without flaws. There were two main flaws: the authors' assignment of weights to the observed changes, and taking the elements connecting land use and land cover as a basis for analysis while using relatively generalized objects with a minimum area of 25 ha (minimum width of 100 m) [25]. Weighting can be objectified by using an expert method. However, it is more difficult to limit the impact of CLC methodological assumptions on the results achieved - it would require using Urban Atlas data or reference data at the national level, and standardizing methodology across the continent. This is currently not possible. Slightly less important are the possible spatial errors related to the CLC database (especially the uncertainties related to the accuracy of the interpretation of the satellite images), which can also affect the results obtained. On the other hand, the presented method allows research to be conducted where public statistics have been generalized to quasi-natural units or are difficult to compare during panel research. An unquestionable advantage of this method is the ability to generate results for a large area in quasi-natural units or in an analytical grid with a selected resolution, as well as the ability to generate time series (for individual CLC editions) with a relatively uniform methodological basis, which encourages further analysis following this route.
The research presented in this article fills the research gap regarding the relationship between spatial transformations and the assessment of sustainable development. A minimum necessary condition for sustainability is the maintenance of the total natural capital stock at or above the current level, which also includes land resources. Basic sustainable development strategies are based on sufficiency and efficiency, guided by the transformations calculated in this article on the basis of CORINE data.
## 5 Conclusions
This article presents the results of the application of the authors' assessment method of land-use change that occurred in the European space in terms of sustainable development. This method is an integrated approach to studying the direction and intensity of changes taking place in the economic, social, and environmental dimensions of this process. Basing the method on assessment matrices (used to construct weights) and territorial units presenting the degree of urbanization (DEGURBA) allowed for observation of the following trends in the 2012-2018 time horizon:

* Development that can be considered sustainable (in terms of land use change) was observed in a relatively small number of basic territorial units of the countries concerned. Territorial units perceive development in terms of economic rather than social change, despite declarative intentions to focus on the latter aspect. In addition, this development often comes at the expense of sound management of spatial and environmental resources, such as blue-green infrastructure. The higher the population density and the more important the function in the functional system of a given country, the greater the differentiation of the weighted mean of the area determined within the described dimensions. The lowest diversity was in the social dimension and the highest was in the economic dimension. The economic dimension is often shaped at the expense of the environmental dimension. The smaller the population density and the lower the importance of the unit, the more often this type of situation was observed. It can therefore be concluded that large cities are growing faster, and that rural development is less sustainable.
* In Europe, significant concentration of areas with similar statistical characteristics of the weighted percentage of area in the described dimensions of sustainable development was relatively rare. However, there are indications that Portugal (PT), Luxembourg (LU), and Estonia (EE) are the countries with the greatest recent asymmetries in sustainable development. The countries with the least asymmetry are Albania (AL) and Germany (DE). The countries of Central and Eastern Europe and the Balkans compensate for economic growth at the expense of the areas considered important in social terms.
It is important to repeat the survey in the remaining time frames (at least for the 2000-2006 and 2006-2012 periods) in order to validate these results. This would allow the method to be tested against a slightly different spatial range (during these periods the DEGURBA classification, among others, was changed) and to generate and interpret information on the stability of the observed change trends. Such results could be both cognitively valuable and empirically beneficial for further development policy at the local, regional, national, and international levels.
Conceptualization, P.G. and A.M.; methodology, P.G. and A.M.; software, P.G.; validation, P.G.; formal analysis, P.G. and A.M.; investigation, P.G. and A.M.; resources, P.G. and A.M.; data curation, P.G. and A.M.; writing--original draft preparation, P.G. and A.M.; writing--review and editing, P.G. and A.M.; visualization, P.G.; supervision, P.G.; project administration, P.G. and A.M.; funding acquisition, P.G. and A.M. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
The authors declare no conflict of interest.
Figure A2: Land use change assessment matrix in the social aspect.
Figure A3: Land use change assessment matrix in the environmental aspect.
## Appendix B

Figure A5: Sustainability assessment of local change directions by country - DEGURBA level 1.
Figure A6: Sustainability assessment of local change directions by country - DEGURBA level 2.
Figure A7: Sustainability assessment of local change directions by country - DEGURBA level 3.
## References
* (1) The World Commission on Environment and Development. _Report of the World Commission on Environment and Development: Our Common Future_; Oxford University Press: Oxford, UK, 1987; p. 16.
* (2) Yu, C.; Dijkema, G.P.K.; De Jong, M.; Shi, H. From an eco-industrial park towards an eco-city: A case study in Suzhou, China. _J. Clean. Prod._**2015**, _102_, 264-274. [CrossRef]
* (3) Roseland, M. Dimensions of the eco-city. _Cities_**1997**, _14_, 198-202. [CrossRef]
* (4) Kenworthy, J.R. The eco-city: Ten key transport and planning dimensions for sustainable city development. _Environ. Urban._**2006**, _18_, 67-85. [CrossRef]
* (5) Melosi, M.V. The Emerald City Was Not a Green City. In _RCC Perspectives, No. 1, GREEN CITY: Explorations and Visions of Urban Sustainability_; Munich, Germany, 2018; pp. 65-72. [CrossRef]
* (6) Hulicka, A. Green City--Sustainable City. _Prof. Geogr._**2015**, _141_, 73-85.
* (7) Beatley, T. _Green Cities of Europe. Global Lessons on Green Urbanism_; Island Press: London, UK, 2012.
* (8) Abdullahi, S.; Pradhan, B.; Mansor, S.; Shariff, A.R.M. GIS-based modeling for the spatial measurement and evaluation of mixed land use development for compact city. _J. Gisci. Remote Sens._**2014**, _52_, 18-39. [CrossRef]
* (9) Majorek, A. Urbanizacja a proces rozlewania sie miast. In _Analiza Zmian i Prognoza Przyrostu Zabudowy Mieszkaniowej na Obszarze Polski do 2020 Roku_; Gibas, P., Ed.; Bogucki Wydawnictwo Naukowe: Poznan, Poland, 2017; pp. 10-17.
* (10) Dieleman, F.; Wegener, M. Compact City and Urban Sprawl. _Built Environ._**2004**, _30_, 308-323. [CrossRef]
* (11) Hanzeieha, S.; Tabibian, M. Redesigning Urban Spaces with an Emphasis on the Relationship Between the Physical Environment of the City and the Behavior of Citizens (Case Study: Adl Street in Qazvin). _Space Ontol. Int. J._**2018**, \\(7\\), 1-14.
* (12) Barnett, J. _Redesigning Cities. Principles, Practice, Implementation_; Routledge Taylor & Francis Group: London, UK; New York, NY, USA, 2017.
* (13) Hojer, M.; Wangel, J. Smart Sustainable Cities: Definition and Challenges. In _ICT Innovations for Sustainability_; Part of the Advances in Intelligent Systems and Computing Book Series (310); Hilty, L.M., Aebischer, B., Eds.; Springer: Warsaw, Poland, 2014; pp. 333-349.
* (14) Su, K.; Li, J.; Fu, H. Smart city and the applications. In Proceedings of the IEEE International Conference on Electronics, Communications and Control (IECC), Ningbo, China, 9-11 September 2011; pp. 1028-1031.
* (15) Lombardi, P.; Giordano, S.; Farouh, H.; Yousef, W. Modelling the smart city performance. _Innov. Eur. J. Soc. Sci. Res._**2012**, _25_, 137-149. [CrossRef]
* (16) Mierzejewska, L. Sustainable Development of a City: Systemic Approach. _Probl. Ekorozw. --Probl. Sustain. Dev._**2017**, _12_, 71-78.
* (17) Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M. Evaluating the compatibility of multi-functional and intensive urban land uses. _Int. J. Appl. Earth Obs. Geoinr._**2007**, \\(9\\), 375-391. [CrossRef]
* (18) Pesaresi, M.; Melchiorri, M.; Siragusa, A.; Kemper, T. _Atlas of the Human Planet--Mapping Human Presence on Earth with the Global Human Settlement Layer_; JRC103150; European Commission, DG JRC, Publications Office of the European Union: Luxembourg, 2016.
* (19) Mierzejewska, L. Sustainable development of a city: Selected theoretical frameworks, concepts and models. _Probl. Rozw. Miast_**2015**, _XII_, 5-11.
* (20) Broniewicz, E. _Gospodarowanie Przestrzenia w Warunkach Rozwoju Zrownowazonego_; Oficyna Wydawnicza Politechniki Bialostockiej: Bialystok, Poland, 2017; pp. 67-83.
* (21) CORINE Land Cover--CLC. Available online: [http://clc.gios.gov.pl/index.php/o-clc/program-clc](http://clc.gios.gov.pl/index.php/o-clc/program-clc) (accessed on 15 November 2019).
* (22) Feranec, J.; Jaffrain, G.; Soukup, T.; Hazeu, G. Determining changes and flows in European landscapes 1990-2000 using CORINE land cover data. _Appl. Geogr._**2010**, _30_, 19-35. [CrossRef]
* (23) Martinez-Fernandez, J.; Ruiz-Benito, P.; Bonet, A.; Gomez, C. Methodological variations in the production of CORINE land cover and consequences for long-term land cover change studies. The case of Spain. _Int. J. Remote Sens._**2019**, _40_, 8914-8932. [CrossRef]
* (24) Kuscisa, G.; Popovici, E.A.; Balteanu, D.; Grigorescu, I.; Dumitrascu, M.; Mitrica, B. Future land use/cover changes in Romania: Regional simulations based on CLUE-S model and CORINE land cover database. _Landsc. Ecol. Eng._**2019**, _15_, 75-90. [CrossRef]
* (25) CORINE Land Cover Technical Guide--Addendum 2000, Technical Report No 40/2000. Available online: [https://www.eea.europa.eu/publications/tech40add](https://www.eea.europa.eu/publications/tech40add) (accessed on 3 December 2019).
* (26) Rysz, K. Zakres pojeciowy kategorii pokrycia i uzytkowania ziemi stosowany w programie CORINE. In _Analiza Zmian i Prognoza Przyrostu Zabudowy Mieszkaniowej na Obszarze Polski do 2020 Roku_; Gibas, P., Ed.; Bogucki Wydawnictwo Naukowe: Poznan, Poland, 2017; pp. 31-35.
* (27) Urban Sprawl in Europe--The Ignored Challenge. In _EEA Report, No 10/2006_; Publications Office of the European Union: Luxembourg, 2006.
* (28) Urban Sprawl in Europe--Joint EEA-FOEN Report. In _EEA Report, No 11/2016_; Publications Office of the European Union: Luxembourg, 2016.
* (29) Gibas, P. _Analiza Zmian i Prognoza Przyrostu Zabudowy Mieszkaniowej na Obszarze Polski do 2020 Roku_; Bogucki Wydawnictwo Naukowe: Poznan, Poland, 2017.
* (30) Sleszynski, P. Expected traffic speed in Poland using Corine land cover, SRTM-3 and detailed population places data. _J. Maps_**2015**, _11_, 245-254. [CrossRef]
* (31) Land Recycling in Europe. Approaches to Measuring Extent and Impacts. In _EEA Report, No 31/2016_; Publications Office of the European Union: Luxembourg, 2016.
* (32) Degree of Urbanization (DEGURBA). Available online: [https://ec.europa.eu/eurostat/web/degree-of-urbanisation/background](https://ec.europa.eu/eurostat/web/degree-of-urbanisation/background) (accessed on 3 December 2019).
* (33) RAMON--Reference and Management of Nomenclatures. Available online: [https://ec.europa.eu/eurostat/ramon/miscellaneous/index.cfm?TargetUrl=DSP_DEGURBA](https://ec.europa.eu/eurostat/ramon/miscellaneous/index.cfm?TargetUrl=DSP_DEGURBA) (accessed on 3 December 2019).
* (34) Rennings, K.; Wiggering, H. Steps towards indicators of sustainable development: Linking economic and ecological concepts. _Ecol. Econ._**1997**, _20_, 25-36. [CrossRef]
* (35) Kuik, O.; Verbruggen, H. _In Search of Indicators of Sustainable Development_; Media, B.V., Ed.; Springer--Science + Business: Dordrecht, The Netherlands, 2012.
* (36) Hak, T.; Moldan, B.; Dahl, A.L. _Sustainability Indicators: A Scientific Assessment_; Scientific Committee on Problems of the Environment (SCOPE): Washington, DC, USA, 2007.
* (37) Steurer, R.; Hametner, M. Objectives and Indicators in Sustainable Development Strategies: Similarities and Variances across Europe. _Sustain. Dev._**2013**, _21_, 224-241. [CrossRef]
* (38) Archibugi, F.; Nijkamp, P. _Economy and Ecology: Towards Sustainable Development_; Springer: Berlin/Heidelberg, Germany, 1989.
* (39) Azapagic, A.; Perdan, S. Indicators of Sustainable Development for Industry: A General Framework. _Process Saf. Environ. Prot._**2000**, _78_, 243-261. [CrossRef]
* (40) Wallner, H.P. Towards sustainable development of industry: Networking, complexity and eco-clusters. _J. Clean. Prot._**1999**, \\(7\\), 49-58. [CrossRef]
* (41) Dempsey, N.; Bramley, G.; Power, S.; Brown, C. The social dimension of sustainable development: Defining urban social sustainability. _Sustain. Dev._**2011**, _19_, 289-300. [CrossRef]
* (42) Clini, C.; Musu, I.; Gullina, M.L. _Sustainable Development and Environmental Management_; Springer: Berlin/Heidelberg, Germany, 2008.
* (43) Huber, J. Towards industrial ecology: Sustainable development as a concept of ecological modernization. _J. Environ. Policy Plan._**2000**, \\(2\\), 269-285. [CrossRef]
* (44) Dessein, J.; Soni, K.; Fairclough, G.; Horlings, L. _Culture in, for and as Sustainable Development. Conclusions from COST Action IS1007 Investigating Cultural Sustainability_; University of Jyvaskyla: Jyvaskyla, Finland, 2015.
* (45) Tweed, C.; Sutherland, M. Built cultural heritage and sustainable urban development. _Landsc. Urban Plan._**2007**, _83_, 62-69. [CrossRef]
* (46) Oduwaye, L. Challenges of Sustainable Physical Planning and Development in Metropolitan Lagos. _J. Sustain. Dev._**2009**, \\(2\\), 159-171. [CrossRef]
* (47) Ianos, I.; Peptenatu, D.; Pintilii, R.D.; Draghici, C. About sustainable development of the territorial emergent structures from the metropolitan area of Bucharest. _Environ. Eng. Manag. J._**2012**, _11_, 1535-1545. [CrossRef]
* (48) Xie, K.; Li, W.; Zhao, W. Coal chemical industry and its sustainable development in China. _Energy_**2010**, _35_, 4349-4355. [CrossRef]
* (49) Nourry, M. Measuring sustainable development: Some empirical evidence for France from eight alternative indicators. _Ecol. Econ._**2008**, _67_, 441-456. [CrossRef]
* (50) Krueger, R.; Gibbs, D. _The Sustainable Development Paradox. Urban Political Economy in the United States and Europe_; The Guilford Press: New York, NY, USA; London, UK, 2007.
* (51) Bertram, G. "Sustainable development" in Pacific micro-economies. _World Dev._**1986**, _14_, 809-822. [CrossRef]
* (52) Pearce, D.; Barbier, E.; Markandya, A. _Sustainable Development. Economic and Environment in the Third World_; Edward Elgar Publishing Limited: London, UK; Washington, DC, USA, 1990.
* (53) Pen, J.; Ma, J.; Du, Y.; Zhang, L.; Hu, X. Ecological suitability evaluation for mountainous area development based on conceptual model of landscape structure, function, and dynamics. _Ecol. Indic._**2016**, _61_, 500-511.
* (54) Yang, J.; Li, S.; Lu, H. Quantitative influence of land-use changes and urban expansion intensity on landscape pattern in Qingdao, China: Implications for urban sustainability. _Sustainability_**2019**, _11_, 6174. [CrossRef]
* (55) Analysing and Managing Urban Growth--European Environment Agency. EEA. 2011. Available online: [http://www.eea.europa.eu/articles/analysing-and-managing-urban-growth](http://www.eea.europa.eu/articles/analysing-and-managing-urban-growth) (accessed on 3 December 2019).
* (56) Amer, M.; Reiter, S.; Attia, S. Urban densification through roof stacking: Case study. _Eur. Netw. Hous. Res._**2018**, _97_.
* (57) Brandon, E.J. The Rehabilitation of Contaminated Land to Enhance Future Options for Cities. In _International Yearbook of Soil Law and Policy 2018_; Ginzky, H., Dooley, E., Heuser, I., Kasimbazi, E., Markus, T., Qin, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2018. [CrossRef]
* (58) Harvanova, J. Selected aspects of integrated environmental management. _Ann. Agric. Environ. Med._**2018**, _25_, 403-408.
* (59) Friedl, C.; Reichl, J. Realizing energy infrastructure projects--A qualitative empirical analysis of local practices to address social acceptance. _Energy Policy_**2016**, _89_, 184-193. [CrossRef]
* (60) Pelorosso, R.; Gobattoni, F.; La Rosa, D.; Leone, A. Ecosystem Services Based Planning and Design of Urban Green Infrastructure for Sustainable Cities. In XVIII Conferenza Nazionale SIU--Societa Italiana degli Urbanisti. 2016. Available online: [https://www.researchgate.net/profile/R_Pelorosso/publication/299411561_Ecosystem_Services_based_planning_and_design_of_Urban_Green_Infrastructure_for_sustainable_cities/links/56f5090d08ae81582bf14c33/Ecosystem-Services-based-planning-and-design-of-Urban-Green-Infrastructure-for-sustainable-cities.pdf](https://www.researchgate.net/profile/R_Pelorosso/publication/299411561_Ecosystem_Services_based_planning_and_design_of_Urban_Green_Infrastructure_for_sustainable_cities/links/56f5090d08ae81582bf14c33/Ecosystem-Services-based-planning-and-design-of-Urban-Green-Infrastructure-for-sustainable-cities.pdf) (accessed on 3 December 2019).
* (61) Jerome, G. Defining community-scale green infrastructure. _Landsc. Res._**2017**, _42_, 223-229. [CrossRef]
* (62) Nilsson, K.; Slatmo, E.; Turunen, E. GREEN INFRASTRUCTURE--strategic land use for well-being, business and biodiversity. In _Report of Nordic Council of Ministers_; Nordregio: Stockholm, Sweden, 2019. [CrossRef]
* (63) Liu, J.; Li, J.; Qin, K.; Zhou, Z.; Yang, X.; Li, T. Changes in land-uses and ecosystem services under multi-scenarios simulation. _Sci. Total Environ._**2017**, _586_, 522-526. [CrossRef]
* (64) Stache, E.; Jonkers, H.; Ottele, M. Integration of Ecosystem Services in the Structure of the City is Essential for Urban Sustainability. In _Ecological Wisdom Inspired Restoration Engineering_; Achal, V., Mukherjee, A., Eds.; EcoWISE (Innovative Approaches to Socio-Ecological Sustainability); Springer: Singapore, 2019; pp. 131-150. [CrossRef]
* (65) Du, H.; Liu, D.; Lu, Z.; Crittenden, J.; Mao, G.; Wang, S.; Zou, H. Research Development on Sustainable Urban Infrastructure From 1991 to 2017: A Bibliometric Analysis to Inform Future Innovations. _Earth's Future_**2019**, _7_, 718-733. [CrossRef]
* (66) Wu, Q.; Hao, J.; Yu, Y.; Liu, J.; Li, P.; Shi, Z.; Zheng, T. The way forward confronting eco-environmental challenges during land-use practices: A bibliometric analysis. _Environ. Sci. Pollut. Res._**2018**, _25_, 28296-28311. [CrossRef]
* (67) Castanho, R.A.; Naranjo Gomez, J.M.; Kurowska-Pysz, J. Assessing Land Use Changes in Polish Territories: Patterns, Directions and Socioeconomic Impacts on Territorial Management. _Sustainability_**2019**, _11_, 1354. [CrossRef]
* (68) Mezosi, G.; Meyer, B.C.; Bata, T.; Kovacs, F.; Czucz, B.; Ladanyi, Z.; Blanka, V. Integrated Approach to Estimate Land Use Intensity for Hungary. _J. Environ. Geogr._**2019**, _12_, 45-52. [CrossRef]
* (69) Salata, S. Land use change analysis in the urban region of Milan. _Manag. Environ. Qual. Int. J._**2017**, _28_, 879-901. [CrossRef]
* (70) Sandru, M.I.V.; Iatu, C.; Sandru, D.C.; Cimbru, D.G. Approaching Land Cover-Land Use Changes Using Statistical Data Validation for Urban Policies Improvement. _J. Settl. Spat. Plan._**2017**, _8_, 119-129.
* (71) Petr, A.I. Using CORINE data to look at deforestation in Romania: Distribution & possible consequences. _INCD URBAN-INCERC Urbanism. Arhitectura. Constructii_**2015**, _2015_, 83-90.
* (72) Gardi, C.; Bosco, C.; Rusco, E.; Montanarella, L. An analysis of the Land Use Sustainability Index (LUSI) at territorial scale based on Corine Land Cover. _Manag. Environ. Qual._**2010**, _21_, 680-694. [CrossRef]
* (73) Radovic, A.; Bukovec, D.; Tvrtkovic, N.; Tepic, N. Corine land cover changes during the period 1990-2000 in the most important areas for birds in Croatia. _Int. J. Sustain. Dev. World Ecol._**2011**, _18_, 341-348. [CrossRef]
* (74) Feranec, J.; Soukup, T.; Hazeu, G.; Jaffrain, G. _European Landscape Dynamics: CORINE Land Cover Data_; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2016.
# A new global gridded glacier dataset based on the Randolph Glacier Inventory version 6.0
Yaojun Li, Fei Li, Donghui Shangguan and Yongjian Ding

1State Key Laboratory of Cryospheric Science, Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Lanzhou, China; 2Key Laboratory of Ecohydrology of Inland River Basin, Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Lanzhou, China; 3University of Chinese Academy of Sciences, Beijing, China and 4Key Laboratory of Tibetan Environment Change and Land Surface Processes, Institute of Tibetan Plateau Research, Chinese Academy of Sciences, Beijing, China
## Introduction
Since the release of its first version in February 2012, the Randolph Glacier Inventory (RGI) has been a centerpiece of glaciological research and successfully applied in a range of glacier change studies, as well as in impact assessments of glacial change at global and regional scales (Pfeffer and others, 2014; Hock and others, 2019). The latest version (6.0) of the RGI was released in July 2017 (RGI Consortium, 2017). In it, glacier outlines and their accompanying attributes are provided as shapefiles (a vector format). In addition, a gridded glacier map is supplied at 0.5° spatial resolution, in which zonal records of glacierized areas (in km\({}^{2}\)) are stored in a plain-text .DAT file (RGI Consortium, 2017). However, there are significant differences in total glacierized area between the gridded and the shapefile data in RGI 6.0, at both global and regional scales (Table 1).
Climate change studies frequently rely on gridded datasets, which are easy to use and comprise an effective tool for various glaciological and climatic applications, including, but not limited to, glacier change assessments (e.g. Brun and others, 2017), glaciological characteristics (e.g. Scherler and others, 2018), assessing glacier response to climate change (e.g. Sakai and Fujita, 2017), hydrology-related research (Huss and Hock, 2018) and modeling glacier changes (e.g. Raper and Braithwaite, 2006; Shannon and others, 2019). Increasingly, global and regional gridded datasets at different resolutions, such as the outputs of the Coupled Model Intercomparison Project phase 6 (CMIP6; Eyring and others, 2016), are emerging, and gridded data are critical for linking glaciers with the corresponding meteorological variables. In this study, we identify errors in the currently available global gridded glacier dataset and present a new global gridded glacier dataset based on the RGI 6.0, which uses an alternative gridding method to eliminate these errors.
## Data and methods
We obtained the RGI 6.0 shapefiles as input (original) data, as well as the \(0.5^{\circ}\times 0.5^{\circ}\) grid data for the subsequent comparative analysis, both of which are freely available at GLIMS ([https://www.glims.org/RGI/rgi60_dl.html](https://www.glims.org/RGI/rgi60_dl.html)). The steps to generate the gridded dataset are shown in Figure 1. First, we produced the global grid map at a given resolution, referenced to the WGS84 datum. Second, we split the glaciers using the boundaries of the cells. Third, we re-projected all polygons in each cell to the Mollweide projection, recalculated the area of each polygon, and finally summed them to obtain the total glacier area of the cell (Fig. 1). Compared to the previous method, which typically uses the center point of each glacier and attributes its area to the cell in which the center point is located (the so-called CP method), we eliminated the error caused by glaciers overlapping several gridcells, which is unavoidable in the CP method. Since the sources of glacier inventory outlines are remarkably diverse, we recommend using the method by Pfeffer and others (2014) to propagate uncertainty, as this eliminates the effect of the inventory source. The errors for each grid in the 0.5° dataset are provided in the Supplementary material.
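To make these steps concrete, the cell-splitting and re-projection could be sketched with GeoPandas roughly as follows. This is a minimal sketch rather than the code used to produce the dataset; the function name, the form of the `glaciers` input and the use of GeoPandas/Shapely are our own assumptions:

```python
import numpy as np
import geopandas as gpd
from shapely.geometry import box

def grid_glacier_area(glaciers, res=0.5):
    """Sum glacier area (km^2) per grid cell by splitting outlines at cell boundaries.

    glaciers: GeoDataFrame of RGI 6.0 outlines in WGS84 (EPSG:4326).
    """
    # Step 1: build the global grid at the given resolution (WGS84 datum)
    cells = [box(lon, lat, lon + res, lat + res)
             for lon in np.arange(-180.0, 180.0, res)
             for lat in np.arange(-90.0, 90.0, res)]
    grid = gpd.GeoDataFrame({"cell_id": range(len(cells))},
                            geometry=cells, crs="EPSG:4326")
    # Step 2: split the glacier polygons along the cell boundaries
    pieces = gpd.overlay(glaciers, grid, how="intersection")
    # Step 3: re-project to the equal-area Mollweide projection,
    # recalculate each polygon's area and sum per cell
    pieces["area_km2"] = pieces.to_crs("ESRI:54009").area / 1e6
    return pieces.groupby("cell_id")["area_km2"].sum()
```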
## Results and discussion

Our dataset produced different gridded areas for the High Mountain Asia, Southern Andes, New Zealand and Antarctic and Sub-Antarctic regions, where the RGI shapefile and gridded areas differ greatly from one another. Our dataset greatly reduced the errors existing in the RGI gridded glacier map (Table 2). The accuracy of our gridded dataset depends critically on the selection of map projection and the method for area calculation. Since there is no previous study which utilized glacier polygons split by cell boundaries for reference, we compared the area of each glacier provided by the RGI Consortium with the recalculated area, to assess whether the map projection and the calculation method we used are reasonable (Fig. S2). We found that the recalculated area of each glacier was \(\sim\)0.3% smaller than the RGI-provided area. However, 0.3% is considerably smaller than the uncertainty of the glacier extent in RGI (\(\sim\)5%, Pfeffer and others, 2014).
We demonstrate the application of our gridded dataset with two examples. First, we use two glacier inventories for the Alps (Paul and others, 2011, 2020) to demonstrate the utility for glacier area change assessments. It is always hard to perform a glacier-by-glacier area change assessment between the latest and the earlier inventories due to the inhomogeneous interpretation of glacier extents, but a cell-by-cell comparison can reveal the spatial variability of glacier area changes (Fig. 4). The uncertainty in area change for each grid can be assessed following the law of error propagation:
\[e_{\Delta}=\sqrt{e_{1}^{2}+e_{t}^{2}}\]
| ID | Region name | 1° area (%) | 0.5° area (%) | 0.25° area (%) | 0.1° area (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | Alaska | | -0.38 | -0.37 | -0.35 |
| 2 | Western Canada and USA | 0.07 | 0.07 | 0.01 | -0.06 |
| 3 | Arctic Canada North | -0.62 | -0.62 | -0.62 | -0.62 |
| 4 | Arctic Canada South | -0.51 | -0.51 | -0.51 | -0.51 |
| 5 | Greenland Periphery* | -0.54 | -0.54 | -0.54 | -0.54 |
| 6 | Iceland | | -0.42 | -0.42 | -0.42 |
| 7 | Svalbard | -0.62 | -0.62 | -0.62 | -0.62 |
| 8 | Scandinavia | -0.41 | -0.41 | -0.41 | -0.41 |
| 9 | Russian Arctic | -0.61 | -0.61 | -0.61 | -0.61 |
| 10 | North Asia | -0.22 | -0.22 | -0.22 | -0.22 |
| 11 | Central Europe | -0.02 | -0.02 | -0.02 | -0.02 |
| 12 | Caucasus and Middle East | -0.01 | -0.01 | -0.01 | -0.01 |
| 13 | Central Asia | 0.77 | 0.77 | -0.56 | 0.28 |
| 14 | South Asia West | 1.16 | 1.16 | 0.21 | -0.17 |
| 15 | South Asia East | -3.76 | -3.76 | 2.84 | 1.00 |
| 16 | Low Latitudes | 0.61 | 0.61 | 0.61 | 0.61 |
| 17 | Southern Andes | -0.07 | -0.07 | -0.07 | -0.07 |
| 18 | New Zealand | 0.01 | 0.01 | 0.01 | 0.01 |
| 19 | Antarctic and Sub-Antarctic | -0.51 | -0.51 | -0.51 | -0.51 |
| Total | | -0.39 | -0.39 | -0.39 | -0.39 |

*Glaciers strongly connected to the ice sheet are included.

Table 2: Area difference between the RGI 6.0 shapefile values and the new gridded dataset values for each region.
Figure 4: Cell-by-cell assessment of glacier area changes in the Alps.
where \\(e_{1}\\) means the grid error in the initial state and \\(e_{t}\\) means the grid error in the final state, both of which could be calculated using the method by Pfeffer and others (2014). Second, we use this gridded dataset to determine correlation (Pearson coefficients) between glacier variations and climate changes on global scale for the period 1981-2016 (Fig. 5). The annual mass balance is derived from the latest compilation dataset of direct and geodetic observations developed by Zemp and others (2019). By using the method of area weighted average (Li and others, 2019) which is dependent on the glacierized areas (in km\\({}^{2}\\)) in each cell, the annual climate variable series (including temperature and precipitation) for each first-order region can be derived from the ERA5 dataset with \\(0.5^{\\circ}\\times 0.5^{\\circ}\\) resolution (Hersbach and others, 2020). The results show that there is a higher level of correlation between mass balance and temperature than between mass balance and precipitation (Fig. 5).
We acknowledge that addressing the issue of gridded glacier maps in the RGI 6.0 is not technically difficult. However, it is an issue that must be overcome in order to provide more accurate datasets for glaciological research.
## Conclusions
There are significant differences in the total glacier area between the gridded data and the shapefile data in the RGI 6.0 at both global and regional scales. Based on the RGI 6.0, we developed a new global gridded glacier dataset which reduced the errors of the RGI gridded glacier map. Moreover, we demonstrated the application of this gridded dataset with two examples.
## Data availability

The statistics and the code can be downloaded from [https://github.com/rylanler/RGI-Gridded.git](https://github.com/rylanler/RGI-Gridded.git) and are linked on [https://www.glims.org/RGI/rgi60_dl.html](https://www.glims.org/RGI/rgi60_dl.html).
# Adversarial 3D Virtual Patches using Integrated Gradients
Chengzeng You
_Department of Computing_
_Imperial College London_
[email protected]
Zhongyuan Hau
_Department of Computing_
_Imperial College London_
[email protected]
Binbin Xu
_Robotics Institute_
_University of Toronto_
[email protected]
Soteris Demetriou
_Department of Computing_
_Imperial College London_
[email protected]
## I Introduction
Connected and Autonomous Vehicles (CAVs) leverage various sensing modalities to improve situation awareness. One of those modalities is near-infrared laser light which is leveraged by Light Detection and Ranging (LiDAR) sensors to provide high-precision 3D measurements. These measurements are stored in point clouds, as collections of 3D points. Several CAV manufacturers already leverage LiDARs and there is an array of 3D object detectors which can recognise vehicles, pedestrians and cyclists based on LiDAR measurements. However, prior works have demonstrated the feasibility of LiDAR spoofing attacks which can be controlled to both inject ghost objects [1, 2, 3] and hide real objects [4, 5, 6, 7]. These works have progressively improved the adversarial capability in both software and hardware, focusing on increasing the adversarial budget (the number of points that can be reliably spoofed) and the adversary's success rate against 3D object detectors.
Nonetheless, no prior study has focused on reducing the area over which the adversary needs to apply their spoofing capability. Prior works [4, 5, 8] considered the area inside a bounding box surrounding the target object, or even larger areas, as the region of interest. In this work, we are the first to explore whether it is possible to hide 3D objects from detectors by concentrating the attack on a sub-region of the bounding box. This comes with reduced attack complexity and increased stealthiness benefits for the adversary: it reduces the number of signals that need to be reliably spoofed for a successful attack, and it reduces the attack's footprint.
Inspired by prior works on adversarial patches in computer vision [9, 10, 11, 12, 13], we introduce the concept of 3D _virtual patches_ (VPs), a region in a point cloud on which an attack strategy can be applied. We then introduce VP-LiDAR, a methodology for analyzing and perturbing measurements in VPs in the digital domain with the goal of bypassing 3D object detection.
We apply VP-LiDAR in two settings: (a) with manually crafted VPs (MVPs) and (b) with critical VPs (CVPs) designed using a novel framework for identifying critical regions in point clouds. In the first setting, we design four MVPs based on common shapes covering different parts of the target object. Applying VP-LiDAR in the second setting is non-trivial as we first need to identify critical regions. Toward this, we design a novel method we call Saliency-LiDAR or _SALL_. _SALL_ computes point-level contributions to object detection using Integrated Gradients (IG), an explainability-aware approach [14]. _SALL_ can aggregate contributions at the voxel level and across several 3D scenes and objects into a universal saliency map. Based on _SALL_'s universal saliency map, we define three critical VPs (CVPs).
To evaluate VP-LiDAR, we conducted LiDAR relay attacks simulating the physics of LiDAR operations. Our attacks were applied on MVPs and CVPs and empirically evaluated on their ability to hide vehicle objects from popular object detectors. We found that VP-LiDAR with MVPs can achieve similar success rates with an effective object removal attack (ORA-Random) [4] but while attacking a significantly smaller (visually shown) region of interest. We also found that VP-LiDAR attacks with _SALL_-based CVPs are at least 15% more effective than MVP attacks and require focusing the LiDAR relay attack on a CVP area which scales better with the size of target objects (analytically shown) compared to prior work [4].
## II Background and Related Work
**LiDAR Spoofing Attacks.** LiDAR measurements can be spoofed by replaying LiDAR pulses to create fake points in the sensed environment. Such an attack is challenging for the LiDAR system to recognize as it does not require any physical contact with the LiDAR sensor or interference with the processing of sensor measurements. To perform realistic attacks, researchers have been improving the hardware and software of LiDAR spoofers [2, 5, 6, 15]. A common attack strategy is to capture LiDAR signals from the victim LiDAR, add a time delay, and fire fake laser beams back to the victim LiDAR. Fake points have been shown to be reliably injected to fool 3D object detectors into outputting erroneous predictions. As a result, real objects can be hidden while ghost objects can be injected. Real object hiding is regarded as a more dangerous type of attack than ghost object injection, as it is more likely to cause fatal collisions. Object hiding attacks can be achieved through synchronized methods [2, 3, 6, 7] and asynchronized methods such as relay attacks [1], saturating attacks [15] and high-frequency removal attacks [5]. In our work, we consider an adversary with the ability to mount relay attacks to hide real objects from 3D object detectors.
**Adversarial Patches.** Our approach of using virtual patches in point clouds to reduce the spoofing region of interest is inspired by prior works on adversarial patches in the 2D image domain. Previous studies have presented strong adversarial patches [9] for several downstream tasks such as person detection [10], face recognition [11, 12] and semantic segmentation [13]. There has been very little exploration of adversarial patches applied in the 3D domain. Chen et al. [16] leveraged the information of 3D adversaries and added perturbations on 2D planes managing to attack optical image sensors. Xiao et al. [17] generated adversarial meshes successfully misleading classifiers and 2D object detectors.
**Critical Points.** One of the main challenges we tackled in this work is identifying critical regions. It has been shown that critical points [18] contribute to the features of max-pooling layers and summarize the skeleton shapes of input objects [14]. Based on critical points, researchers further studied model robustness by perturbing or dropping critical point sets identified through monitoring the max-pooling layer or accumulating losses of gradients [19, 20, 21]. However, capturing the output of the max-pooling layer struggles to identify discrepancies between key points, and saliency maps based on raw gradients have been proven to be defective [22, 23]. To overcome these issues, Tan et al. [14] introduced Integrated Gradients (IG) [24], which generate saliency maps of inputs by accumulating gradients during propagation, to investigate the sensitivity of model robustness to critical point sets, successfully fooling target classifiers with very few point perturbations. However, that study identified critical points specific to object point clouds without further summarizing richer 3D scenes. As a result, for every instance of an object, the attacker needs to run a separate iterative optimization process. In our work, we improve on the Integrated Gradients (IG) [24] approach in two main ways. First, we adapt the proposed framework to the task of 3D object detection in autonomous driving. Secondly, we integrate IG into an end-to-end framework (_SALL_) which aggregates the saliency maps across several 3D scenes and objects to derive a _universal_ saliency map across all instances of an object type, which we can use to construct critical virtual patches (CVPs).
## III Threat Model
We perform all our attacks digitally, simulating an adversary (\(\mathcal{A}\)) which we assume has the ability to physically realize the attacks. We based our simulation assumptions on prior works whenever possible. In particular, we assume \(\mathcal{A}\) is equipped with a state-of-the-art LiDAR spoofer capable of spoofing LiDAR return signals [1, 2, 5, 7, 15, 25].
\\(\\mathcal{A}\\) can use her spoofing capability to displace a 3D point. The displacement can be achieved along the victim LiDAR's ray direction, such that the fake point can appear either further [4, 25] or nearer [15] than the genuine point relative to the victim vehicle, within a range of 4m [-2m, 2m] and at the granularity of 1m. \\(\\mathcal{A}\\) should also be able to perform the displacements on a number of points (e.g. 1-200) and reliably as the victim vehicle moves [2, 5, 25]. Similarly with Hau et al. [4], we assume \\(\\mathcal{A}\\) can predict the bounding boxes of the victim's 3D object detector but does not have knowledge of the internals of the victim's 3D object detector. To achieve this, \\(\\mathcal{A}\\) detects target objects and transforms their 3D coordinates according to its position's relativity to the victim LiDAR.
The goal of \\(\\mathcal{A}\\) is to leverage the above capabilities to lower the confidence level of 3D object detectors causing them to miss real objects.
## IV Virtual Patches and VP-LiDAR Methodology
We first define virtual 3D patches (VPs) and then introduce our framework (VP-LiDAR) for simulating LiDAR spoofing attacks using VPs.
**Virtual Patches.** A _Virtual 3D Patch_ or simply _VP_ is a subspace within a 3D object's point cloud on which \\(\\mathcal{A}\\) can apply her perturbations. More formally, a 3D scene is a point cloud \\(S\\in\\mathbb{R}^{n\\times d}\\), where \\(n\\) is the number of 3D points in the scene. In each scene, there can be a collection of bounding boxes, one for each detected object. A bounding box \\(B\\), is \\(B\\in\\mathbb{R}^{n_{b}\\times d}\\), where \\(n_{b}<n\\). Then, a virtual patch can be defined as a sub-region \\(V\\in\\mathbb{R}^{n_{v}\\times d}\\), where \\(n_{v}\\leq n_{b}<n\\). The goal of \\(\\mathcal{A}\\) is to come up with a perturbed \\(V^{\\prime}\\), \\(V^{\\prime}\\in\\mathbb{R}^{n_{v^{\\prime}}\\times d}\\) where \\(n_{v^{\\prime}}\\leq n_{v}\\) because some points might be displaced or shifted outside the VP area.
**VP-LiDAR.** VP-LiDAR is a 3D adversarial VP analysis framework that aims to facilitate experimentation with VP-based attack strategies and defenses. VP-LiDAR consists of five phases, taking in a raw LiDAR point cloud (\(S\)) of the scene, performing perturbation of target objects, and producing the adversarial point cloud (\(S^{\prime}\)) as its output.
_Phase 1: Extraction._ VP-LiDAR detects objects from \(S\). Then it separates \(S\) into background points \(G\) (\(G\in\mathbb{R}^{n_{g}\times d}\)) and a set of target point clouds \(T=\{T^{1},T^{2},\ldots,T^{m}\}\). There is exactly one target point cloud \(T^{i}\) for each of the \(m\) detected objects.
_Phase 2: 2D Indexing._ Each target point cloud \(T^{i}\) is further discretized. We use the approach by Lang et al. [26] to find the corresponding indices of each point in pillar format. 2D indexing is more efficient than voxelisation methods because it does not need to convert points to voxels. Also, the corresponding voxel size is customizable and can be set to near point level, where each voxel only contains a few points or even one point.
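A minimal sketch of this per-point indexing step is shown below; the range origins are illustrative KITTI-style assumptions, while the voxel size follows the PointPillars-style discretization used in our experiments:

```python
import numpy as np

def pillar_indices(points, x_min=0.0, y_min=-39.68, voxel=0.16):
    """Map each LiDAR point to the (ix, iy) index of the 2D pillar it falls in."""
    ix = np.floor((points[:, 0] - x_min) / voxel).astype(int)
    iy = np.floor((points[:, 1] - y_min) / voxel).astype(int)
    return np.stack([ix, iy], axis=1)
```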
_Phase 3: Virtual Patch Simulation._ Based on the indices, we can apply a 2D virtual patch \(V\). Virtual patches can be defined manually (see § V) or using our _SALL_ method (see § VI).
_Phase 4: Perturbation._ Different selection strategies can be applied to select points from \(V\) under an adversarial point budget. For example, VP-LiDAR supports a random selection strategy, similar to ORA-Random [4], which randomly selects points within a target bounding box. VP-LiDAR also supports selecting points according to their criticality - we can calculate such criticalities using our _SALL_ method (§ VI). Due to VP-LiDAR's modular architecture, other novel strategies can be easily incorporated.
To obey the physics of LiDAR, VP-LiDAR shifts points in accordance with the rays that the LiDAR points fall on. Each point in the cartesian coordinate system is first transformed to the spherical coordinates with the radius \\(\\mathcal{R}\\) and the firing angle relative to the LiDAR origin. Then a distance \\(R_{d}\\) is added to the radius \\((\\mathcal{R})\\). The shifted radius \\(\\mathcal{\\bar{R}}=\\mathcal{R}+R_{d}\\) along with the firing angle is then transformed back to the cartesian coordinate. The result is a perturbed virtual patch \\(V^{\\prime}\\) with \\(n_{v^{\\prime}}\\) perturbed points.
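The radius shift described above amounts to the following coordinate round-trip (a minimal sketch; intensity channels and sensor-specific offsets are omitted):

```python
import numpy as np

def shift_along_rays(points, r_d):
    """Shift points by r_d metres along their rays from the LiDAR origin.

    points: (N, 3) Cartesian LiDAR points; positive r_d moves a point
    further from the sensor, negative r_d moves it nearer.
    """
    r = np.linalg.norm(points, axis=1)            # radius R
    theta = np.arccos(points[:, 2] / r)           # polar (elevation) angle
    phi = np.arctan2(points[:, 1], points[:, 0])  # azimuth angle
    r_new = r + r_d                               # shifted radius R + R_d
    return np.stack([r_new * np.sin(theta) * np.cos(phi),
                     r_new * np.sin(theta) * np.sin(phi),
                     r_new * np.cos(theta)], axis=1)
```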
_Phase 5: Merge._ All \\(V^{\\prime}\\)s are then merged with \\(G\\) to output the final adversarial 3D LiDAR scene \\(S^{\\prime}=G\\bigoplus V^{\\prime}\\). \\(S^{\\prime}\\) is in the same format as the original LiDAR scene \\(S\\), and can be fed into any LiDAR-based detectors for evaluations.
## V VP-LiDAR with Manual Virtual Patches
To study the feasibility of using virtual patches to reduce the spoofing area of 3D objects, we first manually defined patches, and used VP-LiDAR to evaluate their effectiveness.
### _Manual Virtual Patches_
We designed four MVPs; a sketch of how their masks can be constructed follows the list below. All MVPs were defined based on the bottom surface (Rec) of the target object.
* _Edges._ This patch is defined as 4 edges of Rec. The thickness of each edge is 3 voxels.
* _Nearest-Corner._ This patch is defined as Rec's corner nearest to the LiDAR unit of the ego vehicle. The dimension of the patch is 8 voxels \\(\\times\\) 8 voxels.
* _Center._ This patch shares the same center with Rec but in a smaller size. We define the patch width as \\(3/4\\) of Rec's width and length as \\(3/4\\) of Rec's length.
* _X._ This patch contains all voxels around the diagonal lines of Rec. The maximum distance from the voxel to either diagonal line is set to 1.5 voxels.
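As referenced above, the four MVPs can be expressed as boolean masks over the voxel grid of Rec. The following is a minimal sketch; the grid dimensions and the assumption that the (0, 0) corner is nearest to the LiDAR are ours:

```python
import numpy as np

def mvp_mask(n_rows, n_cols, kind):
    """Boolean mask over the pillar grid of a box's bottom surface (Rec)."""
    m = np.zeros((n_rows, n_cols), dtype=bool)
    if kind == "edges":                           # four edges, 3 voxels thick
        m[:3, :] = m[-3:, :] = m[:, :3] = m[:, -3:] = True
    elif kind == "nearest_corner":                # 8 x 8 voxel corner patch;
        m[:8, :8] = True                          # assumes (0, 0) is the nearest corner
    elif kind == "center":                        # centred patch, 3/4 of each dimension
        r0, c0 = n_rows // 8, n_cols // 8
        m[r0:n_rows - r0, c0:n_cols - c0] = True
    elif kind == "x":                             # voxels within 1.5 voxels of a diagonal
        rr, cc = np.mgrid[0:n_rows, 0:n_cols].astype(float)
        R, C = n_rows - 1, n_cols - 1
        L = np.hypot(R, C)
        d1 = np.abs(C * rr - R * cc) / L          # distance to the main diagonal
        d2 = np.abs(C * rr + R * cc - R * C) / L  # distance to the anti-diagonal
        m = np.minimum(d1, d2) <= 1.5
    return m
```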
### _Experimental Setup._
For our dataset, we randomly selected 300 autonomous driving scenes from the KITTI dataset [27], and for our evaluation metric, we used the Attack Success Rate (ASR), defined as the ratio of the number of hidden objects to all targeted objects. We used ground-truth labels from KITTI as the object detection results. In practice, \(\mathcal{A}\) can choose any state-of-the-art 3D object detector to obtain detection results in the target scene. We focused on _Car_ objects in the front-near region, which refers to the region directly in front of the ego-vehicle up to a distance of 10m from the LiDAR unit. _Car_ objects are more dense than _Cyclist_ and _Pedestrian_ objects and are more challenging to attack. For VP-LiDAR's 2D Indexing, we set the corresponding voxel size, as per PointPillars [26], to \(0.16m\times 0.16m\). For the VP simulation, we used the 4 MVPs we defined above. For point shifting, we configured VP-LiDAR to select target points within an MVP using a random strategy, as in ORA-Random [4], with various point budgets from 1 to 400 (step size = 40) for each object. Lastly, after point shifting, all perturbed data was fed into a target model. For our evaluation, we chose PointPillars [26], which is used in the industry-grade AD system Baidu Apollo 6.0 [28].
### _Results_
**MVP simulations and spoofing area.** To better understand how well VP-LiDAR can simulate attacks on a VP, we use a visualization approach. We chose a _Car_ object as the target (Figures 1(a) and 1(f)). MVPs are applied on 3D point clouds as shown in Figures 1(b)-1(e).
metric that captures false negatives and therefore can give us an indication on how many objects are missed.
As shown in Figure 3, _X-Shifting_ performs similarly to _ORA-Random_, even though it attacks points within a much smaller area. For objects nearer than 15 meters, _X-Shifting_ can achieve even better performance (marginally) than ORA-Random for all budgets. For objects further away, which are sparser, a random strategy on the entire area might still be the better option, although attacks on those objects are less impactful. Overall, we can see that MVPs can be effective in attacking near-front objects with a fraction of the spoofing area compared to ORA-Random. The reason might be that some MVPs happen to contain certain regions that are critical to the object detector. In the following section, we propose a new approach to identify critical regions and help with the design of even smaller critical VPs (CVPs).
## VI Saliency-LiDAR and Critical Virtual Patches
### _Saliency-LiDAR Method_
To identify critical regions, we develop a method we call Saliency-LiDAR (_SALL_). _SALL_, inspired by Tan et al. [14], leverages the Integrated Gradients approach to generate saliency maps of inputs. _SALL_ adapts IG to the task of object detection in autonomous driving scenarios, and can aggregate saliency maps across instances of an object type within and across scenes to generate a universal saliency map. _SALL_'s overall architecture is shown in Figure 4 and below we explain each component.
**Preprocessing.**_SALL_ takes a raw 3D scene \\(S\\) as input. Before IG computation, it preprocesses the scene through an extraction module (\\(\\mathcal{E}\\)) which identifies regions of interest \\(R\\), one per target object. It then extracts target objects \\(T\\) and background points \\(G\\). Subsequently, the target objects \\(T\\) are fed into the IG component to compute point-level contributions.
**Integrated Gradient Computation.** For each IG step, points in a \(T^{i}\) are perturbed by a perturbation module (\(\mathcal{P}\)), which works similarly to VP-LiDAR's \(\mathcal{P}\) (§ V), and outputs \({T^{i}}^{\prime}\). All \({T^{i}}^{\prime}\) in \(T^{\prime}\) are then merged with the background points (\(G\)) to produce the perturbed 3D scene (\(S^{\prime}\)): \(S^{\prime}=T^{\prime}\bigoplus G\). \(S^{\prime}\) is used for the gradient computation. It passes through a
Fig. 1: VP-LiDAR Visualization. Top row: manual virtual patches. Blue voxels and red points are selected while green voxels and black points are not selected. Bottom row: red points denote the shifted points while grey points remain unchanged.
Fig. 3: Comparison of X MVP with ORA-Random.
Fig. 2: ASR of MVPs for different point budgets.
3D object detector (\(\mathcal{D}\)) which outputs a set of logits \(O^{i}\) for each target object \(i\). To focus on target objects instead of the whole LiDAR scene, Intersection over Union (IoU) values between the focus regions \(R\) and the predicted bounding boxes are first computed to identify the best predictions. Gradients of the best predictions are saved while other gradients are filtered out in the filter module (\(\mathcal{F}\)). Finally, a point-level contribution map \(C_{i}^{p}\) is generated per target object. Lastly, an integrator function integrates all \(C_{i}^{p}\) across all IG steps to produce a point-level saliency or contribution map \(C^{p}\) for objects in a single scene.
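A minimal sketch of the per-object IG loop is given below. The centroid baseline and the `score_fn` callable are our assumptions: `score_fn` stands in for the detector and the IoU-based filtering described above, returning the scalar score of the best-matching prediction:

```python
import torch

def integrated_gradients(points, score_fn, steps=25):
    """Per-point IG contributions for one target object.

    points: (N, 3) tensor of the object's points in LiDAR coordinates.
    """
    baseline = points.mean(dim=0, keepdim=True).expand_as(points)  # assumed baseline
    grad_sum = torch.zeros_like(points)
    for k in range(1, steps + 1):
        # interpolate between the baseline and the input point cloud
        interp = (baseline + (k / steps) * (points - baseline)).detach()
        interp.requires_grad_(True)
        grad, = torch.autograd.grad(score_fn(interp), interp)
        grad_sum += grad
    # Riemann approximation of the IG path integral, summed over x, y, z
    return ((points - baseline) * grad_sum / steps).sum(dim=1)
```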
**Adaptive Indexing (\\(\\tilde{\\mathcal{V}}\\)).** Since \\(R\\) regions have different dimensions and rotations in different LiDAR scenes, to generate a universal saliency map, point-level saliency maps need to be downsampled to pixel-level with the same size. To achieve that, each extracted target point cloud \\(T^{i}\\) is first converted from LiDAR coordinates to bounding box coordinates. Then, given the target size of the universal saliency map (2D-pixel image), \\(\\tilde{\\mathcal{V}}\\) adaptively computes the voxel size for each target object based on \\(R\\)'s dimension. After that, indices of each point in \\(T^{i}\\) can be computed. According to the point-level saliency map \\(C^{p}\\), the contribution of each voxel is summed up to generate a 2D-pixel matrix \\(C^{v}\\) in which each element indicates the contributions of each voxel.
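The downsampling step could be sketched as follows (a minimal sketch; the box-corner origin convention and variable names are our assumptions):

```python
import numpy as np

def adaptive_contribution_map(pts_box, contrib, length, width, shape=(64, 32)):
    """Downsample per-point contributions to a fixed-size pixel matrix C^v.

    pts_box: (N, 3) points in bounding-box coordinates (origin at a box corner);
    contrib: (N,) per-point IG contributions; length/width: box dimensions (m).
    """
    vx, vy = length / shape[0], width / shape[1]      # adaptive voxel size
    ix = np.clip((pts_box[:, 0] / vx).astype(int), 0, shape[0] - 1)
    iy = np.clip((pts_box[:, 1] / vy).astype(int), 0, shape[1] - 1)
    C_v = np.zeros(shape)
    np.add.at(C_v, (ix, iy), contrib)                 # sum contributions per voxel
    return C_v
```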
**Aggregation Across Scenes (\(\sum\)).** For each scene \(S\), we generate a contribution matrix \(C^{v}\). The \(C^{v}\)s are then aggregated across all \(k\) scenes by simple matrix additions to generate the universal saliency map \(C^{uv}\) for the target object type. Figure 5(a) shows the saliency map for _Car_ objects at 5-8m. Most positive pixels fall on edges, with some less critical and negative pixels falling in the center of the bounding box. This is expected for LiDAR objects, where most points appear on the surfaces.
### _Critical Virtual Patches_
**Constructing Adversarial Critical Virtual Patches.** With the guidance of the universal saliency map, \(\mathcal{A}\) can generate adversarial CVPs by perturbing points in the voxels with the top contribution values. As a proof of concept, we constructed three CVPs: _Top_30_ (Figure 5(b)), _Half-Edges_ (Figure 5(c)), and _Critical-X_. _Top_30_ uses only the voxels with the top 30% positive contributions of the universal saliency map. _Half-Edges_ is designed to capture the areas that include the most contributing voxels. _Critical-X_ is a more space-efficient version of the well-performing _X-Shifting_ MVP (Section V), which contains all voxels around the diagonal lines of the bounding box.
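As one plausible reading of the _Top_30_ construction, the following sketch derives a binary voxel mask from the universal saliency map by thresholding the positive contributions at their top 30%:

```python
import numpy as np

def top_n_voxel_mask(universal_map, n=30):
    """Binary CVP mask selecting the voxels with the top-n% positive
    contributions of the universal saliency map (the Top_30 patch)."""
    positive = universal_map[universal_map > 0]
    if positive.size == 0:
        return np.zeros(universal_map.shape, dtype=bool)
    threshold = np.percentile(positive, 100 - n)
    return universal_map >= threshold
```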
## VII Evaluation of Critical Virtual Patches
For our experiments, we selected 500 autonomous driving scenes from the KITTI dataset [27], with 400 scenes for generating saliency maps and 100 scenes for testing the effectiveness of CVPs. We used the ground-truth labels of _Car_ objects in front of the ego-vehicle between 5 m and 8 m as the region of interest. For the integrated gradient computation, we set the number of IG steps to 25. For the _adaptive indexing_ module, we set the output matrix to a fixed size of \(64\times 32\) voxels. The corresponding voxel size of each target object is around \(0.05\,m\times 0.05\,m\) on average. As the target model for 3D object detection, we chose PointPillars [26].
### _Spoofing Area Analysis_
Let the size of the target object and a voxel be determined by \((l_{tar},w_{tar},h_{tar})\) and \((l_{v},w_{v})\) respectively, where we use \(l\), \(w\) and \(h\) to indicate the length, width and height of an area. Let also \(\alpha\) and \(\beta\) be scale factors for \(l_{tar}\) and \(w_{tar}\) that control the patch thickness. For complex patches such as _Critical-X_, we calculate the spoofing area by subtracting 2 pairs of equilateral triangles from the whole area, as shown in Equation 2. Given these, we calculate the number of pillars needed for each patch as shown in Equations 1-4.
\\[Area_{whole}=\\frac{l_{tar}*w_{tar}}{l_{v}*w_{v}} \\tag{1}\\]
Fig. 4: Overview of Universal Saliency Map Generation for LiDAR Objects with _SALL_.
Fig. 5: Saliency Map Visualization. Red pixels denote positive contribution values while blue pixels denote negative contribution values.
\[Area_{critical\_x}=\left(1-\frac{(0.5-\alpha)(1-2\alpha)(1-\beta)}{(1-\alpha)}-\frac{(0.5-\beta)(1-2\beta)(1-\alpha)}{(1-\beta)}\right)*\frac{l_{tar}*w_{tar}}{l_{v}*w_{v}} \tag{2}\]

\[Area_{half\_edges}=\frac{l_{tar}*\beta*w_{tar}}{l_{v}*w_{v}} \tag{3}\]
\\[Area_{top\\_n}=n\\%*Area_{whole} \\tag{4}\\]
Assuming \\(h_{tar}=1.5m\\), \\(w_{tar}=S\\), \\(l_{tar}=2S\\), \\(l_{v}=w_{v}=0.05\\), and setting \\(\\alpha=0.1\\), and \\(\\beta=0.2\\), we plot the number of pillars needed for different sizes (\\(S\\)) of the target object (Figure 6). As a baseline for comparison, we calculate the entire size of the object (e.g. its bounding box) which we call _Whole Area_. _Whole Area_ corresponds to approaches like ORA-Random which target the entire object area. We observe that CVPs can drastically reduce the spoofing areas as the object size increases compared to the _Whole Area_ approach. If the average vehicle length is 5m, then CVPs can reduce the spoofing areas by at least 50%.
### _Effectiveness of CVPs_
**Setup.** We used VP-LiDAR to generate adversarial point clouds and attack the target model on the test dataset. For each CVP, we apply 2 different point selection strategies while perturbing: _Random Selection_ and _Critical First_. With _Random Selection_, given a point-perturbation budget, \\(\\mathcal{A}\\) randomly selects points among all point candidates. With _Critical First_, \\(\\mathcal{A}\\) selects the most critical points inside a CVP according to the _SALL_-generated universal saliency map of the target object.
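A sketch of the two selection strategies, assuming the CVP candidate indices and per-point saliency values are given as NumPy arrays:

```python
import numpy as np

def pick_points(candidate_idx, point_saliency, budget, strategy):
    """Choose which CVP points to perturb under a point budget.
    `candidate_idx`: indices of points inside the CVP voxels;
    `point_saliency`: per-point values from the universal saliency map."""
    budget = min(budget, len(candidate_idx))
    if strategy == "random":
        return np.random.choice(candidate_idx, size=budget, replace=False)
    # critical first: take the highest-saliency candidates
    order = np.argsort(point_saliency[candidate_idx])[::-1]
    return candidate_idx[order[:budget]]
```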
We compare our CVPs using the above strategies with ORA-Random [4]. We also design an optimized version of ORA-Random (which we call _Whole Area_) for which points can be shifted between \(-2\,m\) and \(2\,m\); ORA-Random uses shifting distances between \(0\,m\) and \(2\,m\), but prior work [15] has shown \(\mathcal{A}\) can relay LiDAR signals to inject points both nearer and further than the genuine location. Also, in contrast with ORA-Random, _Whole Area_ can be configured to use any of the point selection strategies above. For the rest of the CVPs, VP-LiDAR is also configured to shift points between \(-2\,m\) and \(2\,m\).
**Results.** In Table I, we show the ASRs of the different CVPs using different selection strategies under different point budgets. Compared with the _X-Shifting_ MVP (see Figure 2), the improved _Critical-X_ demonstrates over 15% ASR improvement under all point budgets. This indicates that CVPs are more effective than MVPs. Moreover, all CVPs and our optimized _Whole Area_ attack achieve significantly better performance than ORA-Random (15%-20% ASR improvements when shifting more than 100 points). At the same time, CVPs require only a significantly reduced spoofing area compared to _Whole Area_.
_Critical First_ point selection strategies did not perform much better than _Random Selection_. The reason might be that within the CVPs we already capture the most critical points. We plan to analyze this further in future work.
### _Transferability_
To evaluate whether our CVPs are effective against other detectors, we selected one point-based detector (PV-RCNN) and two voxel-based detectors (SECOND, a one-stage detector, and Voxel R-CNN, a two-stage detector), as shown in Table II. When detecting objects at 5-8 m, all 3 detectors achieved the same recall of 98.2%, with undetected objects tending to be the same or around the same location. Although our target objects are very dense (one object normally contains thousands of points), using CVPs under the budget of 200 points, the recall of the 3 detectors dropped from 98.2% to 61.1%-70.4% (a decrease between 28.3% and 38.7%). For objects at larger distances or smaller objects (such as pedestrians and cyclists) that are sparser and more vulnerable, this approach would likely cause greater drops in recall.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline
**Detectors** & **Clean** & **Half Edges** & **Top\\_30** \\\\ \\hline
**PV-RCNN** & 98.2\% & 66.7\% (\(\downarrow\)32.1\%) & 62.0\% (\(\downarrow\)36.9\%) \\ \hline
**SECOND** & 98.2\\% & 60.2\\% (\\(\\downarrow\\)38.7\\%) & 61.1\\% (\\(\\downarrow\\)37.8\\%) \\\\ \\hline
**Voxel R-CNN** & 98.2\\% & 63.9\\% (\\(\\downarrow\\)34.9\\%) & 70.4\\% (\\(\\downarrow\\)28.3\\%) \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: Recall of Different Detectors in Benign Scenarios (Clean) compared to when exposed to LiDAR spoofing attacks using CVPs (Half Edges, Top_30). \\(\\downarrow\\) indicates the percentage decrease compared to the benign scenarios.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|} \hline
\multirow{2}{*}{**Selection**} & \multirow{2}{*}{**Patch**} & \multicolumn{5}{c|}{**Point Budget**} \\ \cline{3-7}
 & & **200** & **150** & **100** & **50** & **10** \\ \hline
\multirow{5}{*}{**Random**} & ORA-Random [4] & 74.1\% & 68.5\% & 36.5\% & 42.6\% & 17.6\% \\ \cline{2-7}
 & Whole Area & 94.4\% & 88.0\% & 76.9\% & 56.5\% & 27.8\% \\ \cline{2-7}
 & Critical X & 91.7\% & 90.7\% & 75.0\% & 61.1\% & 25.9\% \\ \cline{2-7}
 & Half Edges & 92.6\% & 90.7\% & 84.3\% & 37.4\% & 24.1\% \\ \cline{2-7}
 & Top\_30 & 91.7\% & 88.0\% & 74.1\% & 65.7\% & 24.1\% \\ \hline
\multirow{4}{*}{**Critical First**} & Whole Area & 91.7\% & 86.1\% & 82.4\% & 50.9\% & 18.5\% \\ \cline{2-7}
 & Critical X & 90.7\% & 85.2\% & 77.8\% & 54.6\% & 21.3\% \\ \cline{2-7}
 & Half Edges & 91.7\% & 88.0\% & 78.7\% & 55.6\% & 25.9\% \\ \cline{2-7}
 & Top\_30 & 93.5\% & 8.0\% & 83.3\% & 35.6\% & 22.2\% \\ \hline
\end{tabular}
\end{table} TABLE I: Effectiveness (ASR) comparison of CVPs in hiding cars at 5-8 m for different point budgets.
Fig. 6: Spoofing Areas of VPs.
## VIII Discussion & Future Work
We defined 3D virtual patches (VPs) and proposed a modular analysis framework (VP-LiDAR) that leverages VPs to digitally test 3D object-hiding attacks. We first demonstrated the potential of VPs by defining manual VPs (MVPs) and showing that they can achieve attack success rates similar to strong object-hiding attacks while reducing the spoofing area. We then introduced _SALL_, a method that uses integrated gradients to generate universal saliency maps for target objects, and showed how such maps can be used to construct critical virtual patches (CVPs). Our evaluations showed that with a point budget of 200, one can leverage CVPs to attack state-of-the-art detectors with more than a 90% success rate. This is 15-20% better than ORA-Random [4] while requiring only a fraction of the spoofing area.
In future work, we plan to explore the effectiveness of CVP-based attacks against other object types such as pedestrians and cyclists. We expect our attacks to be more effective against these since they exhibit higher point sparsity [30]. We will also test the robustness of our attack method against point and object-level defenses such as CARLO [25], Shadow Catcher [30], 3D-TC2 [8] and ADoPT [31]. Lastly, we note that our spoofing capability simulations are based on prior works' findings on the physical capability of LiDAR spoofers. We leave it to future work to verify the feasibility of physically realizing VP-LiDAR attacks with MVPs and CVPs.
## References
* [1] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, \"Remote attacks on automated vehicles sensors: Experiments on camera and lidar,\" _Black Hat Europe_, vol. 11, p. 2015, 2015.
* [2] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao, \"Towards robust {LiDAR-based} perception in autonomous driving: General black-box adversarial sensor attack and countermeasures,\" in _29th USENIX Security Symposium (USENIX Security 20)_, 2020, pp. 877-894.
* [3] R. S. Hallyburton, Y. Liu, Y. Cao, Z. M. Mao, and M. Pajic, \"Security analysis of camera-lidar fusion against black-box attacks on autonomous vehicles,\" in _31st USENIX Security Symposium (USENIX Security 22)_, 2022, pp. 1903-1920.
* [4] Z. Hau, K. T. Co, S. Demetriou, and E. C. Lupu, "Object removal attacks on lidar-based 3d object detectors," in _Workshop on Automotive and Autonomous Vehicle Security (AutoSec)_, vol. 2021, 2021, p. 25.
* [5] T. Sato, Y. Hayakawa, R. Suzuki, Y. Shiiki, K. Yoshioka, and Q. A. Chen, \"Revisiting lidar spoofing attack capabilities against object detection: Improvements, measurement, and new attack,\" _arXiv preprint arXiv:2303.10555_, 2023.
* [6] Z. Jin, X. Ji, Y. Cheng, B. Yang, C. Yan, and W. Xu, \"Pla-lidar: Physical laser attacks against lidar-based 3d object detection in autonomous vehicle,\" in _2023 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2023, pp. 1822-1839.
* [7] Y. Cao, S. H. Bhupathirju, P. Naghavi, T. Sugawara, Z. M. Mao, and S. Rampazzi, \"You can't see: Physical removal attacks on {LiDAR-based} autonomous vehicles driving frameworks,\" in _32nd USENIX Security Symposium (USENIX Security 23)_, 2023, pp. 2993-3010.
* [8] C. You, Z. Hau, and S. Demetriou, \"Temporal consistency checks to detect lidar spoofing attacks on autonomous vehicle perception,\" in _Proceedings of the 1st Workshop on Security and Privacy for Mobile AI_, 2021, pp. 13-18.
* [9] T. B. Brown, D. Mane, A. Roy, M. Abadi, and J. Gilmer, \"Adversarial patch,\" _arXiv preprint arXiv:1712.09665_, 2017.
* [10] S. Thys, W. Van Ranst, and T. Goedeme, \"Fooling automated surveillance cameras: adversarial patches to attack person detection,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops_, 2019, pp. 0-0.
* [11] X. Yang, F. Wei, H. Zhang, and J. Zhu, \"Design and interpretation of universal adversarial patches in face detection,\" in _European Conference on Computer Vision_. Springer, 2020, pp. 174-191.
* [12] Z. Xiao, X. Gao, C. Fu, Y. Dong, W. Gao, X. Zhang, J. Zhou, and J. Zhu, \"Improving transferability of adversarial patches on face recognition with generative models,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 11 845-11 854.
* [13] F. Nesti, G. Rossolini, S. Nair, A. Biondi, and G. Buttazzo, \"Evaluating the robustness of semantic segmentation for autonomous driving against real-world adversarial patch attacks,\" in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2022, pp. 2280-2289.
* [14] H. Tan and H. Kotthaus, \"Explainability-aware one point attack for point cloud neural networks,\" in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2023, pp. 4581-4590.
* [15] H. Shin, D. Kim, Y. Kwon, and Y. Kim, \"Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications,\" in _International Conference on Cryptographic Hardware and Embedded Systems_. Springer, 2017, pp. 445-467.
* [16] C. Chen and T. Huang, \"Camdar-adv: Generating adversarial patches on 3d object,\" _International Journal of Intelligent Systems_, vol. 36, no. 3, pp. 1441-1453, 2021.
* [17] C. Xiao, D. Yang, B. Li, J. Deng, and M. Liu, \"Meshadv: Adversarial meshes for visual recognition,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 6898-6907.
* [18] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, \"Pointnet: Deep learning on point sets for 3d classification and segmentation,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 652-660.
* [19] J. Kim, B.-S. Hua, T. Nguyen, and S.-K. Yeung, \"Minimal adversarial examples for deep learning on 3d point clouds,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 7797-7806.
* [20] J. Yang, Q. Zhang, R. Fang, B. Ni, J. Liu, and Q. Tian, \"Adversarial attack and defense on point sets,\" _arXiv preprint arXiv:1902.10899_, 2019.
* [21] T. Zheng, C. Chen, J. Yuan, B. Li, and K. Ren, \"Pointcloud saliency maps,\" in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 1598-1606.
* [22] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim, \"Sanity checks for saliency maps,\" _Advances in neural information processing systems_, vol. 31, 2018.
* [23] M. Sundararajan, A. Taly, and Q. Yan, \"Gradients of counterfactuals,\" _arXiv preprint arXiv:1611.02639_, 2016.
* [24] ----, "Axiomatic attribution for deep networks," in _International conference on machine learning_. PMLR, 2017, pp. 3319-3328.
* [25] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao, "Adversarial sensor attack on lidar-based perception in autonomous driving," in _Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security_, 2019, pp. 2267-2281.
* [26] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, \"Pointpillars: Fast encoders for object detection from point clouds,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2019, pp. 12 697-12 705.
* [27] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \"Vision meets robotics: The kitti dataset,\" _The International Journal of Robotics Research_, vol. 32, no. 11, pp. 1231-1237, 2013.
* [28] "Baidu apollo," http://apollo.auto, 2020.
* [29] S. Shi, X. Wang, and H. Li, \"Pointrcnn: 3d object proposal generation and detection from point cloud,\" in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2019, pp. 770-779.
* [30] Z. Hau, S. Demetriou, L. Munoz-Gonzalez, and E. C. Lupu, "Shadow-Catcher: Looking into shadows to detect ghost objects in autonomous vehicle 3d sensing," in _Computer Security-ESORICS 2021: 26th European Symposium on Research in Computer Security, Darmstadt, Germany, October 4-8, 2021, Proceedings, Part I 26_. Springer, 2021, pp. 691-711.
* [31] M. Cho, Y. Cao, Z. Zhou, and Z. M. Mao, "Adopt: Lidar spoofing attack detection based on point-level temporal consistency," 2023.
# SemanticSpray++: A Multimodal Dataset for Autonomous Driving in Wet Surface Conditions
Aldi Piroli\\({}^{1}\\), Vinzenz Dallabetta\\({}^{2}\\), Johannes Kopp\\({}^{1}\\), Marc Walessa\\({}^{2}\\),
Daniel Meissner\\({}^{2}\\), and Klaus Dietmayer\\({}^{1}\\)
\({}^{1}\) Institute of Measurement, Control, and Microtechnology, Ulm University, Germany {firstname.lastname}@uni-ulm.de \({}^{2}\) BMW AG, Petuelring 130, 80809 Munich, Germany {vinzenz.dallabetta, marc.walessa}@bmw.de and [email protected]
**Abstract:** Autonomous vehicles rely on camera, LiDAR, and radar sensors to navigate the environment. Adverse weather conditions like snow, rain, and fog are known to be problematic for both camera and LiDAR-based perception systems. Currently, it is difficult to evaluate the performance of these methods due to the lack of publicly available datasets containing multimodal labeled data. To address this limitation, we propose the SemanticSpray++ dataset, which provides labels for camera, LiDAR, and radar data of highway-like scenarios in wet surface conditions. In particular, we provide 2D bounding boxes for the camera image, 3D bounding boxes for the LiDAR point cloud, and semantic labels for the radar targets. By labeling all three sensor modalities, the SemanticSpray++ dataset offers a comprehensive test bed for analyzing the performance of different perception methods when vehicles travel on wet surface conditions. Together with comprehensive label statistics, we also evaluate multiple baseline methods across different tasks and analyze their performances. The dataset will be available at https://semantic-spray-dataset.github.io
## I Introduction
The pursuit of achieving autonomous driving has accelerated research across various disciplines. Notably, advancements in computer vision applications for diverse sensor modalities such as camera, LiDAR, and radar have significantly benefited from this endeavor. Consequently, there has been an unparalleled advancement in tasks like object detection and semantic segmentation. One of the main factors for innovations in these fields has been the advancements in deep learning methods. These advances have been possible mainly as a result of increases in computing power and data availability.
Despite the many improvements, the task of autonomous driving is not yet considered solved. One of the many reasons is that neural networks perform unexpectedly when tested in a different domain than the one used during training [1, 2, 3, 4, 5]. For example, modern object detectors tend to detect unknown objects as one of the training classes, often with high confidence scores (e.g., an animal is classified as a pedestrian) [1]. A more mundane but far more common example is the performance degradation of camera and LiDAR-based sensor systems in adverse weather conditions such as rain, snow, and fog [6, 7, 8, 9]. For example, in foggy and rainy conditions, the camera's view is greatly reduced, resulting in fewer objects being detected. In LiDAR sensors, the water particles that make up rain, spray, and snow can cause the measurement signal to be scattered, resulting in missed point detections. In addition, these same water particles can cause partial or total reflection of the signal, resulting in additional unwanted noise in the measurements. These effects can cause detectors that perform well in good weather conditions to misdetect objects and introduce false positive detections in the perceived environment. Since autonomous vehicles rely heavily on these sensors, such unexpected behavior can have very serious consequences and, in extreme cases, pose a threat to passengers and other road users.
Therefore, it is important to test perception methods in different weather conditions thoroughly. However, few datasets are currently available for testing such systems, and even fewer provide labeled data for all common sensor modalities (i.e., camera, LiDAR, and radar). For example, the Waymo
Fig. 1: The proposed SemanticSpray++ dataset offers multimodal labels across camera, LiDAR, and radar sensors for testing the effect of spray on perception systems. **Top:** shows the camera image with overlayed 2D ground truth bounding box (in green) of the vehicle in front. **Bottom-left:** shows the captured LiDAR scan, where the 3D ground truth bounding box (in green) represents the leading vehicle. Additionally, each point has an associated semantic label, where the colors represent \\(\\bullet\\)_background_\\(\\bullet\\)_foreground_\\(\\bullet\\)_noise_. **Bottom-right:** shows the radar target represented by the Doppler velocity vector (green arrow). We also overlay the LiDAR scan for visualization purposes in gray.
Open dataset [10] contains a large number of scenes in rainy conditions and provides labels for both camera and LiDAR sensors. However, no radar data is available. In addition, general-purpose datasets like the Waymo Open or nuScenes datasets [11] lack the systematic testing of a specific weather condition and all of its many variations. For example, on wet surfaces, the resulting spray effect is highly dependent on driving speed and vehicle type [12].
To address some of the problems mentioned above, we propose the SemanticSpray++ dataset, which has labels for vehicles traveling in wet surface conditions at different speeds. We provide labels for camera, LiDAR, and radar sensors for object detection as well as for semantic segmentation tasks. Our work is based on the RoadSpray dataset [13], which contains the raw and unlabeled recordings, and in our previous publication, where we published the SemanticSpray dataset [8] that provides semantic labels for the LiDAR point clouds. In this paper, we extend our previous work by additionally labeling 2D bounding boxes for the camera image, 3D bounding boxes for the LiDAR point clouds, and semantic labels for the radar targets. An example of the different annotated modalities is shown in Fig. 1. As the data extensively covers different speeds, vehicles, and amounts of surface water, it provides a unique test bed where 2D and 3D object detectors and semantic segmentation methods can be tested to understand their limitations in this particular weather effect. In addition, we test different baseline perception methods like 2D and 3D object detectors and 3D semantic segmentation networks and analyze the effect that spray has on their performance.
In summary, our main contributions are:
* We extend the SemanticSpray dataset to include multimodal labels for vehicles traveling in wet surface conditions.
* We provide 2D bounding box labels for the camera images, 3D bounding boxes for the LiDAR point clouds, and semantic labels for the radar targets.
* We provide label statistics at both the object level and the point-wise semantic level.
* We test popular perception methods across different tasks and analyze how their performance is affected by spray.
## II Related Work
### _Datasets for Autonomous Driving_
The recent advances in the methods for autonomous driving have been made possible in part by the influx of large and diverse datasets. The KITTI dataset [14] pioneered this field by proposing annotated labels for both LiDAR and camera images, allowing 2D and 3D object detectors to be tested. The SemanticKITTI dataset [15] provides additional LiDAR point-wise semantic labels, allowing training and testing of semantic segmentation networks. The nuScenes dataset [11] is a popular dataset that provides labels for many tasks, among which are multi-camera object detection, semantic segmentation, and 3D object detection. It is recorded in urban scenarios with different weather conditions in North America and Southeast Asia. The Waymo Open dataset [10] is a large-scale dataset that provides annotated camera and LiDAR point clouds in both urban and extra-urban scenarios in sunny and rainy conditions. Additional datasets such as Argoverse [16] and ZOD [17] also provide a large and diverse set of annotated frames for autonomous driving applications. Furthermore, there are many datasets which only provide single modalities (e.g., camera only) data annotations like Cityscapes [18] and BDD100K [19].
### _Adverse Weather Datasets for Autonomous Driving_
Recently, a few datasets have been proposed for autonomous driving applications where the focus is on adverse weather conditions [7]. The Seeing Through Fog dataset (STF) [20] provides annotated open-world recordings while driving in foggy conditions for both camera and LiDAR point clouds. SemanticSTF [21] has recently extended the STF dataset to provide semantic labels. The DENSE dataset [22] is recorded in a weather chamber where artificial fog, snow, and rain are measured with a LiDAR sensor with a simulated urban scenario in the background. The ADUULM dataset [23] provides semantic labels for camera and LiDAR point clouds in diverse weather conditions. The WADS dataset [24] focuses on snowy conditions, providing semantic segmentation labels for the 3D LiDAR point clouds. The CADC dataset [25] instead contains 3D bounding boxes for LiDAR point clouds in snowy conditions. The RADIATE dataset [26] also focuses on adverse weather and is one of the few datasets that provides object-level annotations for the radar sensor. The RoadSpray dataset [13] provides an extensive list of recordings in wet surface conditions. It contains scenes in a highway-like environment at different speeds, with camera, LiDAR, and radar measurements. However, the dataset only contains raw recordings without any annotations for any of the sensor modalities. The SemanticSpray dataset [8], which is based on a subset of scenes of the RoadSpray dataset, provides semantic segmentation labels for the LiDAR scenes, differentiating between the foreground objects in the scenes, background points, and noise points like spray and other adverse weather artifacts. The proposed SemanticSpray++ dataset is based on the RoadSpray and SemanticSpray datasets and aims to extend the label annotations to different sensor modalities and formats.
## III SemanticSpray++ Dataset
This section introduces the SemanticSpray++ dataset and gives an overview of the scenarios, the recording setup, the data annotation, the label format, and label statistics.
### _Scenario Setup_
The SemanticSpray++ dataset provides LiDAR, camera, and radar labels for a subsection of the scenes of the RoadSpray dataset [13]. The RoadSpray dataset itself provides unlabeled data for vehicles traveling on wet surfaces at different speeds in a highway-like scenario. In the relevant experiments, the ego vehicle follows a leading vehicle at a fixed distance while traveling at different speeds. The distances between the ego and the leading vehicles are between \(20\,\mathrm{m}\) and \(30\,\mathrm{m}\), whereas the traveling speeds are in the range \(50\)-\(130\,\mathrm{km}/\mathrm{h}\), with \(10\,\mathrm{km}/\mathrm{h}\) increments. There are two types of leading vehicles: a small-size car and a large van. The experiments are conducted on an empty airstrip to recreate a highway-like scenario. The ego vehicle is equipped with the following sensors:
* High-resolution LiDAR (top-mounted),
* Long-range radar (front-mounted),
* Camera sensor (front-mounted).
For more information on the raw data recording and a more detailed sensor description, we refer the reader to the original dataset publication [13].
### _Data Format and Annotation_
As the RoadSpray dataset provides raw unlabeled data in a rosbag format, we extract the camera, LiDAR, and radar data. Because the different sensors record at different frequencies, we use the LiDAR sensor as the synchronization signal in the extraction process. The LiDAR point clouds are saved in binary files with each point having the features \((x,y,z,\mathrm{intensity},\mathrm{ring})\), where \((x,y,z)\) is the 3D position of the point, intensity is a value ranging from \(0\) to \(255\) which quantifies the calibrated intensity of the point, and ring indicates which of the LiDAR layers the point originated from. The radar sensor data is also saved in a binary file, where each point has the features \((x,y,v_{x},v_{y})\), where \((x,y)\) is the 2D Cartesian position of the point and \((v_{x},v_{y})\) are the components of the Doppler velocity vector. The camera sensor data is saved as an RGB .jpg image file of size \(2048\times 1088\) pixels.
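Under this description, the binary files can presumably be parsed as flat arrays; the float32 layout and file names below are assumptions to be verified against the dataset toolkit:

```python
import numpy as np

# Field layouts follow the description above; little-endian float32 storage
# and the example file names are assumptions.
lidar = np.fromfile("lidar/000000.bin", dtype=np.float32).reshape(-1, 5)
x, y, z, intensity, ring = lidar.T          # (x, y, z, intensity, ring)

radar = np.fromfile("radar/000000.bin", dtype=np.float32).reshape(-1, 4)
rx, ry, vx, vy = radar.T                    # (x, y, v_x, v_y)
```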
The SemanticSpray dataset [8] provides semantic labels for the LiDAR point clouds, assigning to each point one of the three possible labels: _background_ (vegetation, building, road, ), _foreground_ (leading vehicle), and _noise_ (spray and other weather artifacts). The labels are provided in a binary file with label mapping \(\{0:\textit{background},\ 1:\textit{foreground},\ 2:\textit{noise}\}\).
With the SemanticSpray++ dataset, we extend the available labels to 2D bounding boxes for the camera image, 3D bounding boxes for the LiDAR point cloud, and semantic labels for the radar targets. We select a subset of 36 scenes from the SemanticSpray dataset, choosing scenes with different traveling speeds and vehicle distances. In Fig. 1 and Fig. 2, we show examples of all annotation types, and in Fig. 3, an overview of the dataset statistics.
**LiDAR 3D Bounding Box Annotation.** We annotate the 3D LiDAR point clouds using bounding boxes with the format \([x,y,z,w,h,l,\theta]\), where \((x,y,z)\) is the center of the bounding box, \((w,h,l)\) are its width, height and length, and \(\theta\) is the orientation around the \(z\)-axis. Since there are two different types of leading vehicles, we label them as separate classes, namely _Car_ and _Van_. We store the labels in .json files, which are easy to parse and still human-readable.
**Camera 2D Bounding Box Annotation.** For the camera data, we annotate each image by assigning a bounding box to each object in the scene. We use the format [_top-left_, _top-right_, _bottom-left_, _bottom-right_], where for each point, we give the \\((x,y)\\) coordinates in the camera image. For each box, we use the same class categories _Car_ and _Van_ as for the LiDAR data. In the recorded data, there are many instances where the leading vehicle is totally occluded or only partially visible in the camera image. This is mainly due to windshield wipers blocking the field of view or water particles caused by the vehicle in front blocking or blurring the camera view. In addition, there are many cases where the camera image is locally overexposed due to sunlight reflecting directly onto the camera sensor or off the wet surface. To address this issue during the labeling process, we interpolate the occluded bounding boxes between two visible camera frames. We report some examples of the effects and the interpolation process in Fig. 2.
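The interpolation scheme is not fixed above; a simple linear corner interpolation between the two enclosing visible frames could look as follows (a sketch, with hypothetical argument names):

```python
import numpy as np

def interpolate_boxes(visible_a, visible_b, num_occluded):
    """Fill occluded frames by interpolating the four box corners between
    the two enclosing visible frames. Boxes are (4, 2) arrays holding
    [top-left, top-right, bottom-left, bottom-right] pixel coordinates."""
    a, b = np.asarray(visible_a, float), np.asarray(visible_b, float)
    alphas = np.linspace(0.0, 1.0, num_occluded + 2)[1:-1]
    return [(1 - t) * a + t * b for t in alphas]
```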
**Semantic Labels of Radar Targets.** Since the radar and LiDAR point clouds are calibrated, we use a semi
Fig. 3: Statistics of the proposed dataset. **Top-left**: shows the distributions of the LiDAR-based semantic labels among different velocities. **Top-right**: shows the distributions of the radar-based semantic labels among different velocities. **Bottom-left**: shows the number of 2D and 3D object box annotations in the camera and LiDAR point clouds. The number of boxes matches among the different modalities as both sensors always capture the leading vehicle. **Bottom-right**: shows the number of vehicle points in the 3D LiDAR bounding boxes at different speeds.
Fig. 2: Overview of some of the scenes present in the proposed dataset. **Top row**: show the occlusion effect caused by the windshield wipers. **Middle row**: shows the blurriness effect caused by the spray particles generated by the leading vehicle. **Bottom row**: shows how sunlight directly reflecting off the camera sensor or on the wet surface leads to locally overexposed images, which block the leading vehicle from the field of view. We show with green boxes the provided 2D box annotations.
automatic approach for the semantic labeling of the radar targets. First, we project the radar points in the LiDAR point cloud coordinate frame. Then, using the 3D bounding boxes described in the previous section, we check which of the radar targets are contained within the LiDAR-based boxes and automatically label them as either _Car_ or _Van_ (depending on the leading vehicle type) or as _Background_ if they do not belong to any of the vehicles in the scene. After the automatic labeling, we manually check each radar point cloud and fix possible incorrect labels.
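The automatic part of this labeling reduces to a bird's-eye-view point-in-rotated-box test; the sketch below assumes the box length lies along the heading \(\theta\) and uses an illustrative class encoding:

```python
import numpy as np

def auto_label_radar(radar_xy, box, vehicle_class):
    """Label radar targets that fall inside a LiDAR 3D box (top-down view).
    `box` = (x, y, z, w, h, l, theta); the targets are assumed to be
    projected into the LiDAR frame beforehand."""
    cx, cy, _, w, _, l, theta = box
    dx, dy = radar_xy[:, 0] - cx, radar_xy[:, 1] - cy
    # rotate the targets into the box-aligned frame
    u = np.cos(theta) * dx + np.sin(theta) * dy
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    inside = (np.abs(u) <= l / 2) & (np.abs(v) <= w / 2)
    labels = np.zeros(len(radar_xy), dtype=np.uint8)  # 0: Background
    labels[inside] = vehicle_class                    # e.g. 1: Car, 2: Van
    return labels  # manually verified and corrected afterwards
```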
### _Dataset Toolkit_
Together with the dataset, we provide a toolkit containing useful data processing and visualization scripts. Among these, we provide a PyTorch data loader for the 3D object detection framework OpenPCDet [27], which allows for easy testing of different object detectors, and one for the SPVCNN framework [28], which allows training and testing multiple semantic segmentation networks. Additionally, we provide scripts to convert the labels into different formats (e.g., COCO, YOLO, OpenLABEL [29]).
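Independently of the official toolkit, a minimal PyTorch loader for the LiDAR scans and semantic labels might look as follows; the directory layout, file suffixes, and dtypes are assumptions:

```python
import glob
import numpy as np
from torch.utils.data import Dataset

class SemanticSprayScans(Dataset):
    """Minimal loader sketch; the official toolkit ships complete loaders
    for OpenPCDet and SPVCNN."""
    def __init__(self, root):
        self.scans = sorted(glob.glob(f"{root}/lidar/*.bin"))

    def __len__(self):
        return len(self.scans)

    def __getitem__(self, idx):
        path = self.scans[idx]
        points = np.fromfile(path, dtype=np.float32).reshape(-1, 5)
        labels = np.fromfile(path.replace("lidar", "labels")
                                 .replace(".bin", ".label"), dtype=np.uint32)
        return points, labels
```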
## IV Experiments
In this section, we report the evaluation of baseline methods among different tasks. In particular, we aim to test how the performance of 3D LiDAR object detectors, 2D camera-based detectors, and 3D LiDAR semantic segmentation networks are affected when evaluating their performance in the adverse weather conditions present in SemanticSpray++.
### _Experiment Setup_
**3D LiDAR Object Detector.** As spray points heavily impact LiDAR sensors, we test the performance of three popular object detectors (PointPillars [30], SECOND [31], and CenterPoint [32] with a pillar backbone) trained on the nuScenes dataset [11], using the respective OpenPCDet [27] implementations. We follow the evaluation setup presented in [9], where we first test the trained models directly on the SemanticSpray++ dataset. In addition, we fine-tune the detectors on a small subset of scenes from SemanticSpray++ where no spray points are present. This allows us to reduce the domain gap caused by factors such as sensor placement and better understand the impact of spray on the detectors [9]. In contrast to the evaluation reported in [9], we use the ground truth data provided in SemanticSpray++ instead of pseudo-labels. Moreover, we include scenes with both the _Car_ and _Van_ leading vehicles, instead of only using the _Car_ class. For a detailed description of the fine-tuning process, we refer the reader to [9]. As evaluation metrics for 3D object detection, we use the Average Precision (AP) metric with Intersection over Union (IoU) at 0.5 and the class-wise mean AP (mAP). We compute it at different ranges, namely 0-25 m and \(>\)25 m. As the nuScenes dataset provides multiple vehicle categories, we use the output mapping {_Truck, Construction Vehicle, Bus, Trailer_} \(\rightarrow\)_Van_ for the large-vehicle detections.
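The class remapping and range-based evaluation split can be expressed compactly; the detection dictionary layout below is an assumption:

```python
LARGE_VEHICLES = {"truck", "construction_vehicle", "bus", "trailer"}

def remap_and_bin(detections, near_limit=25.0):
    """Map large-vehicle nuScenes classes to 'Van' and split detections by
    range for the 0-25 m and >25 m AP evaluation."""
    near, far = [], []
    for det in detections:  # det: {"name": str, "box": [x, y, z, w, h, l, yaw]}
        name = "Van" if det["name"].lower() in LARGE_VEHICLES else det["name"]
        rng = (det["box"][0] ** 2 + det["box"][1] ** 2) ** 0.5
        (near if rng <= near_limit else far).append({**det, "name": name})
    return near, far
```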
**2D Object Detection.** For testing the performance of 2D camera-based detections, we use the popular YOLO [33] object detector, adapting the implementation provided by [34]. We use the YOLOv8m (mid-sized) model trained on the COCO dataset [35]. Additionally, we train the same model using the default configurations on the Argoverse dataset [16], which contains annotated images from the perspective of the ego-vehicle. As mentioned in the previous section, camera images are affected in different ways by wet surface conditions. To overcome some of these effects (i.e., partial or total occlusion, blur, and overexposure), we test the performance of the YOLOv8m models with two different object trackers (ByteTrack [36] and BoT-SORT [37]), which allow the use of temporal information when detecting objects, even in the presence of occlusion. As evaluation metrics, we use the standard AP at 0.5 IoU.
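With the Ultralytics framework [34], switching between plain detection and tracker-augmented detection is a small configuration change; exact argument names may vary between library versions:

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # mid-size model, COCO weights
# plain per-frame detection:
detections = model.predict("camera_frames/", conf=0.25)
# detection with temporal association across the sequence (BoT-SORT):
tracked_botsort = model.track("camera_frames/", tracker="botsort.yaml")
# switching to ByteTrack only changes the tracker configuration:
tracked_bytetrack = model.track("camera_frames/", tracker="bytetrack.yaml")
```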
**3D LiDAR Semantic Segmentation.** We additionally test how the performance of semantic segmentation networks is affected when testing on scenes with spray. For this purpose, we use SPVCNN [28] as the base network and train on both the nuScenes [11] and SemanticKITTI dataset [15] using the official implementations. Since the classes of the training set do not all match the classes of the SemanticSpray++ dataset, a direct quantitative comparison of performance is not possible. Instead, we report the confusion matrix for each method, which provides insight into how the _noise_ points tend to be misclassified.
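Since the label sets differ, the evaluation reduces to a non-square, row-normalized confusion matrix; a sketch with an illustrative number of prediction classes:

```python
import numpy as np

def cross_dataset_confusion(gt, pred, n_gt=3, n_pred=16):
    """Row-normalized confusion matrix between SemanticSpray++ ground-truth
    labels (background/foreground/noise) and the training-set class ids;
    non-square because the label sets differ (cf. Fig. 5)."""
    cm = np.zeros((n_gt, n_pred))
    np.add.at(cm, (gt, pred), 1.0)
    row_sums = cm.sum(axis=1, keepdims=True)
    return np.divide(cm, row_sums, out=np.zeros_like(cm), where=row_sums > 0)
```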
### _Experiment Results_
**3D LiDAR Object Detector.** We begin our evaluation by testing the performance of PointPillars, SECOND, and CenterPoint trained only on the nuScenes dataset and report the results in Table I. We can observe that all detectors perform better when detecting the _Car_ type. Additionally, we see that in most cases, the performance between 0-25 m is higher than for \(>\)25 m. This is expected, as the density of LiDAR points diminishes with the distance from the sensor. Moreover, the total number of returned points is also reduced due to the scattering and partial/total occlusion effects. This can also be seen in Fig. 3, where the number of points inside the bounding boxes decreases as the driving speed increases due to more spray. When analyzing the results for the fine-tuned detectors, we see that all of the models benefit from fine-tuning, greatly increasing the performance for both the _Car_ and _Van_ classes. When we compare the qualitative results in Fig. 4, we notice that both fine-tuned and non-fine-tuned detectors are affected by spray points, which cause false-positive detections. However, we observe that for the non-fine-tuned detectors, additional factors like the surrounding environment also cause false positive detections.
**2D Camera Object Detector.** In Table II, we report the results of the camera-based 2D object detector and the additional object tracking post-processing. We can see that the performance of YOLOv8m trained on COCO and on Argoverse is similar for both the _Car_ and _Van_ classes. We can also observe that object tracking substantially improves the performance of both models. For example, for YOLOv8m trained on Argoverse combined with BoT-SORT, the mAP improves by \(+5.04\) percentage points. Looking at the qualitative results of Fig. 4, we see that the performance of the object detector without any associated tracking is indeed affected by the occlusion of the windshield wipers, the blurriness of the water spray particles, and the locally overexposed images. The same figure shows that these problems are less pronounced when an object tracker is used as a post-processing step.
**3D LiDAR Semantic Segmentation.** As mentioned before, we provide the confusion matrices for SPVCNN trained on different semantic segmentation datasets in Fig. 5. When observing SPVCNN trained on nuScenes, we see that the _background_ class is mainly classified by the network as _man-made_, _vegetation_ and _drivable-surface_. Similar predictions can be observed when SPVCNN is trained on the SemanticKITTI dataset. The foreground class, which consists of points belonging to the two different leading vehicles, is for the most part correctly associated with the vehicle classes of the two datasets. The noise class, which contains spray points and other weather artifacts, is instead associated with different semantic classes. For example, when the model is trained on nuScenes, the _man-made_ and _truck_ classes are the two most associated classes. For SPVCNN trained on SemanticKITTI, it is instead _vegetation_. These results highlight the overconfidence problem seen in modern neural networks when faced with unknown inputs [1, 3, 5, 4] and show the importance of counterbalancing methods like out-of-distribution detection and open-world classification.
## V Conclusion
In this paper, we present the SemanticSpray++ dataset, which extends the SemanticSpray dataset [8] with object labels for the camera and LiDAR data and semantic labels for the radar targets. We provide details on the annotation process and the challenges associated with it. Afterward, we
Fig. 4: Qualitative results for 2D and 3D object detectors tested on SemanticSpray++. **Top row**: shows the camera image with overlayed ground truth bounding boxes \(-\), predictions using YOLOv8m \(-\), and predictions using YOLOv8m + BoT-SORT \(-\). **Bottom row**: shows the LiDAR point cloud with ground truth boxes \(-\), predictions from SECOND trained only on nuScenes \(-\), and SECOND trained on nuScenes with additional fine-tuning on SemanticSpray++ \(-\). The semantic labels for the LiDAR point cloud have the following color map: \(\bullet\)_background_ \(\bullet\)_foreground_ \(\bullet\)_noise_.
present statistics for labels across the different sensor modalities and tasks. Additionally, we evaluate the performance of popular 2D and 3D object detectors and 3D point cloud semantic segmentation methods and give insights on how spray affects their performances.
In future work, we aim to provide additional labels for the dataset, such as semantic masks for the camera images, and more properties for the object labels, such as occlusion levels.
## VI Acknowledgement
We would like to thank the authors of the RoadSpray dataset [13], on which our work is based. Their effort in meticulously and comprehensively recording the large amount of data allowed us to build the proposed dataset.
## References
* [1] X. Du, Z. Wang, M. Cai, and Y. Li, \"Vos: Learning what you don't know by virtual outlier synthesis,\" _arXiv preprint arXiv:2202.01197_, 2022.
* [2] A. Piroli, V. Dallabetta, M. Walessa, D. Meissner, J. Kopp, and K. Dietmayer, \"Detection of condensed vehicle gas exhaust in lidar point clouds,\" in _2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2022, pp. 600-606.
* [3] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, \"On calibration of modern neural networks,\" in _International conference on machine learning_. PMLR, 2017, pp. 1321-1330.
* [4] A. Piroli, V. Dallabetta, J. Kopp, M. Walessa, D. Meissner, and K. Dietmayer, \"LS-VOS: Identifying Outliers in 3D Object Detections Using Latent Space Virtual Outlier Synthesis,\" in _2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2023, pp. 1242-1248.
* [5] T. DeVries and G. W. Taylor, \"Learning confidence for out-of-distribution detection in neural networks,\" _arXiv preprint arXiv:1802.04865_, 2018.
* [6] A. Piroli, V. Dallabetta, M. Walessa, D. Meissner, J. Kopp, and K. Dietmayer, \"Robust 3d object detection in cold weather conditions,\" in _2022 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, 2022, pp. 287-294.
* [7] M. Dreissig, D. Scheuble, F. Piewak, and J. Boedecker, \"Survey on LiDAR Perception in Adverse Weather Conditions,\" _arXiv preprint arXiv:2304.06312_, 2023.
* [8] A. Piroli, V. Dallabetta, J. Kopp, M. Walessa, D. Meissner, and K. Dietmayer, \"Energy-based Detection of Adverse Weather Effects in LiDAR Data,\" _IEEE Robotics and Automation Letters_, 2023.
* [9] ----, \"Towards Robust 3D Object Detection In Rainy Conditions,\" in _2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)_. IEEE, 2023, pp. 3471-3477.
* [10] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, _et al._, \"Scalability in perception for autonomous driving: Waymo open dataset,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2446-2454.
* [11] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, \"nuscenes: A multimodal dataset for autonomous driving,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 11 621-11 631.
* [12] Y.-C. Shih, W.-H. Liao, W.-C. Lin, S.-K. Wong, and C.-C. Wang, \"Reconstruction and synthesis of lidar point clouds of spray,\" _IEEE Robotics and Automation Letters_, vol. 7, no. 2, pp. 3765-3772, 2022.
* [13] C. Linnhoff, L. Elster, P. Rosenberger, and H. Winner, \"Road spray in lidar and radar data for individual moving objects,\" 2022.
* [14] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \"Vision meets robotics: The kitti dataset,\" _The International Journal of Robotics Research_, vol. 32, no. 11, pp. 1231-1237, 2013.
* [15] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, \"Semantickitti: A dataset for semantic scene understanding of lidar sequences,\" in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 9297-9307.
* [16] M.-F. Chang, I. W. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, and J. Hays, \"Argoverse:
Fig. 5: Confusion matrix of SPVCNN trained on the nuScenes-semantic and SemanticKITTI datasets and evaluated on the SemanticSpray++ dataset. Notice that, as the training labels do not match the test labels, the matrices are not square. Additionally, small values (\\(<0.01\\)) are truncated to 0 for visualization purposes.
3D Tracking and Forecasting with Rich Maps,\" in _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2019.
* [17] M. Alibeigi, W. Ljungbergh, A. Tonderski, G. Hess, A. Lilja, C. Lindstrom, D. Motorniuk, J. Fu, J. Widahl, and C. Petersson, "Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023, pp. 20 178-20 188.
* [18] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, \"The cityscapes dataset for semantic urban scene understanding,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 3213-3223.
* [19] F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, \"Bdd100k: A diverse driving dataset for heterogeneous multitask learning,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2636-2645.
* [20] M. Bijelic, T. Gruber, F. Mannan, F. Kraus, W. Ritter, K. Dietmayer, and F. Heide, \"Seeing through fog without seeing fog: Deep multi-modal sensor fusion in unseen adverse weather,\" in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 11 682-11 692.
* [21] A. Xiao, J. Huang, W. Xuan, R. Ren, K. Liu, D. Guan, A. E. Saddik, S. Lu, and E. Xing, \"3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds,\" _arXiv preprint arXiv:2304.06960_, 2023.
* [22] R. Heinzler, F. Piewak, P. Schindler, and W. Stork, \"Cnn-based lidar point cloud de-noising in adverse weather,\" _IEEE Robotics and Automation Letters_, vol. 5, no. 2, pp. 2514-2521, 2020.
* [23] A. Pfeuffer, M. Schon, C. Ditzel, and K. Dietmayer, \"The ADUULM-Dataset-a Semantic Segmentation Dataset for Sensor Fusion.\" in _BMVC_, 2020.
* [24] A. Kurup and J. Bos, \"Dsor: A scalable statistical filter for removing falling snow from lidar point clouds in severe winter weather,\" _arXiv preprint arXiv:2109.07078_, 2021.
* [25] M. Pitropov, D. E. Garcia, J. Rebello, M. Smart, C. Wang, K. Czarnecki, and S. Waslander, "Canadian adverse driving conditions dataset," _The International Journal of Robotics Research_, vol. 40, no. 4-5, pp. 681-690, 2021.
* [26] M. Sheeny, E. De Pellegrin, S. Mukherjee, A. Ahrabian, S. Wang, and A. Wallace, \"RADIATE: A radar dataset for automotive perception in bad weather,\" in _2021 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2021, pp. 1-7.
* [27] O. D. Team, "OpenPCDet: An Open-source Toolbox for 3D Object Detection from Point Clouds," https://github.com/open-mmlab/OpenPCDet, 2020.
* [28] H. Tang, Z. Liu, S. Zhao, Y. Lin, J. Lin, H. Wang, and S. Han, \"Searching efficient 3d architectures with sparse point-voxel convolution,\" in _European conference on computer vision_. Springer, 2020, pp. 685-702.
* [29] N. Hagedorn, "OpenLABEL Concept Paper." [Online]. Available: https://www.asam.net/project-detail/asam-openlabel-v100/
* [30] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, \"Pointpillars: Fast encoders for object detection from point clouds,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 12 697-12 705.
* [31] Y. Yan, Y. Mao, and B. Li, \"Second: Sparsely embedded convolutional detection,\" _Sensors_, vol. 18, no. 10, p. 3337, 2018.
* [32] T. Yin, X. Zhou, and P. Krahenbuhl, \"Center-based 3d object detection and tracking,\" in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2021, pp. 11 784-11 793.
* [33] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, \"You only look once: Unified, real-time object detection,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 779-788.
* [34] G. Jocher, A. Chaurasia, and J. Qiu, "Ultralytics YOLOv8," 2023. [Online]. Available: https://github.com/ultralytics/ultralytics
* [35] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, \"Microsoft coco: Common objects in context,\" in _Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_. Springer, 2014, pp. 740-755.
* [36] Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and X. Wang, "ByteTrack: Multi-object tracking by associating every detection box," in _European Conference on Computer Vision_. Springer, 2022, pp. 1-21.
* [37] N. Aharon, R. Orfaig, and B.-Z. Bobrovsky, \"BoT-SORT: Robust associations multi-pedestrian tracking,\" _arXiv preprint arXiv:2206.14651_, 2022.
# Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data
Svenja Ehlers
[email protected]
Marco Klein
Alexander Heinlein
Mathies Wedler
Nicolas Desmars
Norbert Hoffmann
Merten Stender
Hamburg University of Technology, Dynamics Group, Schlossmuhlendamm 30, 21073 Hamburg, Germany
German Aerospace Center, Institute of Maritime Energy Systems, Ship Performance Dep., 21502 Geesthacht, Germany
Delft University of Technology, Delft Institute of Applied Mathematics, 2628 CD Delft, Netherlands
Imperial College London, Department of Mechanical Engineering, London SW7 2AZ, United Kingdom
Technische Universitat Berlin, Cyber-Physical Systems in Mechanical Engineering, 10623 Berlin, Germany
## 1 Introduction
Offshore installations and vessels are strongly impacted by the dynamics of the surrounding ocean waves. Thus, accurate predictions of future wave conditions are desirable for enhancing their safe and efficient operation. For this purpose, several numerical methods have been developed, involving two fundamental steps: the assimilation and reconstruction of initial wave conditions from wave measurement data, followed by the prediction of the future wave evolution. While one line of research focuses on predicting simplified phase-averaged wave quantities based on statistical parameters, marine applications such as wind turbine installations, helicopter landings, or control of wave energy converters require phase-resolved spatio-temporal wave information \\(\\eta(x,t)\\) to identify periods of low wave conditions or enable extreme event warnings. The X-band radar is a remote sensing device that can obtain such phase-resolved wave information. However, the radar backscatter is affected by the geometrical mechanism of tilt and shadowing modulation, creating a nonlinear and sparse relationship between radar measurement intensities \\(\\xi(x,t)\\) and the actual ocean wave surface elevation \\(\\eta(x,t)\\). This makes a reconstruction of wave information from radar information necessary in the assimilation step, which is also referred to as _radar inversion_ and is graphically exemplified in Figure 1.
Contemporary phase-resolved wave reconstruction and prediction methods face a trade-off between accuracy and real-time capability. To achieve computationally efficient methods, linear wave theory (LWT) is commonly employed during the prediction step (cf. Morris et al., 1998; Naaijen and Wijaya, 2014; Hilmer and Thornhill, 2015), along with prior spectral- or texture-analysis-based reconstruction of initial wave conditions from radar data (Borge et al., 2004; Dankert and Rosenthal, 2004). However, these reconstruction methods necessitate additional calibration by wave buoys or rely on simplified assumptions concerning the radar backscatter. Furthermore, the accuracy of the linear approach decreases remarkably for larger temporal horizons of prediction and increasing wave steepness (Lunser et al., 2022), necessitating wave prediction using nonlinear wave models, especially for capturing safety-critical events such as rogue waves (Ducrozet et al., 2007; Kharif et al., 2009). Comparative studies on phase-resolved nonlinear ocean wave prediction have demonstrated that the high-order spectral (HOS) method, introduced by West et al. (1987) and Dommermuth and Yue (1987), provides the best prediction accuracy over a wide spatio-temporal domain as well as characteristic wave steepness (Klein et al., 2020; Wu, 2004; Lunser et al., 2022; Blondel-Couprie, 2009). While the HOS prediction step itself is numerically efficient, the reconstruction step currently represents the weakest part of the entire process (Kollisch et al., 2018): the inversion of initial conditions relies on an optimization procedure of the wave model parameters for the subsequent prediction (Wu, 2004; Blondel-Couprie, 2009), which decreases the possible horizon of prediction and has so far hindered real-time capability (Desmars, 2020). Even though the alternative for the HOS inversion proposed by Kollisch et al. (2018) is able to improve the real-time capability, this method instead assumes an unrealistic radar snapshot data rate \(\Delta t_{\rm r}\), making it unsuitable for real-world applications (Desmars, 2020).
The aforementioned shortcomings of conventional ocean wave reconstruction and prediction methods have motivated the exploration of alternatives based on machine learning (ML) techniques. For instance, ML methods are able to predict simple phase-averaged wave quantities such as significant wave height \\(H_{\\rm s}\\), peak period \\(T_{\\rm p}\\) or mean wave direction (cf. Deo et al., 2001; Asma et al., 2012; James et al., 2018; Wu et al., 2020; Yevnin and Toledo, 2022). Recent advancements have also allowed for the more complex task of predicting the spatio-temporal evolution of phase-resolved wave fields, achieved by training multilayer perceptrons (MLPs) (Desouky and Abdelkhalik, 2019; Law et al., 2020; Duan et al., 2020; Zhang et al., 2022), recurrent neural networks (RNNs) (Kagemoto, 2020; Mohaghegh et al., 2021; Liu et al., 2022), or convolutional neural networks (CNNs) (Klein et al., 2022; Wedler et al., 2023) on synthetic or experimental one-dimensional elevation data. However, these studies presuppose that either temporal sequences of wave elevations can be solely measured at a single point in space by buoys \\(\\eta(x=x_{\\rm p},t)\\) or snapshots of initial wave conditions are available throughout the entire space domain \\(\\eta(x,t=t_{\\rm s})\\). In practice, neither of these assumptions is feasible due to the lack of directional wave information of single-point measurements and the fact that the acquisition of spatial snapshots using remote sensing systems such as radars leads to sparse and unscaled observations \\(\\xi(x,t=t_{\\rm s})\\), requiring a reconstruction of wave surface elevations first.
Consequently, it would be advantageous to employ ML methods also for the phase-resolved reconstruction of wave elevations \\(\\eta(x,t)\\) from X-band radar data \\(\\xi(x,t)\\). However, as far as the authors are aware, this topic has not yet been addressed. Prior studies have solely focused on reconstructing phase-averaged
Figure 1: Graphical illustration of the phase-resolved reconstruction task of ocean wave surfaces \\(\\eta\\) from sparse radar intensity surfaces \\(\\xi\\) for the case of waves travelling in one spatial dimension. The radar measurement (left panel) is a snapshot acquired at time instant \\(t_{\\rm s}\\) and is considered as _sparse_ due to reoccurring areas with zero intensity caused by the geometrical shadowing modulation. This radar snapshot is used for reconstructing the wave surface elevation at the same time instant \\(t_{\\rm s}\\) (right panel).
statistical parameters of the prevailing sea state from radar data. For instance, Vicen-Bueno et al. (2012) and Salcedo-Sanz et al. (2015) improved the estimation of \(H_{\mathrm{s}}\) by extracting scalar features from sequences of radar images \(\xi(x,t)\) in a preprocessing step, which in turn were employed to train MLPs and support vector regression models. In contrast, Yang et al. (2021) extracted features from each of the consecutive radar images individually for improved \(H_{\mathrm{s}}\) estimation at the current time instant. While these methods rely on handcrafted features acquired during a preprocessing step, end-to-end approaches that automatically extract important features from their input have also been proposed. For instance, Duan et al. (2020) and Chen and Huang (2022) used CNN-based methods to estimate \(H_{\mathrm{s}}\) and \(T_{\mathrm{p}}\) from radar images.
Although there seems to be no relevant research on ML-based reconstruction of phase-resolved wave surfaces from sparse X-band radar data, we hypothesize that ML offers a valuable alternative for the radar inversion task (Hypothesis 1). This hypothesis is derived from the observation that the reconstruction of zero-valued areas in the radar input, exemplified in Figure 1, shares similarities with typical inverse problems encountered in imaging (Bertero et al., 2022; Ongie et al., 2020) such as inpainting and restoration, where ML methods have been applied successfully (Pathak et al., 2016; Zhang et al., 2017). Two neural network architectures, whose components involve either a local or a global approach to data processing, are investigated in detail for their performance in our task. Specifically, we will adapt the U-Net proposed by Ronneberger et al. (2015), a fully convolutional neural network that employs a mapping approach in Euclidean space, and the Fourier neural operator (FNO) proposed by Li et al. (2020), which is designed to learn a more global mapping in Fourier space. Despite the success of CNN-based approaches in imaging problems, we hypothesize that FNO models may be better suited for handling the complex and dynamic nature of ocean waves (Hypothesis 2), since the FNO learns data patterns in Fourier space, where wave features are encoded more explicitly in the network structure. In contrast, the U-Net needs to learn these wave features by aggregating information from multiple layers. Lastly, we expect that incorporating historical context via spatio-temporal radar data will enhance the reconstruction quality of both ML architectures (Hypothesis 3), which we infer from classical radar inversion methods that also rely on temporal sequences of multiple radar snapshots (cf. Dankert and Rosenthal, 2004; Borge et al., 2004).
In general, the fast inference capabilities of trained ML models make them ideal for maintaining the real-time capability of the entire process composed of wave reconstruction and prediction (Criterion 1), owing to the rapid surface reconstruction without particular data preprocessing. Besides real-time capability, ensuring high reconstruction accuracy is crucial to prevent initial reconstruction errors from accumulating and deteriorating the subsequent wave prediction. Hence, we strive for a surface similarity parameter (SSP) error (Perlin and Bustamante, 2014) of SSP \(\leq 0.10\) between ground truth and reconstructed wave surfaces (Criterion 2), a commonly used error threshold in ocean wave research (Klein et al., 2020; Lunser et al., 2022). In addition, the proposed ML methods must be capable of handling real-world measurement conditions of radar snapshots taken at intervals of \(\Delta t_{\mathrm{r}}=[1,\,2]\,\mathrm{s}\) (Criterion 3), a common X-band radar revolution period (Neill and Hashemi, 2018).
To summarize, the objective of this work is to develop an ML-based approach for phase-resolved radar inversion. This involves training ML models to learn mapping functions \(\mathcal{M}\) that are able to reconstruct spatial wave elevation snapshots \(\eta(x,t=t_{\mathrm{s}})\) from one or \(n_{\mathrm{s}}\) consecutive historical radar snapshots \(\xi(x,t_{j})\), where \(t_{j}=\{t_{\mathrm{s}}-j\Delta t_{\mathrm{r}}\}_{j=0,\ldots,n_{\mathrm{s}}-1}\). As obtaining ground truth wave surface elevation data for large spatial domains in real ocean conditions is practically infeasible, we first generate synthetic yet highly realistic one-dimensional spatio-temporal wave surfaces \(\eta(x,t)\) using the HOS method for different sea states in Section 2. The corresponding X-band radar surfaces \(\xi(x,t)\) are generated using a geometric approach and incorporate tilt- and shadowing modulations. In Section 3, two neural network architectures are introduced, a U-Net-based and an FNO-based network, which are investigated for their suitability for radar inversion. In Section 4, we discuss the computational results. In particular, we first compare the wave reconstruction performance of the U-Net-based and the FNO-based models, each trained using either a single radar snapshot (\(n_{\mathrm{s}}=1\)) per input or spatio-temporal input data consisting of \(n_{\mathrm{s}}\) consecutive radar snapshots. Afterwards, the observations are generalized for the entire data set and discussed. Finally, in Section 5, we draw conclusions based on these results and suggest future research directions.
## 2 Data generation and preparation
This section briefly introduces the generation of long-crested nonlinear synthetic wave data \(\eta(x,t)\) using the HOS method, followed by the generation of synthetic radar data \(\xi(x,t)\) that accounts for the tilt- and shadowing modulation mechanisms. The final step involves extracting a number of \(N\) input-output data samples \((\mathbf{x}_{i},\mathbf{y}_{i}),\,i=1,\ldots,N\), from the synthetic radar and wave data, which we employ to train the supervised ML models. This first study on ML-based phase-resolved wave reconstruction focuses on the scenario of one-dimensional wave and radar data, driven by the advantages of easier data generation, simplified implementation, and faster neural network training with fewer computational resources.
### Nonlinear synthetic wave data
To generate synthetic one-dimensional wave data, the water-wave problem can be expressed by potential flow theory. Assuming an incompressible, inviscid Newtonian fluid and irrotational flow, the underlying wave model is described by a velocity potential \(\Phi(x,z,t)\) satisfying the _Laplace equation_
\\[\
abla^{2}\\Phi=\\frac{\\partial^{2}\\Phi}{\\partial x^{2}}+\\frac{\\partial^{2}\\Phi} {\\partial z^{2}}=0 \\tag{1}\\]
within the fluid domain, where \\(z=0\\,\\mathrm{m}\\) is the mean free surface with \\(z\\) pointing in upward direction. The domain is bounded by the _kinematic_ and _dynamic boundary conditions_ at the free surface \\(\\eta(x,t)\\) and the _bottom boundary condition_ at the seabed at depth \\(d\\)
\\[\\eta_{t}+\\eta_{x}\\Phi_{x}-\\Phi_{z} =0 \\text{on }z=\\eta(x,t) \\tag{2}\\] \\[\\Phi_{t}+g\\eta+\\frac{1}{2}\\left(\\Phi_{xx}^{2}+\\Phi_{zz}^{2}\\right) =0 \\text{on }z=\\eta(x,t)\\] \\[\\Phi_{z} =0 \\text{on }z=-d.\\]
Solving this system of equations is challenging due to the nonlinear terms in the boundary conditions, which must additionally be satisfied at the unknown free surface \(\eta(x,t)\). Even though linear wave theory (Airy, 1849) provides adequate approximations for certain engineering applications, capturing realistic ocean wave effects requires modelling the nonlinear behaviour of surface gravity waves. Thus, we employ the HOS method, as formulated by West et al. (1987), which transforms the boundary conditions to the free surface and expresses them as a perturbation series of nonlinear order \(M\) around \(z=0\). In practice, an order of \(M\leq 4\) is sufficient for capturing the nonlinear wave effects of interest (Desmars, 2020; Lunser et al., 2022). The HOS simulation is linearly initialized by spatial wave surface elevation snapshots \(\eta(x,t_{\text{s}}=0)\) sampled from the JONSWAP spectrum for finite water depth (Hasselmann et al., 1973; Bouws et al., 1985). The corresponding initial potential is linearly approximated. Subsequently, the initial elevation and potential are propagated nonlinearly in time with the chosen HOS order \(M\). The JONSWAP spectrum attains its maximum at a peak frequency \(\omega_{\text{p}}\), whereas the peak enhancement factor \(\gamma\) determines the energy distribution around \(\omega_{\text{p}}\). The wave frequencies \(\omega\) are linked to the wavenumbers \(k\) by the linear dispersion relation \(\omega=\sqrt{gk\cdot\tanh\left(kd\right)}\). The relations \(\omega=2\pi/T\) and \(k=2\pi/L\) allow for substituting the peak frequency with a peak period \(T_{\text{p}}\), peak wavelength \(L_{\text{p}}\), or peak wavenumber \(k_{\text{p}}\). Moreover, a dimensionless wave steepness parameter \(\epsilon=k_{\text{p}}\cdot H_{\text{s}}/2\) is defined based on the significant wave height \(H_{\text{s}}\). For more details on the HOS simulation, consider the work of Wedler et al. (2023) or Lunser et al. (2022), for example.
In this study, we select a wave domain length of \(4000\,\mathrm{m}\), discretized by \(n_{x}=1024\) grid points, resulting in \(\Delta x=3.906\,\mathrm{m}\). A peak enhancement factor of \(\gamma=3\) is employed to emulate North Sea conditions. The water depth is \(d=500\,\mathrm{m}\) and the sea state parameters peak wavelength \(L_{\text{p}}\) and steepness \(\epsilon\) are varied systematically over \(L_{\text{p}}\in\{80,90,\ldots,190,200\}\,\mathrm{m}\) and \(\epsilon\in\{0.01,0.02,\ldots,0.09,0.10\}\), resulting in 130 possible \(L_{\text{p}}\)-\(\epsilon\)-combinations. For each \(L_{\text{p}}\)-\(\epsilon\)-combination, we generate four different initial surfaces \(\eta(x,t_{\text{s}}=0)\) by superimposing the wave components of the JONSWAP spectrum with random phase shifts. The subsequent wave evolution \(\eta(x,t>0)\) for \(t=0,\ldots,50\,\mathrm{s}\) with \(\Delta t_{\text{save}}=0.1\,\mathrm{s}\) is performed considering the nonlinearities imposed by HOS order \(M=4\). As a result, we generate a total of 520 unique spatio-temporal HOS wave data arrays, each of shape \(E_{\text{HOS}}\in\mathbb{R}^{1024\times 500}\), where \((E_{\text{HOS}})_{kj}=\eta(x_{k},t_{j})\) with \(x_{k}=k\cdot\Delta x\) and \(t_{j}=j\cdot\Delta t_{\text{save}}\).
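For illustration, the linear initialization step described above can be sketched in a few lines: component amplitudes are sampled from a JONSWAP-type spectral shape and superimposed with random phases. The deep-water spectral form, the normalization to \(H_{\mathrm{s}}=2\epsilon/k_{\mathrm{p}}\), and all function names are our assumptions for illustration; the finite-depth correction (Bouws et al., 1985) and the nonlinear HOS propagation itself are omitted.

```python
import numpy as np

def jonswap(omega, omega_p, gamma=3.0, g=9.81):
    """Unnormalized JONSWAP spectral shape (deep-water sketch)."""
    sigma = np.where(omega <= omega_p, 0.07, 0.09)
    r = np.exp(-((omega - omega_p) ** 2) / (2.0 * sigma**2 * omega_p**2))
    return g**2 * omega**-5.0 * np.exp(-1.25 * (omega_p / omega) ** 4) * gamma**r

def initial_surface(n_x=1024, L=4000.0, L_p=180.0, eps=0.05, d=500.0, g=9.81):
    """Linear initial surface eta(x, t=0) via random-phase superposition."""
    x = np.arange(n_x) * (L / n_x)
    k = 2.0 * np.pi * np.arange(1, n_x // 2) / L        # wavenumber grid
    omega = np.sqrt(g * k * np.tanh(k * d))             # linear dispersion
    k_p = 2.0 * np.pi / L_p
    omega_p = np.sqrt(g * k_p * np.tanh(k_p * d))
    a = np.sqrt(2.0 * jonswap(omega, omega_p) * np.gradient(omega))
    # rescale amplitudes so that H_s = 4*sqrt(m0) matches H_s = 2*eps/k_p
    a *= (2.0 * eps / k_p) / (4.0 * np.sqrt(np.sum(a**2 / 2.0)))
    phi = np.random.uniform(0.0, 2.0 * np.pi, size=k.size)
    eta = np.sum(a[:, None] * np.cos(k[:, None] * x[None, :] + phi[:, None]), axis=0)
    return x, eta
```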
### Corresponding synthetic radar data
As X-band radar systems are often pre-installed on marine structures for navigation and object detection purposes, they have also gained attention for observing ocean surface elevations (Borge et al., 1999). The system antenna rotates with a device-specific revolution time \(\Delta t_{\mathrm{r}}\) between \(1\) and \(2\,\mathrm{s}\) (Neill and Hashemi, 2018) while emitting radar beams along a range \(r\). These radar beams interact with short-scale capillary waves distributed on large-scale ocean surface waves by the Bragg resonance phenomenon, resulting in backscatter to the antenna (Valenzuela, 1978). This procedure provides measurement data \(\xi(r,t)\) as a proxy of wave surface elevations \(\eta(r,t)\), which are not directly relatable to each other due to the influence of different modulation mechanisms. Most influential are assumed to be _tilt modulation_ (Dankert and Rosenthal, 2004), _shadowing modulation_ (Borge et al., 2004; Wijaya et al., 2015), or a combination of both (Salcedo-Sanz et al., 2015). In order to generate synthetic radar snapshots for this work, the modulation mechanisms are simulated according to Salcedo-Sanz et al. (2015) and Borge et al. (2004), as illustrated in Figure 2.
_Tilt modulation_ refers to the variation in radar backscatter intensity depending on the local incidence angle \\(\\tilde{\\Theta}(r,t)\\) between the unit normal vector \\(\\mathbf{n}(r,t)\\) perpendicular to the illuminated wave facet \\(\\eta(r,t)\\) and the unit normal vector \\(\\mathbf{u}(r,t)\\) pointing towards the antenna. As the backscatter cannot reach the antenna if the dot product \\(\\mathbf{n}\\cdot\\mathbf{u}\\) approaches negative values for \\(|\\tilde{\\Theta}|>\\frac{\\pi}{2}\\), the tilt modulation \\(\\mathcal{T}\\) is simulated by
\\[\\mathcal{T}(r,t)=\\mathbf{n}(r,t)\\cdot\\mathbf{u}(r,t)=\\cos\\tilde{\\Theta}(r,t) \\quad\\text{ if }\\,|\\tilde{\\Theta}(r,t)|\\leq\\frac{\\pi}{2} \\tag{3}\\]
The _shadowing modulation_ instead occurs when high waves located closer to the antenna obstruct waves at greater distances. Shadowing depends on the nominal incidence angle \\(\\Theta(r,t)\\) of a wave facet \\(\\eta(r,t)\\) with horizontal distance \\(R(r)\\) from the antenna at height \\(z_{\\mathrm{a}}\\) above the mean sea level, geometrically expressed as
\\[\\Theta(r,t)=\\tan^{-1}\\left[\\frac{R(r)}{z_{\\mathrm{a}}-\\eta(r,t)}\\right]. \\tag{4}\\]
At a specific time instance \\(t\\), a wave facet \\(\\eta(r,t)\\) at point \\(r\\) is shadowed in case there is another facet \\(\\eta^{\\prime}=\\eta(r^{\\prime},t)\\) closer to the radar \\(R^{\\prime}=R(r^{\\prime})<R(r)\\) that satisfies the condition \\(\\Theta^{\\prime}=\\Theta(r^{\\prime},t)\\geq\\Theta(r,t)\\). The shadowing-illumination mask \\(\\mathcal{S}\\) can be constructed from this condition as follows
\\[\\mathcal{S}(r,t)=\\begin{cases}0&\\quad\\text{if }\\,R(r^{\\prime})<R(r)\\text{ and } \\Theta(r^{\\prime},t)\\geq\\Theta(r,t),\\\\ 1&\\quad\\text{otherwise.}\\end{cases} \\tag{5}\\]
Assuming that tilt- and shadowing modulation contribute to the radar imaging process, the image intensity is proportional to the local radar cross-section, that is \(\xi(r,t)\sim\mathcal{T}(r,t)\cdot\mathcal{S}(r,t)\). As marine radars are not calibrated, the received backscatter \(\xi(r,t)\) may be normalized to a user-defined range of intensity values.
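The geometric imaging model of Equations (3)-(5) condenses into a short routine, sketched below. It assumes the antenna at horizontal position \(r=0\) (so \(R(r)=r\)) and the dead range already excluded from the grid; this is an illustration of the modulation mechanisms, not the exact simulation code used for data generation.

```python
import numpy as np

def radar_snapshot(eta, r, z_a=18.0):
    """Sketch: tilt modulation (Eq. 3) times shadowing mask (Eq. 5) for one
    wave elevation snapshot eta(r) sampled on the range grid r."""
    # Nominal incidence angle, Eq. (4)
    theta = np.arctan2(r, z_a - eta)
    # Shadowing mask, Eq. (5): a facet is shadowed if any facet closer to the
    # antenna subtends an equal or larger incidence angle.
    S = np.ones_like(eta)
    max_theta = -np.inf
    for i in range(eta.size):
        if theta[i] <= max_theta:
            S[i] = 0.0
        else:
            max_theta = theta[i]
    # Tilt modulation, Eq. (3): dot product of facet normal and antenna vector
    eta_x = np.gradient(eta, r)
    n = np.stack([-eta_x, np.ones_like(eta_x)])
    n /= np.linalg.norm(n, axis=0)
    u = np.stack([-r, z_a - eta])
    u /= np.linalg.norm(u, axis=0)
    T = np.clip((n * u).sum(axis=0), 0.0, None)  # zero for |Theta~| > pi/2
    return T * S  # unscaled intensity xi(r, t_s)
```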
Figure 2: Geometric display of tilt- and shadowing modulation. Tilt modulation \(\mathcal{T}(r,t)\) is characterized by the local incidence angle \(\tilde{\Theta}\) between surface normal vector \(\mathbf{n}\) and antenna vector \(\mathbf{u}\), while shadowing modulation \(\mathcal{S}(r,t)\) of a wave facet occurs if another wave closer to the radar system obstructs the radar beams.
This work aims to develop a robust ML reconstruction method capable of handling even suboptimal antenna installation conditions. For this reason, we consider an X-band radar system with a comparatively low antenna installation height of \(z_{\mathrm{a}}=18\,\mathrm{m}\). This choice causes an increased amount of shadowing-affected areas in radar images, which can be inferred from Equations (4) and (5). Around the antenna exists a dead range \(r_{\mathrm{min}}\) where the radar beams cannot reach the water surface. In this study, we estimate \(r_{\mathrm{min}}=100\,\mathrm{m}\), which is again comparatively small and increases the influence of tilt modulation close to the radar. Moreover, the radar scans the wave surface with a spatial range resolution of \(\Delta r=3.5\,\mathrm{m}\) at \(n_{r}=512\) grid points. Thus, the maximum observation range is computed as \(r_{\mathrm{max}}=1892\,\mathrm{m}\). The radar revolution period is chosen according to Criterion 3 as one snapshot every \(\Delta t_{\mathrm{r}}=1.3\,\mathrm{s}\), i.e., \(n_{t}=38\) radar snapshots for \(50\,\mathrm{s}\) of simulation time. Using these definitions, we first transform the 520 wave data arrays \(E_{\mathrm{HOS}}\in\mathbb{R}^{1024\times 500}\) from their HOS grid to the radar system's grid, yielding \(E_{\mathrm{sys}}\in\mathbb{R}^{512\times 38}\), where \((E_{\mathrm{sys}})_{kj}=\eta(r_{k},t_{j})\) with \(r_{k}=k\cdot\Delta r\) and \(t_{j}=j\cdot\Delta t_{\mathrm{r}}\). To obtain highly realistic corresponding radar observations, we model tilt modulation \(\mathcal{T}(r,t)\) and shadowing modulation \(\mathcal{S}(r,t)\), resulting in 520 radar data arrays, each denoted as \(Z_{\mathrm{sys}}\in\mathbb{R}^{512\times 38}\) with \((Z_{\mathrm{sys}})_{kj}=\xi(r_{k},t_{j})\).
### Preparation of data for machine learning
To train a supervised learning algorithm, labelled input-output data pairs are required. As visualized in Figure 3, from each of the 520 generated radar-wave array pairs we extract six radar input snapshots \(\mathbf{x}_{i}\) from the radar surface array \(Z_{\mathrm{sys}}\) and wave output snapshots \(\mathbf{y}_{i}\) from the wave surface array \(E_{\mathrm{sys}}\) at six distinct time instances \(t_{\mathrm{s}}\) with the largest possible temporal distance. Each output sample \(\mathbf{y}_{i}\in\mathbb{R}^{512\times 1}\) contains a single snapshot at time \(t_{\mathrm{s}}\), while each input sample \(\mathbf{x}_{i}\in\mathbb{R}^{512\times n_{\mathrm{s}}}\) can incorporate a number of \(n_{\mathrm{s}}\) historical radar snapshots at discrete, earlier times \(\{t_{\mathrm{s}}-j\cdot\Delta t_{\mathrm{r}}\}_{j=0,\ldots,n_{\mathrm{s}}-1}\). A single snapshot (\(n_{\mathrm{s}}=1\)) at time \(t_{\mathrm{s}}\) can be used as input; however, as stated in Hypothesis 3, we assume that larger temporal context may enhance the quality of a network's reconstruction \(\mathbf{\hat{y}}_{i}\). Therefore, the optimal value of \(n_{\mathrm{s}}\) is also a subject of investigation, as discussed in Sections 4.1.2 and 4.2.2. In total, \(N=6\cdot 520=3120\) input-output data pair samples are generated, each corresponding to a descriptive \(L_{\mathrm{p}}\)-\(\epsilon\)-combination. The data set thus takes the shape \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N}]^{\mathrm{T}}\in\mathbb{R}^{3120\times 512\times n_{\mathrm{s}}}\) and \(\mathbf{Y}=[\mathbf{y}_{1},\ldots,\mathbf{y}_{N}]^{\mathrm{T}}\in\mathbb{R}^{3120\times 512\times 1}\) and is split into 60% training, 20% validation, and 20% test data using a stratified data split w.r.t. the sea state parameters \((L_{\mathrm{p}},\epsilon)\). This ensures an equal representation of each wave characteristic in the resulting subsets, as described in detail in Appendix A.
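A minimal sketch of the sample extraction and the stratified split could look as follows, assuming per-sample \((L_{\mathrm{p}},\epsilon)\) labels are kept from the data generation step; the function names and the use of scikit-learn are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def extract_samples(Z_sys, E_sys, n_s=10, n_out=6):
    """Cut n_out input-output pairs from one simulation pair
    (Z_sys, E_sys of shape (512, 38)) with maximal temporal spacing."""
    n_t = Z_sys.shape[1]
    t_s = np.linspace(n_s - 1, n_t - 1, n_out).astype(int)
    X = np.stack([Z_sys[:, t - n_s + 1 : t + 1] for t in t_s])  # (n_out, 512, n_s)
    Y = np.stack([E_sys[:, t : t + 1] for t in t_s])            # (n_out, 512, 1)
    return X, Y

def split(X, Y, labels):
    """Stratified 60/20/20 split w.r.t. the (L_p, eps) label of each sample."""
    X_tr, X_tmp, Y_tr, Y_tmp, l_tr, l_tmp = train_test_split(
        X, Y, labels, test_size=0.4, stratify=labels, random_state=0)
    X_va, X_te, Y_va, Y_te, _, _ = train_test_split(
        X_tmp, Y_tmp, l_tmp, test_size=0.5, stratify=l_tmp, random_state=0)
    return (X_tr, Y_tr), (X_va, Y_va), (X_te, Y_te)
```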
## 3 Machine learning methodology
The U-Net (Ronneberger et al., 2015) and the Fourier neural operator (FNO) (Li et al., 2020) are neural network architectures for data with grid-like structures such as our radar and wave surface elevation
Figure 3: Schematic representation of the ML training sample extraction process. The left-hand side illustrates one of the raw radar and wave surface simulations \((Z_{\mathrm{sys}},E_{\mathrm{sys}}\in\mathbb{R}^{512\times 38})\), which are utilized to extract the input-output samples shown on the right-hand side. Each input \(\mathbf{x}_{i}\) consists of \(n_{\mathrm{s}}\) radar snapshots acquired at intervals of \(\Delta t_{\mathrm{r}}=1.3\,\mathrm{s}\), while each output \(\mathbf{y}_{i}\) represents a single-snapshot wave surface elevation at time instant \(t_{\mathrm{s}}\). In total \(N=6\cdot 520=3120\) data samples are generated.
snapshots. Their fundamental difference is the inductive bias encoded by each architecture, which refers to prior assumptions about either the solution space or the underlying data-generating process (Mitchell, 1980; Battaglia et al., 2018). The U-Net is a special type of CNN (LeCun et al., 1989) and imposes an inductive bias by assuming that adjacent data points in Euclidean space are semantically related and learns local mappings between input patches and output features in each layer. This local information is aggregated into more global features due to the utilization of multiple downsampling and convolutional layers. In contrast, the FNO operates under the assumption that the data information can be meaningfully represented in Fourier space. It employs multiple Fourier transformations to learn a mapping between the spectral representation of the input and desired output, directly providing a global understanding of the underlying patterns in the data. This section presents the U-Net- and FNO-based architectures used in our study for radar inversion. In addition, suitable loss and metric functions are introduced for assessing the model's performance.
### U-Net-based network architecture
We first adopt the U-Net concept, originally developed for medical image segmentation by Ronneberger et al. (2015), which has since been applied to a variety of image-to-image translation and surrogate modelling problems, for instance by Isola et al. (2016); Liu et al. (2018); Stoian et al. (2019); Wang et al. (2020); Eichinger et al. (2022); Niekamp et al. (2023) and Stender et al. (2023). The mirrored image dimensions in a fully convolutional encoder-decoder network enable the U-Net's key property: skip-connections that concatenate the output features from the encoding path with the inputs in the decoding path. This enables the reuse of data information of different spatial scales that would otherwise be lost during downsampling and assists the optimizer in finding the minimum more efficiently (Li et al., 2018).
Our proposed encoder-decoder architecture is the result of a four-fold cross-validated hyperparameter study, documented in Table 2 in the appendix. As depicted in Figure 4, the adapted U-Net architecture has a depth of \(n_{\mathrm{d}}=5\) consecutive encoder blocks followed by the same number of consecutive decoder blocks with skip-connections between them.
In more detail, each encoder block in our U-Net-based architecture is composed of a 1D convolutional layer with \(n_{\mathrm{k}}=32\) kernels of size \(s_{\mathrm{k}}=5\), which identify specific features in the input by shifting the smaller-sized kernels, containing the network's trainable weights, across the larger input feature maps in a step-wise manner. Each convolutional layer is followed by a GeLU activation function \(\sigma\) (Hendrycks and Gimpel, 2016) and an average pooling downsampling layer of size 2. To summarize, in the encoding path each radar input sample \(\mathbf{x}_{i}\in\mathbb{R}^{n_{r}\times n_{\mathrm{s}}}\) is transformed by the first convolutional layer resulting in \(v_{\mathrm{c1}}\in\mathbb{R}^{n_{r}\times n_{\mathrm{k}}}\), with \(n_{r}=512\) being the number of spatial grid points and \(n_{\mathrm{s}}\) being the number of historical snapshots in the radar input. Subsequently, this intermediate output is sent through \(\sigma\), before the pooling
Figure 4: Fully convolutional encoder-decoder architecture based on the U-Net (Ronneberger et al., 2015). Each input \\(\\mathbf{x}_{i}\\) is processed by \\(n_{\\mathrm{d}}=5\\) alternating convolutional-, activation- and average pooling layers in the encoding path. The decoding path contains convolutional-, activation- and transpose convolutional layers for a gradual upsampling to calculate the output \\(\\mathbf{\\hat{y}}_{i}\\). Moreover, the outputs of the encoding stages are transferred to the decoding path via skip-connections.
layer reduces the spatial dimension to \(v_{\rm p1}\in\mathbb{R}^{\frac{1}{2}n_{r}\times n_{\rm k}}\). This process is repeated until the final encoding block's output is \(v_{\rm p5}\in\mathbb{R}^{\frac{1}{32}n_{r}\times n_{\rm k}}\). Next, the decoding blocks are applied, each consisting of a convolutional layer with again \(n_{\rm k}=32\) kernels of size \(s_{\rm k}=5\), followed by GeLU activation. Afterwards, the feature maps' spatial dimensions are upsampled using transpose convolutional layers with linear activation. The resulting feature maps are then concatenated with the output of the corresponding stage in the encoding path via skip-connections, before the next convolution is applied. This process is repeated until the final wave output \(\mathbf{\hat{y}}_{i}\in\mathbb{R}^{n_{r}\times 1}\) is calculated using a convolutional layer with a single kernel and linear activation.
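A single encoder stage of the described architecture can be sketched in PyTorch as follows; the padding choice and the exact layer composition are assumptions, while \(n_{\mathrm{k}}=32\) and \(s_{\mathrm{k}}=5\) follow the hyperparameter study.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Sketch of one U-Net encoder stage: Conv1d -> GeLU -> AvgPool1d."""
    def __init__(self, in_channels, n_k=32, s_k=5):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, n_k, kernel_size=s_k, padding=s_k // 2)
        self.act = nn.GELU()
        self.pool = nn.AvgPool1d(kernel_size=2)

    def forward(self, x):
        v = self.act(self.conv(x))   # feature map, shape (B, n_k, n_r)
        return self.pool(v), v       # downsampled output + skip connection

# Example: the first stage maps a (B, n_s, 512) radar input to (B, 32, 256)
x = torch.randn(8, 10, 512)
down, skip = EncoderBlock(in_channels=10)(x)
```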
As indicated above, the U-Net architecture assumes local connections between neighbouring data points, which is accomplished through two mechanisms. Firstly, the convolutional layers use kernels with a receptive field of \(s_{\rm k}=5\) pixels to process different local parts of the larger input feature maps in the same manner. This is referred to as weight sharing, causing a property called _translational equivariance_: each patch of the input is processed by the same kernels. Secondly, the pooling layers induce locality by assuming that information from small local regions of the intermediate feature maps can be meaningfully summarized, creating a property referred to as _translational invariance_ (Goodfellow et al., 2016).
### FNO-based network architecture
In the second step, we explore a neural network based on the FNO (Li et al., 2020). While a CNN is limited to mappings between finite-dimensional spaces, neural operators are additionally capable of learning nonlinear mappings between a more general class of function spaces. This makes the FNO well-suited for capturing the spatio-temporal patterns of various physical problems governed by partial differential equations, provided the solutions are well represented in Fourier space. FNO variants have been applied to, e.g., fluid dynamics (Peng et al., 2022; Li et al., 2022), simulation of multiphase flow (Yan et al., 2022; Wen et al., 2022), weather forecasting (Pathak et al., 2022), material modeling (Rashid et al., 2022; You et al., 2022), and image classification (Williamson et al., 2022).
The FNO-based iterative architecture approach (\\(\\mathbf{x}_{i}\\to v_{0}\\to v_{1}\\rightarrow\\ldots\\rightarrow\\mathbf{\\hat{y}} _{i}\\)) applied in this work is illustrated in Figure 5, while Table 3 in the appendix summarizes the determination of model hyperparameters by four-fold cross-validation.
The proposed FNO transforms radar input data \(\mathbf{x}_{i}\in\mathbb{R}^{n_{r}\times n_{\rm s}}\) into a higher-dimensional latent representation \(v_{0}\in\mathbb{R}^{n_{r}\times n_{\rm w}}\) of channel width \(n_{\rm w}=32\), using a linear neural network layer \(P\) with \(n_{\rm w}\) nodes. Subsequently, the latent representation passes through \(n_{\rm f}=3\) Fourier layers, each consisting of two paths. In the upper path, a global convolution operator defined in Fourier space is applied to each channel of \(v_{0}\) separately utilizing discrete Fourier transforms \(F\). A linear transformation \(R_{0}\) is then applied to the lower-order Fourier modes after truncating the Fourier series at a maximum number of \(n_{\rm m}=64\) modes. Subsequently, this scaled and filtered content is back-transformed to the spatial domain using inverse discrete Fourier transforms \(F^{-1}\). In the lower path, a linear transformation \(W_{0}\) in the spatial domain is applied to the input \(v_{0}\) to account for non-periodic boundary conditions and higher-order modes that are neglected in the upper path of the Fourier layer. The outputs of the upper and lower paths are added, and the sum
Figure 5: Network architecture based on the Fourier neural operator (Li et al., 2020). Each input \(\mathbf{x}_{i}\) is lifted to a higher dimensional representation \(v_{0}\) of channel width \(n_{\rm w}\) by a neural network \(P\). Afterwards, \(n_{\rm f}=3\) Fourier layers are applied to each channel. Finally, \(v_{3}\) is transferred back to the target dimension of the output \(\mathbf{\hat{y}}_{i}\) by another neural network \(Q\). More specifically, each Fourier layer is composed of two paths. The upper one learns a mapping in Fourier space by adapting \(R_{j}\) for scaling and truncating the Fourier series after \(n_{\rm m}\) modes, while the lower one learns a local linear transform \(W_{j}\).
is passed through a nonlinear GeLU activation \(\sigma\) resulting in \(v_{1}\in\mathbb{R}^{n_{r}\times n_{\rm w}}\), before entering the next Fourier layer. In summary, the output of the \((j+1)\)-th Fourier layer is defined as
\\[v_{j+1}=\\sigma\\left(F^{-1}\\left(R_{j}\\cdot F(v_{j})\\right)+W_{j}\\cdot v_{j} \\right). \\tag{6}\\]
Finally, the output \\(v_{3}\\) of the last Fourier layer is transferred to the target wave output dimension \\(\\mathbf{\\hat{y}}_{i}\\in\\mathbb{R}^{n_{r}\\times 1}\\) using another linear layer \\(Q\\). In summary, the FNOs weights correspond to \\(P\\in\\mathbb{R}^{n_{s}\\times d_{w}}\\), \\(Q\\in\\mathbb{R}^{n_{s}\\times d_{w}}\\) and all \\(R_{j}\\in\\mathbb{C}^{d_{w}\\times d_{w}\\times d_{m}}\\) and \\(W_{j}\\in\\mathbb{R}^{d_{w}\\times d_{w}}\\). As the \\(R_{j}\\)-matrices contain the main portion of the total number of weights, most parameters are learned in the Fourier space rather than the original data space.
As previously noted, the FNO architecture incorporates a global inductive bias that assumes the input data exhibits approximately periodic properties and can be effectively represented in Fourier space. Furthermore, the FNO's design presupposes that the Fourier spectrum of the input data is smooth, enabling its frequency components to be represented by a limited number of low-wavenumber Fourier coefficients, as the \\(R_{j}\\) matrices, which are responsible for the global mapping, truncate higher-frequency modes.
### Training and evaluation
Both the U-Net- and FNO-based architecture are implemented using the PyTorch library (Paszke et al., 2019). To enable a fair comparison and account for wave training data of varying spatial scales, the mean of the relative L2-norm of the error is employed as loss function \\(\\mathcal{L}\\) for both architectures. The relative L2-norm error for one sample \\(i\\) is defined as follows, where \\(\\mathbf{y}_{i}\\) and \\(\\mathbf{\\hat{y}}_{i}\\in\\mathbb{R}^{512\\times 1}\\) represent the true and reconstructed wave surface
\\[\\text{nL2}(\\mathbf{y}_{i},\\mathbf{\\hat{y}}_{i})=\\text{nL2}_{i}=\\frac{\\| \\mathbf{\\hat{y}}_{i}-\\mathbf{y}_{i}\\|_{2}}{\\|\\mathbf{y}_{i}\\|_{2}}. \\tag{7}\\]
While we use the subscript \(i\) to represent a sample-specific error \(\text{nL2}_{i}\), the value \(\text{nL2}\) without a subscript denotes the mean value across a number of samples \(N\), for example the mean error across the training set \(\mathcal{L}:=\text{nL2}=\frac{1}{N_{\text{train}}}\sum_{i=1}^{N_{\text{train}}}\text{nL2}(\mathbf{y}_{i},\mathbf{\hat{y}}_{i})\). To minimize the loss, we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of \(0.001\). The training is executed for \(800\) epochs on an NVIDIA GeForce RTX \(3050\) Ti Laptop GPU. For both the U-Net-based models \(\mathcal{M}_{\text{U},n_{s}}\) and the FNO-based models \(\mathcal{M}_{\text{F},n_{s}}\), only the model with the lowest test loss within the \(800\) epochs is stored for performance evaluation and visualization.
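For reference, the batch-averaged loss of Eq. (7) can be written compactly; the function name is illustrative.

```python
import torch

def relative_l2(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Batch mean of the relative L2-norm error, Eq. (7).
    y_hat, y: tensors of shape (batch, n_r, 1)."""
    num = torch.linalg.vector_norm(y_hat - y, dim=(1, 2))
    den = torch.linalg.vector_norm(y, dim=(1, 2))
    return (num / den).mean()

# Usage in a typical training step with the settings stated above
# (Adam, learning rate 0.001); `model` is either architecture:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = relative_l2(model(x_batch), y_batch)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```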
Established machine learning metrics based on Euclidean distances treat the deviation of two surfaces in frequency or phase as amplitude errors (Wedler et al., 2022). Therefore, we introduce the surface similarity parameter (SSP) proposed by Perlin and Bustamante (2014) as an additional performance metric
\\[\\text{SSP}(\\mathbf{y}_{i},\\mathbf{\\hat{y}_{i}})=\\text{SSP}_{i}=\\frac{\\sqrt{ \\int|F_{\\mathbf{y}_{i}}(k)-F_{\\mathbf{\\hat{y}}_{i}}(k)|^{2}dk}}{\\sqrt{\\int|F_ {\\mathbf{y}_{i}}(k)|^{2}dk}+\\sqrt{\\int|F_{\\mathbf{\\hat{y}}_{i}}(k)|^{2}dk}} \\in[0,1], \\tag{8}\\]
where \\(k\\) denotes the wavenumber vector and \\(F_{\\mathbf{y}_{i}}\\) denotes the discrete Fourier transform of a surface \\(\\mathbf{y}_{i}\\). The SSP is a normalized error metric, with \\(\\text{SSP}_{i}=0\\) indicating perfect agreement and \\(\\text{SSP}_{i}=1\\) a comparison against zero or of phase-inverted surfaces. As the SSP combines phase-, amplitude-, and frequency errors in a single quantity, it is used in recent ocean wave prediction and reconstruction studies by Klein et al. (2020, 2022), Wedler et al. (2022, 2023), Desmars et al. (2021, 2022) and Lunser et al. (2022).
While metrics such as the \(\text{nL2}_{i}\) or \(\text{SSP}_{i}\) evaluate the average reconstruction quality of each \(\mathbf{\hat{y}}_{i}\in\mathbb{R}^{n_{r}\times 1}\) across the entire spatial domain \(r\) with \(n_{r}=512\) grid points, it is important to consider the potential imbalance in reconstruction error between areas where the radar input \(\mathbf{x}_{i}\) was either shadowed or visible. This imbalance can be quantified by the ratio \(\frac{\text{nL2}_{\text{shad}_{i}}}{\text{nL2}_{\text{vis}_{i}}}\). Here, \(\text{nL2}_{\text{shad}_{i}}=\text{nL2}(\mathbf{y}_{\text{shad}_{i}},\mathbf{\hat{y}}_{\text{shad}_{i}})\) and \(\text{nL2}_{\text{vis}_{i}}=\text{nL2}(\mathbf{y}_{\text{vis}_{i}},\mathbf{\hat{y}}_{\text{vis}_{i}})\) are the errors of the output wave elevations in the shadowed and visible areas, respectively. We separate the visible and shadowed parts using the shadowing mask \(\mathcal{S}\) introduced in Eq. (5), where \(\mathbf{y}_{\text{vis}_{i}}=\mathcal{S}\cdot\mathbf{y}_{i}\) and \(\mathbf{y}_{\text{shad}_{i}}=(1-\mathcal{S})\cdot\mathbf{y}_{i}\). Afterwards, all cells with zero entries are removed from the output arrays, such that the numbers of visible and shadowed data points are \(n_{\text{vis}_{i}}\) and \(n_{\text{shad}_{i}}\), respectively, with \(\mathbf{y}_{\text{vis}_{i}},\mathbf{\hat{y}}_{\text{vis}_{i}}\in\mathbb{R}^{n_{\text{vis}_{i}}\times 1}\), \(\mathbf{y}_{\text{shad}_{i}},\mathbf{\hat{y}}_{\text{shad}_{i}}\in\mathbb{R}^{n_{\text{shad}_{i}}\times 1}\), and \(n_{\text{vis}_{i}}+n_{\text{shad}_{i}}=n_{r}=512\). A high value of the ratio indicates that the reconstruction in areas that were shadowed in the input is much worse than in the visible areas. We thus strive not only for low \(\text{nL2}_{i}\) values, but also for low \(\frac{\text{nL2}_{\text{shad}_{i}}}{\text{nL2}_{\text{vis}_{i}}}\) ratios to achieve uniform reconstructions. We define this ratio only for the Euclidean-distance-based \(\text{nL2}_{i}\) and not for the \(\text{SSP}_{i}\), as small sections of \(\mathbf{y}_{i}\) and \(\mathbf{\hat{y}}_{i}\) cannot be meaningfully considered in Fourier space.
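The imbalance ratio can be computed per sample as sketched below; the helper name is illustrative, and \(\mathcal{S}\) denotes the 0/1 shadowing mask of Eq. (5) for the most recent radar snapshot.

```python
import numpy as np

def shadow_visible_ratio(y, y_hat, S):
    """Per-sample error imbalance nL2_shad / nL2_vis."""
    y, y_hat = np.ravel(y), np.ravel(y_hat)
    vis = np.ravel(S).astype(bool)  # True where the input was visible
    nl2 = lambda a, b: np.linalg.norm(b - a) / np.linalg.norm(a)
    return nl2(y[~vis], y_hat[~vis]) / nl2(y[vis], y_hat[vis])
```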
## 4 Results
This work explores the potential of utilizing machine learning for the reconstruction of one-dimensional ocean wave surfaces \(\eta\) from radar measurement surfaces \(\xi\) at a time instance \(t_{\text{s}}\). Therefore, each radar input sample \(\mathbf{x}_{i}\in\mathbb{R}^{n_{r}\times n_{\text{s}}}\), with \(n_{r}=512\) being the number of spatial grid points in range direction and \(n_{\text{s}}\) being the number of radar snapshots, is acquired according to Section 2. Each input \(\mathbf{x}_{i}\) is to be mapped to the desired wave surface output \(\mathbf{y}_{i}\in\mathbb{R}^{n_{r}\times 1}\) via an ML model \(\mathcal{M}\). We examine the impact of the inductive bias of the U-Net-based models \(\mathcal{M}_{\text{U},n_{\text{s}}}\) and the FNO-based models \(\mathcal{M}_{\text{F},n_{\text{s}}}\) proposed in Section 3, as well as the impact of the number of historical radar snapshots \(n_{\text{s}}\) included in each input \(\mathbf{x}_{i}\). We train the models using a data set of \(N_{\text{train}}=2496\) samples to learn the mapping \(\mathcal{M}:\mathbf{X}\rightarrow\mathbf{Y}\) with \(\mathbf{X}\in\mathbb{R}^{2496\times n_{r}\times n_{\text{s}}},\mathbf{Y}\in\mathbb{R}^{2496\times n_{r}\times 1}\). Afterwards, we evaluate their performance using the previously excluded test set of \(N_{\text{test}}=624\) samples. The results are summarized in Table 1 and are discussed with regard to the pre-stated Hypotheses 1-3 and Criteria 1-3 in detail in the subsequent subsections.
### Performance of the U-Net-based model
In the first step of our investigation, we examine the ability of U-Net-based models \\(\\mathcal{M}_{\\text{U},n_{\\text{s}}}\\) to reconstruct wave surfaces along the full spatial dimension, which covers \\(r_{\\text{max}}-r_{\\text{min}}=1792\\,\\text{m}\\) on \\(n_{r}=512\\) grid points. We use the \\(N_{\\text{train}}=2496\\) samples of single snapshot (\\(n_{\\text{s}}=1\\)) radar input data for training. Afterwards, we utilize the same architecture to determine the best number of historical snapshots \\(n_{\\text{s}}>1\\) required in the radar inputs to achieve the best reconstruction performance. We also visually compare reconstructed wave elevations \\(\\mathbf{\\hat{y}}_{i}\\) of two selected samples from the test set with their corresponding true elevations \\(\\mathbf{y}_{i}\\).
#### 4.1.1 U-Net using single-snapshot radar data
Mapping of single-snapshot radar data (\(n_{\text{s}}=1\)) refers to mapping a radar snapshot \(\mathbf{x}_{i}\in\mathbb{R}^{n_{r}\times 1}\) to a wave snapshot \(\mathbf{y}_{i}\in\mathbb{R}^{n_{r}\times 1}\), with \(n_{r}=512\) spatial grid points, both recorded at the same time instant \(t_{\text{s}}\). According to Table 1, the U-Net-based model \(\mathcal{M}_{\text{U},1}\) trained with the available \(N_{\text{train}}=2496\) samples achieves a reconstruction performance given by a mean loss value of \(\text{nL2}=0.329\) across all \(N_{\text{test}}=624\) test set samples after 150 epochs of training. Afterwards, the model tends to overfit the training data, as shown in the loss curve in Figure B.15a. The observed error corresponds to a mean value of \(\text{SSP}=0.171\) across all test set samples, which fails to satisfy Criterion 2 of reconstruction errors below \(\text{SSP}\leq 0.1\).
To identify the origin of reconstruction errors, we employed model \(\mathcal{M}_{\text{U},1}\) to generate reconstructions \(\mathbf{\hat{y}}_{i}\) for two exemplary radar input samples \(\mathbf{x}_{i}\) from the test set. Despite the stratified data split ensuring an equal distribution of sea state parameter combinations \((L_{\text{p}},\epsilon)\) in the training and test set, the errors are unevenly distributed across individual samples \(i\), as exemplarily illustrated in Figure 6: The sample in Figure 6a corresponds to a peak wavelength \(L_{\text{p}}=180\,\text{m}\) and small amplitudes caused by a small steepness of \(\epsilon=0.01\). It exhibits a minor impact from the shadowing modulation mechanism, only affecting \(9.4\%\) of the
| name | architecture | \(n_{\text{s}}\) | epochs | investigated in | nL2 | \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\) | SSP |
|---|---|---|---|---|---|---|---|
| \(\mathcal{M}_{\text{U},1}\) | U-Net-based | 1 | 150 | Sec. 4.1.1 | 0.329 | 2.679 | 0.171 |
| \(\mathcal{M}_{\text{U},10}\) | U-Net-based | 10 | 592 | Sec. 4.1.2 | 0.123 | 1.755 | 0.061 |
| \(\mathcal{M}_{\text{F},1}\) | FNO-based | 1 | 721 | Sec. 4.2.1 | 0.242 | 1.886 | 0.123 |
| \(\mathcal{M}_{\text{F},9}\) | FNO-based | 9 | 776 | Sec. 4.2.2 | 0.153 | 1.381 | 0.077 |

Table 1: Reconstruction results for the U-Net-based models \(\mathcal{M}_{\text{U},n_{\text{s}}}\) and FNO-based models \(\mathcal{M}_{\text{F},n_{\text{s}}}\) trained with either one or multiple radar snapshots \(n_{\text{s}}\) in each sample's input. The three error columns (nL2, \(\frac{\text{nL2}_{\text{shad}}}{\text{nL2}_{\text{vis}}}\), SSP) are mean values across the \(N_{\text{test}}=624\) test set samples.
total radar-illuminated surface \(\mathbf{x}_{i}\) in the top panel. The corresponding surface reconstruction \(\mathbf{\hat{y}}_{i}\) generated by \(\mathcal{M}_{\mathrm{U,1}}\) in the bottom panel closely approximates the true wave elevation \(\mathbf{y}_{i}\), as evidenced by the sample-specific error of \(\mathrm{nL2}_{i}=0.152\) or \(\mathrm{SSP}_{i}=0.076\). In contrast, the second sample in Figure 6b with the same \(L_{\mathrm{p}}=180\,\mathrm{m}\) but increased \(\epsilon=0.10\) shows \(71.5\%\) of the spatial \(r\)-domain being affected by shadowing modulation causing zero-valued intensities. This results in a high reconstruction error of \(\mathrm{nL2}_{i}=0.541\) or \(\mathrm{SSP}_{i}=0.311\). Particularly the shadowed areas seem to contribute to the poor reconstruction, as their error is \(2.69\) times higher than in the visible areas, indicated by \(\frac{\mathrm{nL2}_{\mathrm{shad}_{i}}}{\mathrm{nL2}_{\mathrm{vis}_{i}}}\).
#### 4.1.2 U-Net using spatio-temporal radar data
To improve the reconstruction quality of the U-Net-based architecture, especially for high wave steepness, we took inspiration from classical spectral-analysis- and optimization-based reconstruction approaches (cf. Borge et al., 2004; Wu, 2004). These approaches use spatio-temporal radar data by considering temporal sequences of \\(n_{\\mathrm{s}}\\) historical radar snapshots for reconstruction. Thus, we use multiple historical radar snapshots \\(n_{\\mathrm{s}}\\) that satisfy Criterion 3 with \\(\\Delta t_{\\mathrm{r}}=1.3\\,\\mathrm{s}\\) for each input sample \\(\\mathbf{x}_{i}\\in\\mathbb{R}^{512\\times n_{\\mathrm{s}}}\\), while the outputs remain single snapshots \\(\\mathbf{y}_{i}\\in\\mathbb{R}^{n_{\\mathrm{r}}\\times 1}\\) at the respective last time instant \\(t_{\\mathrm{s}}\\). We conducted \\(14\\) additional training runs of the same architecture using all \\(N_{\\mathrm{train}}=2496\\) input-output samples of the training set, but with increasing \\(n_{\\mathrm{s}}\\) in the inputs \\(\\mathbf{x}_{i}\\), to determine the best number of snapshots \\(n_{\\mathrm{s}}\\). This procedure is summarized in the boxplot in Figure 7.
The boxplot shows that the model's mean performance across the entire test set improves significantly up to a value of \(n_{\mathrm{s}}=10\), confirming Hypothesis 3, as the reconstruction quality improves by incorporating multiple radar snapshots in the input. Moreover, the sample-specific error values \(\mathrm{nL2}_{i}\) become less scattered around the mean value. The model \(\mathcal{M}_{\mathrm{U,10}}\), determined by the boxplot analysis, achieves a final mean reconstruction performance of \(\mathrm{nL2}=0.123\) or \(\mathrm{SSP}=0.061\) across the \(N_{\mathrm{test}}=624\) test set samples, as shown in Table 1, now satisfying Criterion 2 of \(\mathrm{SSP}\leq 0.10\), and thus confirms Hypothesis 1. In addition, it yields a lower ratio of \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}=1.755\) compared to \(2.679\) for model \(\mathcal{M}_{\mathrm{U,1}}\), indicating a more balanced
Figure 6: Two samples from the test set described by the same wavelength \\(L_{\\mathrm{p}}=180\\,\\mathrm{m}\\), but different steepness \\(\\epsilon\\), reconstructed by the U-Net-based architecture \\(\\mathcal{M}_{\\mathrm{U,1}}\\). (a) Small \\(\\epsilon\\) values cause minor impact of the shadowing modulation in the radar input and allow accurate reconstructions. (b) Larger \\(\\epsilon\\) create more extensive shadowed areas and cause higher reconstruction errors.
reconstruction between shadowed and visible areas on average. Moreover, the model does not exhibit early overfitting anymore, achieving the best performance after 592 epochs, shown in Figure B.15b.
Figure 8 further confirms the improvement of the reconstruction using \(n_{\mathrm{s}}=10\) radar snapshots in each input to train \(\mathcal{M}_{\mathrm{U,10}}\), by depicting the same two exemplary test set samples previously reconstructed by \(\mathcal{M}_{\mathrm{U,1}}\) in Figure 6. The top panels display the most recent (\(t_{\mathrm{s}}\)) radar snapshot present in \(\mathbf{x}_{i}\in\mathbb{R}^{n_{r}\times n_{\mathrm{s}}}\) in the darkest shading and preceding snapshots at \(t_{j}=\{t_{\mathrm{s}}-j\Delta t_{\mathrm{r}}\}_{j=0,\ldots,n_{\mathrm{s}}-1}\) in increasingly lighter shades. Compared to Figure 6, the sample with small \(\epsilon=0.01\) in Figure 8a experiences only a slight reduction in reconstruction error, while the sample with \(\epsilon=0.10\) in Figure 8b exhibits a substantial reduction to around one-third of the previous sample-specific \(\mathrm{nL2}_{i}\) or \(\mathrm{SSP}_{i}\) value. The improved performance seems mainly attributable to the enhanced reconstruction of shadowed areas.
Figure 8: Two samples from the test set described by the same wavelength \(L_{\mathrm{p}}=180\) m, but different wave steepness \(\epsilon\), reconstructed by the U-Net-based architecture trained with \(n_{\mathrm{s}}=10\) historical snapshots in the radar input \(\mathcal{M}_{\mathrm{U,10}}\). Compared to \(\mathcal{M}_{\mathrm{U,1}}\), a strong reconstruction improvement is observed, especially for the sample with high \(\epsilon=0.10\) in (b).
Figure 7: Boxplot depicting the error distribution on test set, depending on the number of historical radar snapshots \\(n_{\\mathrm{s}}\\) provided to train U-Net-based architectures \\(\\mathcal{M}_{\\mathrm{U,n_{s}}}\\). The best model performance is achieved for \\(n_{\\mathrm{s}}=10\\).
### Performance of the FNO-based model
The U-Net-based model \\(\\mathcal{M}_{\\mathrm{U,10}}\\) already supported Hypothesis 1 and Hypothesis 3 by demonstrating the potential to reconstruct wave surface elevations from radar data in general and improving the reconstruction quality by including additional historical radar data in the input. However, we also hypothesized that the FNO-based architecture may outperform CNN-based methods, such as the U-Net, due to its global inductive bias (Hypothesis 2), which may be beneficial for the wave data structure. To investigate this, we again use the entire set of \\(N_{\\mathrm{train}}=2496\\) samples to train FNO-based models \\(\\mathcal{M}_{\\mathrm{F},n_{\\mathrm{s}}}\\) with \\(n_{\\mathrm{s}}=1\\) radar snapshot in each of the inputs \\(\\mathbf{x}_{i}\\in\\mathbb{R}^{n_{\\mathrm{r}}\\times n_{\\mathrm{s}}}\\) first. Subsequently, we determine the number \\(n_{\\mathrm{s}}>1\\) to achieve the best reconstruction performance. Both investigations again are conducted on the entire domain of \\(1792\\,\\mathrm{m}\\) (\\(n_{r}=512\\)) and we compare true and reconstructed elevations \\(\\mathbf{y}_{i}\\) and \\(\\mathbf{\\hat{y}}_{i}\\in\\mathbb{R}^{n_{\\mathrm{r}}\\times 1}\\) of two exemplary samples.
#### 4.2.1 FNO using single-snapshot radar data
The FNO-based model \\(\\mathcal{M}_{\\mathrm{F,1}}\\) trained with \\(n_{\\mathrm{s}}=1\\) snapshot in each input, attains its best performance \\(\\mathrm{nL2}=0.240\\) after \\(721\\) training epochs, as shown Table 1 and demonstrated in the loss curve in Figure (a)a. Although the corresponding mean \\(\\mathrm{SSP}=0.123\\) across all \\(N_{\\mathrm{test}}=624\\) samples in the test set does not attain the Criterion 2, the error still presents a notable improvement compared to the SSP value of \\(0.171\\) previously obtained by the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U,1}}\\). Moreover, \\(\\mathcal{M}_{\\mathrm{F,1}}\\) not only reduces the mean nL2 or SSP error but also reconstructs the waves more uniformly between shadowed and visible areas compared to \\(\\mathcal{M}_{\\mathrm{U,1}}\\). This is evident by the decrease in the mean \\(\\frac{\\mathrm{nL2sh}}{\\mathrm{nL2vis}}\\)-ratio from \\(2.679\\) to \\(1.886\\).
This improved wave reconstruction can be illustrated by comparing the reconstructions of the same two exemplary test set samples generated by \(\mathcal{M}_{\mathrm{F,1}}\) in Figure 9 with those by \(\mathcal{M}_{\mathrm{U,1}}\) in Figure 6. As depicted in Figure 9a, the sample-specific \(\mathrm{nL2}_{i}\) or \(\mathrm{SSP}_{i}\) metrics are only slightly improved, but the ratio \(\frac{\mathrm{nL2}_{\mathrm{shad}_{i}}}{\mathrm{nL2}_{\mathrm{vis}_{i}}}\) is substantially smaller than observed using \(\mathcal{M}_{\mathrm{U,1}}\) before. These observations are even more pronounced for the sample with high \(\epsilon\) in Figure 9b. The \(\mathcal{M}_{\mathrm{F,1}}\) reduces the error in terms of \(\mathrm{nL2}_{i}\) or \(\mathrm{SSP}_{i}\) by almost half and also produces a more uniform reconstruction between shadowed and visible areas.
Figure 9: Two samples from the test set described by the same wavelength \\(L_{\\mathrm{p}}=180\\,\\mathrm{m}\\), but different wave steepness \\(\\epsilon\\) reconstructed by the FNO-based architecture \\(\\mathcal{M}_{\\mathrm{F,1}}\\). The \\(\\mathcal{M}_{\\mathrm{F,1}}\\) outperforms the \\(\\mathcal{M}_{\\mathrm{U,1}}\\) in reconstructing the shadowed areas, especially noticeable for the sample with large \\(\\epsilon=0.10\\) in (b).
#### 4.2.2 FNO using spatio-temporal radar data
Although the FNO-based model \(\mathcal{M}_{\mathrm{F},1}\) outperforms the U-Net-based model \(\mathcal{M}_{\mathrm{U},1}\), it does not achieve the desired reconstruction quality of SSP \(\leq 0.10\) (Criterion 2). To enhance the model performance, we analyze the effect of including multiple historical snapshots in each input \(\mathbf{x}_{i}\in\mathbb{R}^{512\times n_{\mathrm{s}}}\) for the training of this architecture. Again, 14 additional training runs were conducted, each with an increasing number of \(n_{\mathrm{s}}\). The results, depicted in Figure 10, demonstrate an initial improvement in performance for the models \(\mathcal{M}_{\mathrm{F},n_{\mathrm{s}}}\) with increasing \(n_{\mathrm{s}}\), which is slightly less pronounced than that observed for the U-Net-based models \(\mathcal{M}_{\mathrm{U},n_{\mathrm{s}}}\) in Figure 7. The FNO-based models achieve the best performance for \(n_{\mathrm{s}}=9\) input snapshots, beyond which the mean error slightly increases.
According to Table 1, the model \(\mathcal{M}_{\mathrm{F},9}\) attains a mean performance of \(\mathrm{nL2}=0.153\) on the test set after 776 training epochs, as depicted by the loss curve in Figure B.16b. This error value corresponds to a mean SSP \(=0.076\), fulfilling Criterion 2 of SSP \(\leq 0.10\). However, in comparison to the U-Net-based model \(\mathcal{M}_{\mathrm{U},10}\), which achieved a final mean value of SSP \(=0.061\), the performance of \(\mathcal{M}_{\mathrm{F},9}\) measured in terms of \(\mathrm{nL2}\) or SSP is slightly inferior, even though in the single-snapshot case \(\mathcal{M}_{\mathrm{F},1}\) outperformed \(\mathcal{M}_{\mathrm{U},1}\). Nevertheless, compared to all investigated models, \(\mathcal{M}_{\mathrm{F},9}\) on average achieves the best reconstruction uniformity between shadowed and visible areas, indicated by a mean \(\frac{\mathrm{nL2}_{\mathrm{shad}}}{\mathrm{nL2}_{\mathrm{vis}}}=1.381\) on the test set.
Figure 11 shows the reconstructions \(\mathbf{\hat{y}_{i}}\) for the same two exemplary radar inputs \(\mathbf{x}_{i}\) from the test set used before, now generated by the trained FNO-based model \(\mathcal{M}_{\mathrm{F},9}\). Compared to \(\mathcal{M}_{\mathrm{F},1}\) in Figure 9, both samples experience an almost similar increase in reconstruction quality measured in terms of the sample-specific \(\mathrm{SSP}_{i}\) and \(\mathrm{nL2}_{i}\) errors. In addition, these values are comparable to those achieved by \(\mathcal{M}_{\mathrm{U},10}\) in Figure 8. However, for the sample with small \(\epsilon=0.01\) in Figure 11a, \(\mathcal{M}_{\mathrm{F},9}\) generates a more balanced reconstruction than \(\mathcal{M}_{\mathrm{U},10}\), as reflected by the reduction of \(\frac{\mathrm{nL2}_{\mathrm{shad}_{i}}}{\mathrm{nL2}_{\mathrm{vis}_{i}}}\) from 2.201 to 1.665 for this individual sample. For the higher-steepness sample in Figure 11b, the increase in reconstruction uniformity given by \(\frac{\mathrm{nL2}_{\mathrm{shad}_{i}}}{\mathrm{nL2}_{\mathrm{vis}_{i}}}\) is less significant but still present.
### Comparative discussion
The aforementioned visual observations described for Figures 6, 8, 9 and 11 have been limited to the examination of only two exemplary samples from the test set, both described by peak wavelength \\(L_{\\mathrm{p}}=180\\,\\mathrm{m}\\) and either steepness \\(\\epsilon=0.01\\) or \\(\\epsilon=0.10\\). To avoid any possible incidental observations, the generalization of the error values needs to be examined. This can be achieved by plotting sample-specific error values such as \\(\\mathrm{nL2}_{i}\\) against each sample's describing combination of peak wavelength \\(L_{\\mathrm{p}}\\) and steepness \\(\\epsilon\\) for all \\(N_{\\mathrm{test}}=624\\) test set samples reconstructed using the U-Net-based models \\(\\mathcal{M}_{\\mathrm{U},1}\\) and \\(\\mathcal{M}_{\\mathrm{U},10}\\) or the FNO-based models \\(\\mathcal{M}_{\\mathrm{F},1}\\) and \\(\\mathcal{M}_{\\mathrm{F},9}\\).
Figure 10: Boxplot depicting the error distribution on the test set, depending on the number of historical radar snapshots \\(n_{\\mathrm{s}}\\) provided to train FNO-based architectures \\(\\mathcal{M}_{\\mathrm{F},n_{\\mathrm{s}}}\\). The best model performance is achieved for \\(n_{\\mathrm{s}}=9\\). Afterwards, the errors slightly increase again.
#### 4.3.1 Discussion of overall reconstruction quality
Figure 12 illustrates the reconstruction error as the mean \\(\\text{nL2}_{i}\\) value across 4-5 samples available for each specific \\(L_{\\text{p}}\\)-\\(\\epsilon\\)-combination included in the test set. Additionally, red dots in the cell centers indicate the combinations that achieved a mean \\(\\text{SSP}_{i}\\leq 0.10\\) (Criterion 2).
Figure 12a confirms the findings presented in Section 4.1.1 for the U-Net-based model \(\mathcal{M}_{\text{U},1}\) trained with one radar snapshot (\(n_{\text{s}}=1\)) in each input \(\mathbf{x}_{i}\). The errors between the true \(\mathbf{y}_{i}\) and reconstructed wave output \(\mathbf{\hat{y}}_{i}\) increase with increasing steepness \(\epsilon\) and thus with increasing wave height. Moreover, we now observe that this effect occurs almost independently of the peak wavelength \(L_{\text{p}}\) of each sample. For samples described by \(\epsilon>0.02\), \(\mathcal{M}_{\text{U},1}\) fails to meet Criterion 2, as the corresponding errors exceed \(\text{SSP}_{i}\) values of \(0.10\). This is attributable to the geometrical radar imaging problem demonstrated in Figure 2, showing that the increase in wave height caused by increased \(\epsilon\) results in more and larger shadowed areas. Figure 13 demonstrates that the occurrence of shadowing mainly increases with increasing \(\epsilon\) and is less influenced by \(L_{\text{p}}\). While \(\epsilon=0.01\) on average causes only around \(10\%\) of each input \(\mathbf{x}_{i}\) to be affected by shadowing modulation, \(\epsilon=0.10\) causes approximately \(70-75\%\). This results in areas along the spatial range \(r\) containing zero-valued intensities that complicate the radar inversion task.
Understanding the challenges faced by model \(\mathcal{M}_{\text{U},1}\) in reconstructing shadowed areas requires revisiting the U-Net's local mode of operation, outlined in Section 3.1, and the exemplary radar input depicted in the upper panel of Figure 6b. Due to shadowing, numerous local areas exhibit zero intensities covering up to approximately \(200\,\text{m}\), especially at greater distances from the radar system. However, the kernels in the first convolutional layer with a kernel size of \(s_{\text{k}}=5\) only cover a domain of \(s_{\text{k}}\cdot\Delta r=17.5\,\text{m}\) while being shifted across the input feature map in a step-wise manner. While the U-Net's translational equivariance property is useful for translating radar intensities to wave surface elevations regardless of their spatial location, it thus also causes kernels to be shifted across large areas with zero input only, which cannot be processed in a meaningful way. Although the pooling layers subsequently reduce the dimension of feature maps, resulting
Figure 11: Two samples from the test set described by the same wavelength \\(L_{\\text{p}}=180\\,\\text{m}\\) but different wave steepness \\(\\epsilon\\), reconstructed by the FNO-based architecture trained with \\(n_{\\text{s}}=9\\) historical snapshots in the radar input (\\(\\mathcal{M}_{\\text{F},9}\\)). Compared to \\(\\mathcal{M}_{\\text{F},1}\\), a reconstruction improvement is visible for both samples. Moreover, the reconstruction quality over the entire \\(r\\)-domain is almost equivalent to the results of \\(\\mathcal{M}_{\\text{U},10}\\); especially for the small-steepness sample in (a), however, the error ratio between shadowed and visible areas is remarkably smaller using \\(\\mathcal{M}_{\\text{F},9}\\), which indicates the potential of a more uniform reconstruction.
Figure 12: Error surfaces generalizing the previous observations for the sample-specific errors \\(\\text{nL2}_{i}\\) of the four investigated models \\(\\mathcal{M}\\), depending on the \\(L_{\\text{p}}\\)-\\(\\epsilon\\)-combination of the samples from the test set. Red dots indicate parameter combinations that meet Criterion 2 of reconstruction errors \\(\\text{SSP}_{i}\\leq 0.10\\). The upper subplots illustrate the results of (a) the U-Net-based model and (b) the FNO-based model, both trained with only one radar snapshot (\\(n_{\\text{s}}=1\\)) in each input \\(\\mathbf{x}_{i}\\in\\mathbb{R}^{n_{r}\\times 1}\\). The same architectures were trained with multiple historical radar snapshots in each input \\(\\mathbf{x}_{i}\\in\\mathbb{R}^{n_{r}\\times n_{\\text{s}}}\\), as demonstrated in the lower subplots, where (c) shows the U-Net-based model trained with \\(n_{\\text{s}}=10\\) and (d) the FNO-based model trained with \\(n_{\\text{s}}=9\\).
Figure 13: Graphs visualizing the average proportion of each input \\(\\mathbf{x}_{i}\\) affected by shadowing modulation as a function of the samples' wave steepness values \\(\\epsilon=0.01-0.10\\), shown for the shortest, one medium and the longest peak wavelength \\(L_{\\text{p}}\\) occurring in the test set.
in an increased ratio of kernel size to feature size, the problem of radar inversion can be assumed to be based on the mapping of individual pixel values, known as low-level features. These features are learned in the early layers of a CNN-based network (Zeiler and Fergus, 2014). Accordingly, the initial stages of the U-Net-based architecture are more important for our task than for its original purpose of image segmentation (Ronneberger et al., 2015), which is based on mid- to high-level features extracted in the later layers. For this reason, we face problems applying \\(\\mathcal{M}_{\\mathrm{U,1}}\\) for reconstruction, as important kernels in the early layers receive a significant amount of sparse content of little value. Although increasing the kernel size \\(s_{\\mathrm{k}}\\) is a theoretically possible solution, doing so would compromise the U-Net's key local property. Moreover, when processing two-dimensional surfaces with 2D convolutional kernels in future research, it would result in a quadratic increase in the number of weights, leading to computational issues.
For this reason, the approach of providing \\(n_{\\mathrm{s}}=10\\) consecutive radar snapshots selected according to Criterion 3 for the training of the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U,10}}\\) in Section 4.1.2 accounts more effectively for the sparsity in the input data. The upper panel of Figure 8(b) demonstrated the presence of input information across the majority of the \\(r\\)-domain. The wave surfaces undergo shape variations while travelling towards the radar due to the differing phase velocities of their components caused by dispersion. This results in a different part of the wave surface being shadowed or visible at each time step and seems to allow the model to capture more information about the wave on average, as the reconstruction quality improves significantly compared to \\(\\mathcal{M}_{\\mathrm{U,1}}\\). We therefore infer that the spatial and temporal shifts of the additional radar intensities acquired at \\(t_{j}=t_{\\mathrm{s}}-j\\Delta t_{\\mathrm{r}}\\) for \\(j=0,\\ldots,n_{\\mathrm{s}}-1\\) can be compensated successfully. This may be attributed to the fact that each kernel applied to the input has its own channel for each snapshot, allowing for separate processing to counterbalance the shift first, followed by the addition of the results to one feature map utilized as part of the input for the next layer. The improved reconstruction observed for \\(\\mathcal{M}_{\\mathrm{U,10}}\\) is further supported by its performance generalization shown in Figure 12(c). Compared to Figure 12(a), the mean nL2 error is substantially smaller and the sample-specific reconstruction errors nL2\\({}_{i}\\) are more evenly distributed across the \\(L_{\\mathrm{p}}\\)-\\(\\epsilon\\)-space, resulting in a satisfactory SSP\\({}_{i}\\) value (Criterion 2) for almost all samples. Although there is still a slight increase in the error for samples with higher \\(L_{\\mathrm{p}}\\) and \\(\\epsilon\\), the proposed model \\(\\mathcal{M}_{\\mathrm{U,10}}\\) can accurately reconstruct samples with varying wave characteristics and degrees of shadowing, thus supporting Hypothesis 1 and Hypothesis 3.
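To illustrate how such a spatio-temporal input could be assembled, the sketch below stacks the snapshot at the reconstruction instant with its \\(n_{\\mathrm{s}}-1\\) predecessors; the function name and the assumption that consecutive rows of the series are spaced by \\(\\Delta t_{\\mathrm{r}}\\) are ours.

```python
import numpy as np

def assemble_input(radar_series, t_idx, n_s):
    """Stack the snapshot at time index t_idx with the n_s - 1 preceding
    ones into shape (n_r, n_s); column j holds the snapshot acquired at
    t_s - j * dt_r, assuming unit row spacing dt_r in radar_series."""
    cols = [radar_series[t_idx - j] for j in range(n_s)]
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(2)
series = rng.uniform(0, 1, size=(100, 512))   # 100 snapshots, n_r = 512 bins
x_i = assemble_input(series, t_idx=50, n_s=10)
print(x_i.shape)                               # (512, 10)
```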
Motivated by the inherent patterns in wave data and the successful application of the Fourier neural operator (FNO) to systems exhibiting certain periodic properties, we conducted a comparative analysis of the global inductive bias of this network architecture against the local inductive bias of the CNN-based U-Net. As discussed in Section 4.2.1, our observations indicate that the FNO-based model \\(\\mathcal{M}_{\\mathrm{F,1}}\\) trained with only one snapshot (\\(n_{\\mathrm{s}}=1\\)) outperforms the U-Net-based \\(\\mathcal{M}_{\\mathrm{U,1}}\\) in reconstructing shadowed areas in the input, as evidenced for example by comparing the reconstruction in Figure 9(b) to that in Figure 6(b). This observation generalizes to the entire test data set, as shown in Figure 12(b). Although the errors in the FNO error surface still increase with higher steepness \\(\\epsilon\\), and consequently with an increase in the percentage of shadowing according to Figure 13, the increase is much less severe than that obtained by \\(\\mathcal{M}_{\\mathrm{U,1}}\\) shown in Figure 12(a).
The improved ability of the FNO-based model \\(\\mathcal{M}_{\\mathrm{F,1}}\\) to reconstruct shadowed areas from a single-snapshot input can be attributed to its mode of operation outlined in Section 3.2. Although the latent representation \\(v_{0}\\) in Figure 5 is usually not explicitly known, we can infer that the layer \\(P\\) with \\(n_{\\mathrm{s}}=1\\) input nodes and \\(n_{\\mathrm{w}}\\) output nodes only performs linear transformations to each radar input \\(\\mathbf{x}_{i}\\). As the radar inputs exhibit kinks at the transitions from visible to shadowed areas, \\(v_{0}\\) will have similar characteristics along the range direction. These transitions result in peaks for specific wavenumbers \\(k\\) in the spectrum \\(F(k)\\). However, the desired wave outputs \\(\\mathbf{y}_{i}\\) of the training data samples possess smooth periodic properties, without peaks at the kink-related wavenumbers in \\(F_{\\mathbf{y}}(k)\\). Since the \\(R_{j}\\) matrices in the Fourier layers scale the radar input spectrum to the wave output spectrum, they learn small coefficients for the corresponding entries to reduce the peaks. Therefore, the FNO's global inductive bias, combined with the data structure of wave surfaces, can efficiently correct sparse, shadowed regions in spectral space, resolving the issue of insufficient local information for reconstruction that arises with the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U,1}}\\) in Euclidean space. It can thus also be stated that the FNO explicitly hard-encodes prior knowledge about physical wave properties through its network structure and can therefore be regarded as a _physics-guided design of architecture_ (cf. Willard et al., 2022; Wang and Yu, 2023) for our problem.
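To make this mechanism concrete, the following sketch implements a minimal 1D Fourier layer in the spirit of Li et al. (2020): the input spectrum is truncated to the first \\(n_{\\mathrm{m}}\\) modes and scaled by a learnable complex tensor \\(R\\) before transforming back. The channel width and initialization scale are illustrative assumptions, not the configuration trained in this work.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Minimal 1D Fourier layer: FFT -> multiply the first n_m modes by a
    learnable complex tensor R -> inverse FFT (cf. Li et al., 2020)."""
    def __init__(self, width, n_m):
        super().__init__()
        self.n_m = n_m
        scale = 1.0 / (width * width)
        self.R = torch.nn.Parameter(
            scale * torch.randn(width, width, n_m, dtype=torch.cfloat))

    def forward(self, v):                      # v: (batch, width, n_r)
        v_hat = torch.fft.rfft(v)              # (batch, width, n_r//2 + 1)
        out = torch.zeros_like(v_hat)
        out[..., :self.n_m] = torch.einsum(    # scale retained modes only
            "bim,iom->bom", v_hat[..., :self.n_m], self.R)
        return torch.fft.irfft(out, n=v.size(-1))

layer = SpectralConv1d(width=32, n_m=64)
print(layer(torch.randn(4, 32, 512)).shape)    # torch.Size([4, 32, 512])
```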
Despite the better performance of the FNO-based model \\(\\mathcal{M}_{\\mathrm{F},1}\\) compared to \\(\\mathcal{M}_{\\mathrm{U},1}\\) in reconstructing shadowed radar inputs, which already supports Hypothesis 2, the red dots in Figure 12(b) still reveal that most of the test set samples fail to meet Criterion 2 of SSP\\({}_{i}\\leq 0.10\\). However, this issue was resolved by training an FNO-based model \\(\\mathcal{M}_{\\mathrm{F},9}\\) with \\(n_{\\mathrm{s}}=9\\) historical radar snapshots in each input \\(\\mathbf{x}_{i}\\). This was demonstrated for the two test set examples in Figure 11 and is generalized in Figure 12(d). We observe from that figure that the slightly higher mean error across the entire test set of \\(\\mathcal{M}_{\\mathrm{F},9}\\) compared to \\(\\mathcal{M}_{\\mathrm{U},10}\\) is primarily caused by the individual errors nL2\\({}_{i}\\) of samples with low steepness \\(\\epsilon\\) or short wavelengths \\(L_{\\mathrm{p}}\\). It is worth noting that the observed minimal increase in errors for short wavelengths cannot be attributed to a truncation at an insufficient number of Fourier series modes \\(n_{\\mathrm{m}}\\) in the Fourier layers. In this work, \\(n_{\\mathrm{m}}\\) is set to 64 and the spectral representation is discretized by \\(\\Delta k=\\frac{2\\pi}{n_{r}\\cdot\\Delta r}=0.00351\\,\\mathrm{m}^{-1}\\). The highest peak wavenumber of \\(k_{\\mathrm{p}}=0.0785\\,\\mathrm{m}^{-1}\\) in our data set is reached for samples with \\(L_{\\mathrm{p}}=80\\,\\mathrm{m}\\). The spectral density around \\(k_{\\mathrm{p}}\\) has decayed almost completely at \\(k_{\\mathrm{fit}}=n_{\\mathrm{m}}\\cdot\\Delta k=0.2246\\,\\mathrm{m}^{-1}\\), such that no important wave components are filtered out, as visualized in Figure C.17. Therefore, the small unequal tendency in the error distribution achieved by \\(\\mathcal{M}_{\\mathrm{F},9}\\) in Figure 12(d) for samples described by different \\(L_{\\mathrm{p}}\\)-\\(\\epsilon\\)-combinations is likely caused by factors other than an unsuitable network hyperparameter \\(n_{\\mathrm{m}}\\). Moreover, we observed in the loss curve shown in Figure 16(b) that further training for more than 800 epochs could potentially improve the model's performance, whereas the best performance on the test set for \\(\\mathcal{M}_{\\mathrm{U},10}\\) seems to be reached already, as the model begins to overfit the training data, as depicted in Figure 15(a).
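The wavenumber bookkeeping of this paragraph can be checked in a few lines; the grid values \\(n_{r}=512\\) and \\(\\Delta r=3.5\\,\\mathrm{m}\\) are inferred from the quantities quoted in the text and should be treated as assumptions.

```python
import numpy as np

n_r, dr, n_m = 512, 3.5, 64        # grid assumed from s_k * dr = 17.5 m
dk = 2 * np.pi / (n_r * dr)        # spectral resolution: ~0.00351 1/m
k_filt = n_m * dk                  # FNO truncation wavenumber: ~0.225 1/m
k_p = 2 * np.pi / 80.0             # peak wavenumber for L_p = 80 m
print(f"dk={dk:.5f}, k_filt={k_filt:.4f}, k_p={k_p:.4f} (all in 1/m)")
assert k_filt > 2.5 * k_p          # truncation lies well beyond the peak
```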
#### 4.3.2 Discussion of reconstruction uniformity
So far, the generalization of the reconstruction quality has been evaluated based on sample-specific nL2\\({}_{i}\\) or SSP\\({}_{i}\\) values across the entire spatial \\(r\\)-domain. However, Table 1 indicates that the FNO-based model \\(\\mathcal{M}_{\\mathrm{F},9}\\) achieves a more uniform reconstruction between shadowed and visible areas. This is demonstrated by the mean ratio of \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad}}}{\\mathrm{nL2}_{\\mathrm{vis}}}=1.381\\) across all samples in the test set, while the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U},10}\\) still struggles with reconstructing shadowed areas, as inferred from its \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad}}}{\\mathrm{nL2}_{\\mathrm{vis}}}=1.755\\). Therefore, the \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad}}}{\\mathrm{nL2}_{\\mathrm{vis}}}\\)-ratio error distribution is displayed in Figure 14 for each test set sample based on their \\(L_{\\mathrm{p}}\\)-\\(\\epsilon\\)-combination. The model \\(\\mathcal{M}_{\\mathrm{U},10}\\) generates an error surface, shown in Figure 14(a), that exhibits broadly varying levels of reconstruction uniformity even for samples with neighbouring \\(L_{\\mathrm{p}}\\)-\\(\\epsilon\\)-combinations. In some cases, the reconstruction errors in shadowed areas exceed those in visible areas by more than 2.5 times. This undesired effect is much less pronounced for the FNO-based model \\(\\mathcal{M}_{\\mathrm{F},9}\\), as a comparison with Figure 14(b) reveals.
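For clarity, the sketch below shows how such a ratio could be computed for one sample from a shadow mask derived from the radar input; the toy signal and mask are our own illustration.

```python
import numpy as np

def nl2(y_true, y_pred):
    """Normalized L2 error over the selected points."""
    return np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true)

def shadow_ratio(y_true, y_pred, shadow_mask):
    """Ratio nL2_shad / nL2_vis for one sample, where shadow_mask marks
    range bins with zero radar intensity in the corresponding input."""
    return (nl2(y_true[shadow_mask], y_pred[shadow_mask])
            / nl2(y_true[~shadow_mask], y_pred[~shadow_mask]))

rng = np.random.default_rng(3)
y = np.sin(np.linspace(0, 20 * np.pi, 512))        # toy wave surface
mask = rng.random(512) < 0.4                       # toy shadow mask
y_hat = y + rng.normal(0, 0.05, 512) + mask * rng.normal(0, 0.05, 512)
print(f"nL2_shad / nL2_vis = {shadow_ratio(y, y_hat, mask):.3f}")
```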
Figure 14: Error surfaces depicting the ratio \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad},i}}{\\mathrm{nL2}_{\\mathrm{vis},i}}\\) between the reconstruction quality achieved on shadowed and visible areas, depending on the specific \\(L_{\\mathrm{p}}\\)-\\(\\epsilon\\)-combination of the samples from the test set. The individual cell entries display the mean ratio across the 4-5 samples available for each parameter combination. The uniformity of the reconstructions achieved by the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U},10}\\) in (a) is thus compared to the one achieved by the FNO-based model \\(\\mathcal{M}_{\\mathrm{F},9}\\) in (b).
#### 4.3.3 Final comparison
For a final evaluation, either the general reconstruction quality nL2 can be chosen as the main performance criterion, which in our case would argue for selecting the U-Net-based model \\(\\mathcal{M}_{\\mathrm{U,10}}\\), or the uniformity of the reconstruction indicated by \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad}}}{\\mathrm{nL2}_{\\mathrm{vis}}}\\), which would argue for the FNO-based model \\(\\mathcal{M}_{\\mathrm{F,9}}\\). This decision should be made based on the application case. If the ML-reconstructed wave surface is intended to be used as an initial condition for subsequent prediction with the HOS method, we would expect a more uniform reconstruction to represent a more physical result and, consequently, the FNO-based reconstruction to be less likely to negatively affect the subsequent wave prediction. Moreover, we observed that the global approach of the FNO-based models allows for a considerably more meaningful reconstruction of shadowed areas even with fewer historical radar snapshots \\(n_{\\mathrm{s}}\\) contained in each input \\(\\mathbf{x}_{i}\\). This is not necessarily the case for the U-Net-based models.
Besides, the FNO-based model \\(\\mathcal{M}_{\\mathrm{F,9}}\\) in this work allows for much faster inference than the U-Net-based \\(\\mathcal{M}_{\\mathrm{U,10}}\\), even though \\(\\mathcal{M}_{\\mathrm{F,9}}\\) is a custom implementation and contains more weights than \\(\\mathcal{M}_{\\mathrm{U,10}}\\), which uses standard layers from the PyTorch library that are, in addition, presumably optimized. More specifically, using the hardware specifications outlined in Section 3.3, our \\(\\mathcal{M}_{\\mathrm{F,9}}\\) generates a reconstruction \\(\\mathbf{\\hat{y}}_{i}\\) for an input sample \\(\\mathbf{x}_{i}\\) in an average time of \\(1.9\\cdot 10^{-3}\\,\\mathrm{s}\\), which is approximately 20 times faster than the average time of \\(3.7\\cdot 10^{-2}\\,\\mathrm{s}\\) required by \\(\\mathcal{M}_{\\mathrm{U,10}}\\) for the same task.
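A timing comparison of this kind can be reproduced with a simple wall-clock loop like the sketch below, shown for a toy CPU model; GPU measurements would additionally require device synchronization.

```python
import time
import torch

@torch.no_grad()
def mean_inference_time(model, x, n_runs=100):
    """Average wall-clock time per forward pass for one input batch x."""
    model.eval()
    model(x)                                   # warm-up pass
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    return (time.perf_counter() - start) / n_runs

toy = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU(),
                          torch.nn.Linear(512, 512))
print(f"{mean_inference_time(toy, torch.randn(1, 512)):.2e} s per sample")
```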
## 5 Conclusion
This work introduces a novel machine learning-based approach for the phase-resolved reconstruction of ocean wave surface elevations from sparse radar measurements. To evaluate the performance of our approach, we generate synthetic nonlinear wave surface data for a wide range of sea states and corresponding radar surface data by incorporating both tilt- and shadowing modulation mechanisms. Two neural network architectures based on the U-Net or the Fourier neural operator are trained, both provided with varying amounts of spatio-temporal radar surface measurement input.
Our results and discussion indicate that both models are capable of producing high-quality wave surface reconstructions with average errors below the threshold of \\(\\mathrm{SSP}=0.10\\) when trained with a sufficient number of \\(n_{\\mathrm{s}}=10\\) or \\(n_{\\mathrm{s}}=9\\) consecutive radar snapshots. Furthermore, both models generalize well across different sea states. On average, the U-Net-based model achieves slightly smaller errors across the entire spatial domain of each reconstructed wave sample, while the FNO-based model produces a more uniform wave reconstruction between areas that were shadowed and visible in the corresponding radar input. This observation is further confirmed by the edge case of instantaneous inversion, i.e. when the networks are trained with only a single radar snapshot in each input. The U-Net-based model's weakness in reconstructing shadowing-affected areas can be attributed to the local operation of the network architecture, where its small convolutional kernels do not receive processable information when shifted across shadowed input areas containing zero intensities only. The problem can be circumvented using the FNO-based network, which learns a global mapping between radar input and wave output in the Fourier space. This network structure already encodes prior physical knowledge about the periodic data structure apparent in ocean waves and is therefore possibly better suited for our use case.
Our findings suggest that the FNO-based network may offer additional advantages, especially concerning smaller training datasets and noisy input radar data. Furthermore, future research could delve into the reconstruction of two-dimensional ocean wave surfaces, as the FNO network can also be implemented using 2D-FFTs. However, due to the different propagation directions of the component waves in short-crested, two-dimensional sea states, we anticipate a potential degradation in the reconstruction performance compared to the one-dimensional scenario explored in this study. This performance degradation could be mitigated through appropriate countermeasures, such as employing FNOs with increased capacity, conducting longer training runs, or applying suitable regularization techniques.
Moreover, the current methodology solely relies on synthetic radar input and the corresponding wave output data. Although it can be presumed that the HOS method generates wave surfaces that exhibit a reasonable degree of realism, radar imaging mechanisms for marine X-band radar are not yet fully understood, such that state-of-the-art radar models are associated with higher uncertainties. Consequently, on the one hand, a model trained solely on synthetic radar-wave data pairs cannot be applied for inference on real-world radar data. On the other hand, the acquisition of real-world radar-wave sample pairs to train the neural networks is associated with high operational costs due to the necessity of deploying a dense grid of buoys for capturing wave snapshots. These data issues currently limit the application of the developed machine-learning-based reconstruction approach in real-world settings. In future research, we endeavour to tackle this issue along two avenues: first, we aim to enhance the realism of synthetic radar data models to improve their accuracy; alternatively, we intend to investigate the feasibility of physics-informed learning approaches as a tool to overcome the challenges associated with measuring real-world high-resolution wave output data.
## Appendix A Influence of neural network hyperparameters
To mitigate the high cost of obtaining a larger data set, a four-fold cross-validation approach with an independent test set was utilized for finding the network hyperparameters, as recommended for example by Raschka (2018). The data set of \\(N=3120\\) samples was divided into a fixed and independent test set comprising \\(20\\%\\) or \\(N_{\\text{test}}=624\\) samples, with the remaining \\(2496\\) samples partitioned into four equal-sized parts based on the governing sea state parameters \\((L_{\\text{p}},\\epsilon)\\) using a stratified data split technique to ensure equal representation of each wave characteristic in the resulting subsets. During each cross-validation step, one part with \\(N_{\\text{val}}=624\\) samples was used as the validation set, and the remaining three parts with \\(N_{\\text{train}}=1872\\) samples constituted the training set.
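A sketch of this splitting scheme is given below, using scikit-learn's stratified utilities on synthetic \\((L_{\\text{p}},\\epsilon)\\) labels; the library choice and label encoding are our assumptions, not necessarily the exact implementation used here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

rng = np.random.default_rng(4)
N = 3120
L_p = rng.choice(np.arange(80, 210, 10), size=N)
eps = rng.choice(np.round(np.arange(0.01, 0.11, 0.01), 2), size=N)
strata = np.array([f"{lp}-{e}" for lp, e in zip(L_p, eps)])  # (L_p, eps) label

idx = np.arange(N)
trainval_idx, test_idx = train_test_split(
    idx, test_size=0.2, stratify=strata, random_state=0)  # fixed test set (624)

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(skf.split(trainval_idx, strata[trainval_idx])):
    print(f"fold {fold}: {len(tr)} train / {len(va)} val samples")  # 1872 / 624
```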
Tables 2 and 3 present the results of the four-fold cross-validation hyperparameter studies for the U-Net- and FNO-based architectures. For both network types, the same fixed test set was excluded from this investigation. The metrics (nL2 and \\(\\frac{\\mathrm{nL2}_{\\mathrm{shad}}}{\\mathrm{nL2}_{\\mathrm{vis}}}\\)) are reported as averages across the four validation folds.
## Appendix B Loss curves
After determining appropriate hyperparameters for the U-Net-based and FNO-based models in Appendix A, the train and validation data from the four-fold cross-validation were merged. This combined data set was then used to train the models \\(\\mathcal{M}_{\\text{U},n_{\\text{s}}}\\) and \\(\\mathcal{M}_{\\text{F},n_{\\text{s}}}\\), with one radar snapshot in each sample's input (\\(n_{\\text{s}}=1\\)) or with either \\(n_{\\text{s}}=9\\) or \\(n_{\\text{s}}=10\\) historical radar snapshots in each input. The performance evaluation of these models was conducted on the previously excluded test set of \\(N_{\\text{test}}=624\\) samples. The loss curves depicted in Figures 15(a)-16(b) illustrate the model performance and the impact of different values of \\(n_{\\text{s}}\\) throughout the training epochs. Deviation between the train and test loss curves indicates overfitting, characterized by excessive adaptation to the training data and resulting in poor generalization to new samples. Consequently, the best models \\(\\mathcal{M}\\) were selected based on the lowest test loss within the 800 training epochs.
Figure 16: Loss curves for the training of the FNO-based model. Subfigure (a) depicts the loss of model \\(\\mathcal{M}_{\\text{F},1}\\) trained with one snapshot (\\(n_{\\text{s}}=1\\)) in the radar input, where the best performance of \\(\\text{nL2}=0.242\\) on the test set for model evaluation is reached after 721 epochs. Compared to the U-Net-based model \\(\\mathcal{M}_{\\text{U},1}\\), \\(\\mathcal{M}_{\\text{F},1}\\) does not seem to be susceptible to overfitting. Subfigure (b) depicts model \\(\\mathcal{M}_{\\text{F},9}\\) trained with \\(n_{\\text{s}}=9\\) instead, which increases performance, resulting in \\(\\text{nL2}=0.153\\) after 776 epochs of training. It can be expected that training beyond 800 epochs would further slightly increase the best performance on the test set.
Figure 15: Loss curves for the training of the U-Net-based model. Subfigure (a) depicts the loss of model \\(\\mathcal{M}_{\\text{U},1}\\) trained with one snapshot (\\(n_{\\text{s}}=1\\)) in the radar input, where the best performance of \\(\\text{nL2}=0.329\\) on the test set for model evaluation is reached after 150 epochs. Afterwards, the model tends to overfit the training data. Subfigure (b) depicts model \\(\\mathcal{M}_{\\text{U},10}\\) trained with \\(n_{\\text{s}}=10\\) instead, which strongly increases performance, resulting in \\(\\text{nL2}=0.123\\) after 592 epochs of training.
## Appendix C Visualization of spectral representation
During the investigations of the FNO models (see Figure 5), a concern arose regarding the chosen number of Fourier series modes \\(n_{\\mathrm{m}}=64\\) in the \\(R_{i}\\)-matrices, which might lead to the omission of significant frequency components in the wave data. To address this concern, we visualized the JONSWAP spectra employed to initialize the HOS wave simulation for one exemplary steepness value (\\(\\epsilon=0.08\\), since different \\(\\epsilon\\) values only scale the amplitude of the spectral density) and all peak wavelengths \\(L_{\\mathrm{p}}\\in\\{80,90,\\ldots,190,200\\}\\,\\mathrm{m}\\), each corresponding to a specific \\(\\omega_{\\mathrm{p}}\\) and \\(k_{\\mathrm{p}}\\). Based on the findings depicted and explained in Figure C.17, we conclude that this concern is unfounded.
## Acknowledgements
This work was supported by the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) [project number 277972093: Excitability of Ocean Rogue Waves].
## Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Declaration of Generative AI and AI- assisted technologies in the writing process
The manuscript was completely written by the authors. Once the manuscript was completed, the authors used ChatGPT in order to improve its grammar and readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Figure C.17: JONSWAP spectra used in the data generation for one exemplary steepness value \\(\\epsilon\\), but varying peak wavelengths \\(L_{\\mathrm{p}}=80-200\\,\\mathrm{m}\\). The shortest peak wavelength of \\(L_{\\mathrm{p}}=80\\,\\mathrm{m}\\) corresponds to the highest peak wavenumber of \\(k_{\\mathrm{p}}=0.079\\,\\mathrm{m}^{-1}\\). The filtering wavenumber of \\(k_{\\mathrm{fit}}=n_{\\mathrm{m}}\\cdot\\Delta k=64\\cdot 0.00351\\,\\mathrm{m}^{-1}=0.2246 \\,\\mathrm{m}^{-1}\\), which is indicated by the dotted red line and defined by the Fourier layers in this work, consequently does not truncate important wave components in our data set-up.
## References
* Airy (1849) Airy, G., 1849. Tides and Waves. Encyclopaedia metropolitana, John Joseph Griffin and Company.
* Asma et al. (2012) Asma, S., Sezer, A., Ozdemir, O., 2012. MLR and ANN models of significant wave height on the west coast of India. Computers and Geosciences 49, 231-237. doi:10.1016/j.cageo.2012.05.032.
* Battaglia et al. (2018) Battaglia, P.W., Hamrick, J.B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V.F., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., 2018. Relational inductive biases, deep learning, and graph networks. arXiv arXiv:1806.01261.
* Bertero et al. (2022) Bertero, M., Boccacci, P., de Mol, C., 2022. Introduction to Inverse Problems in Imaging. CRC Press.
* Blondel-Couprie (2009) Blondel-Couprie, E., 2009. Reconstruction et prevision deterministe de houle a partir de donnees mesures. Ph.D. thesis. Ecole Centrale de Nantes (ECN).
* Borge et al. (1999) Borge, J.C.N., Reichert, K., Dittmer, J., 1999. Use of nautical radar as a wave monitoring instrument. Coastal Engineering 37, 331-342. doi:10.1016/S0378-3839(99)00032-0.
* Borge et al. (2004) Borge, J.C.N., Rodríguez, G.R., Hessner, K., González, P.I., 2004. Inversion of marine radar images for surface wave analysis. Journal of Atmospheric and Oceanic Technology 21, 1291-1300. doi:10.1175/1520-0426(2004)021<1291:IOMRIF>2.0.CO;2.
* Bouws et al. (1985) Bouws, E., Gunther, H., Rosenthal, W., Vincent, C.L., 1985. Similarity of the wind wave spectrum in finite depth water: 1. spectral form. Journal of Geophysical Research 90, 975-986.
* Chen and Huang (2022) Chen, X., Huang, W., 2022. Spatial-temporal convolutional gated recurrent unit network for significant wave height estimation from shipborne marine radar data. IEEE Transactions on Geoscience and Remote Sensing 60, 1-11. doi:10.1109/TGRS.2021.3074075.
* Dankert and Rosenthal (2004) Dankert, H., Rosenthal, W., 2004. Ocean surface determination from x-band radar-image sequences. Journal of Geophysical Research: Oceans 109. doi:10.1029/2003JC002130.
* Deo et al. (2001) Deo, M., Jha, A., Chaphekar, A., Ravikant, K., 2001. Neural networks for wave forecasting. Ocean Engineering 28, 889-898. doi:10.1016/S0029-8018(00)00027-5.
* Desmars (2020) Desmars, N., 2020. Real-time reconstruction and prediction of ocean wave fields from remote optical measurements. Theses. Ecole centrale de Nantes.
* Desmars et al. (2021) Desmars, N., Hartmann, M., Behrendt, J., Klein, M., Hoffmann, N., 2021. Reconstruction of Ocean Surfaces From Randomly Distributed Measurements Using a Grid-Based Method. International Conference on Offshore Mechanics and Arctic Engineering Volume 6: Ocean Engineering. doi:10.1115/OMAE2021-62409.
* Desmars et al. (2022) Desmars, N., Hartmann, M., Behrendt, J., Klein, M., Hoffmann, N., 2022. Nonlinear Reconstruction and Prediction of Regular Waves. International Conference on Offshore Mechanics and Arctic Engineering Volume 5B: Ocean Engineering; Honoring Symposium for Professor Gunther F. Clauss on Hydrodynamics and Ocean Engineering. doi:10.1115/OMAE2022-78988.
* Desouky and Abdelkhalik (2019) Desouky, M.A., Abdelkhalik, O., 2019. Wave prediction using wave rider position measurements and narx network in wave energy conversion. Applied Ocean Research 82, 10-21. doi:10.1016/j.apor.2018.10.016.
* Dommermuth and Yue (1987) Dommermuth, D.G., Yue, D.K.P., 1987. A high-order spectral method for the study of nonlinear gravity waves. Journal of Fluid Mechanics 184, 267-288. doi:10.1017/S002211208700288X.
* Duan et al. (2020a) Duan, W., Ma, X., Huang, L., Liu, Y., Duan, S., 2020a. Phase-resolved wave prediction model for long-crest waves based on machine learning. Computer Methods in Applied Mechanics and Engineering 372, 113350. doi:10.1016/j.cma.2020.113350.
* Duan et al. (2020b) Duan, W., Yang, K., Huang, L., Ma, X., 2020b. Numerical investigations on wave remote sensing from synthetic x-band radar sea clutter images by using deep convolutional neural networks. Remote Sensing 12. doi:10.3390/rs12071117.
* Ducrozet et al. (2007) Ducrozet, G., Bonnefoy, F., Le Touzé, D., Ferrant, P., 2007. 3-D HOS simulations of extreme waves in open seas. Natural Hazards and Earth System Sciences 7, 109-122. doi:10.5194/nhess-7-109-2007.
* Eichinger et al. (2022) Eichinger, M., Heinlein, A., Klawonn, A., 2022. Surrogate convolutional neural network models for steady computational fluid dynamics simulations. Electronic Transactions on Numerical Analysis 56, 235-255.
* Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., Courville, A., 2016. Deep Learning. MIT Press. [http://www.deeplearningbook.org](http://www.deeplearningbook.org).
* Hasselmann et al. (1973) Hasselmann, K., Barnett, T., Bouws, E., Carlson, H., Cartwright, D., Enke, K., Ewing, J., Gienapp, H., Hasselmann, D., Kruseman, P., Meerburg, A., Muller, P., Olbers, D., Richter, K., Sell, W., Walden., H., 1973. Measurements of wind-wave growth and swell decay during the joint north sea wave project (jonswap). Erganzung zur Deutschen Hydrographischen Zeitschrift, Reine A (8) 1, 1-95.
* Hendrycks and Gimpel (2016) Hendrycks, D., Gimpel, K., 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR abs/1606.08415. arXiv:1606.08415.
* MTS/IEEE Washington, 1-7.
* Isola et al. (2016) Isola, P., Zhu, J., Zhou, T., Efros, A.A., 2016. Image-to-image translation with conditional adversarial networks. arXiv arXiv arXiv:1611.07004.
* James et al. (2018) James, S.C., Zhang, Y., O'Donncha, F., 2018. A machine learning framework to forecast wave conditions. Coastal Engineering 137, 1-10. doi:10.1016/j.coastaleng.2018.03.004.
* Kagemoto (2020) Kagemoto, H., 2020. Forecasting a water-surface wave train with artificial intelligence - a case study. Ocean Engineering 207, 107380. doi:10.1016/j.oceaneng.2020.107380.
* Kharif et al. (2009) Kharif, C., Slunyaev, A., Pelinovsky, E., 2009. Rogue Waves in the Ocean. 1 ed., Springer Berlin Heidelberg. doi:10.1007/978-3-540-88419-4_5.
* Kingma and Ba (2014) Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. doi:10.48550/ARXIV.1412.6980.
* Klein et al. (2020) Klein, M., Dudek, M., Clauss, G.F., Ehlers, S., Behrendt, J., Hoffmann, N., Onorato, M., 2020. On the deterministic prediction of water waves. Fluids 5. doi:10.3390/fluids5010009.
* Klein et al. (2022) Klein, M., Stender, M., Wedler, M., Ehlers, S., Hartmann, M., Desmars, N., Pick, M.A., Seifried, R., Hoffmann, N., 2022. Application of Machine Learning for the Generation of Tailored Wave Sequences. International Conference on OffshoreMechanics and Arctic Engineering Volume 5B: Ocean Engineering; Honoring Symposium for Professor Gunther F. Claus on Hydrodynamics and Ocean Engineering. doi:10.1115/OMAE2022-78601.
* Kollisch et al. (2018) Kollisch, N., Behrendt, J., Klein, M., Hoffmann, N., 2018. Nonlinear real time prediction of ocean surface waves. Ocean Engineering 157, 387-400. doi:10.1016/j.oceaneng.2018.03.048.
* Law et al. (2020) Law, Y., Santo, H., Lim, K., Chan, E., 2020. Deterministic wave prediction for unidirectional sea-states in real-time using artificial neural network. Ocean Engineering 195, 106722. doi:10.1016/j.oceaneng.2019.106722.
* LeCun et al. (1989) LeCun, Y., et al., 1989. Generalization and network design strategies. Connectionism in perspective 19, 18.
* Li et al. (2018) Li, H., Xu, Z., Taylor, G., Studer, C., Goldstein, T., 2018. Visualizing the loss landscape of neural nets. Advances in neural information processing systems 31.
* Li et al. (2020) Li, Z., Kovachki, N.B., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A.M., Anandkumar, A., 2020. Fourier neural operator for parametric partial differential equations. arXiv arXiv:2010.08895.
* Li et al. (2022) Li, Z., Peng, W., Yuan, Z., Wang, J., 2022. Fourier neural operator approach to large eddy simulation of three-dimensional turbulence. Theoretical and Applied Mechanics Letters 12.
* Liu et al. (2018) Liu, D., Wen, B., Liu, X., Wang, Z., Huang, T.S., 2018. When image denoising meets high-level vision tasks: A deep learning approach, in: Proceedings of the 27th International Joint Conference on Artificial Intelligence, AAAI Press. p. 842-848.
* Liu et al. (2022) Liu, Y., Zhang, X., Chen, G., Dong, Q., Guo, X., Tian, X., Lu, W., Peng, T., 2022. Deterministic wave prediction model for irregular long-crested waves with recurrent neural network. Journal of Ocean Engineering and Science doi:10.1016/j.joes.2022.08.002.
* Lunser et al. (2022) Lunser, H., Hartmann, M., Desmars, N., Behrendt, J., Hoffmann, N., Klein, M., 2022. The influence of characteristic sea state parameters on the accuracy of irregular wave field simulations of different complexity. Fluids 7. doi:10.3390/fluids7070243.
* Mitchell (1980) Mitchell, T.M., 1980. The need for biases in learning generalizations. Citeseer.
* Mohaghegh et al. (2021) Mohaghegh, F., Murthy, J., Alam, M.R., 2021. Rapid phase-resolved prediction of nonlinear dispersive waves using machine learning. Applied Ocean Research 117, 102920. doi:10.1016/j.apor.2021.102920.
* Morris et al. (1998) Morris, E., Zienkiewicz, H., Belmont, M., 1998. Short term forecasting of the sea surface shape. International shipbuilding progress 45, 383-400.
* Naaijen and Wijaya (2014) Naaijen, P., Wijaya, A., 2014. Phase resolved wave prediction from synthetic radar images, in: International Conference on Offshore Mechanics and Arctic Engineering, American Society of Mechanical Engineers. p. V08AT06A045.
* Neill and Hashemi (2018) Neill, S.P., Hashemi, M.R., 2018. In situ and remote methods for resource characterization, in: Fundamentals of Ocean Renewable Energy. Academic Press, pp. 157-191. doi:10.1016/B978-0-12-810448-4.00007-0.
* Niekamp et al. (2023) Niekamp, R., Niemann, J., Schroder, J., 2023. A surrogate model for the prediction of permeabilities and flow through porous media: a machine learning approach based on stochastic Brownian motion. Computational Mechanics 71, 563-581. doi:10.1007/s00466-022-02250-2.
* Ongie et al. (2020) Ongie, G., Jalal, A., Metzler, C.A., Baraniuk, R.G., Dimakis, A.G., Willett, R., 2020. Deep learning techniques for inverse problems in imaging. IEEE Journal on Selected Areas in Information Theory 1, 39-56.
* Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E.Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S., 2019. Pytorch: An imperative style, high-performance deep learning library. arXiv arXiv:1912.01703.
* Pathak et al. (2016) Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., Efros, A.A., 2016. Context encoders: Feature learning by inpainting, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, Los Alamitos, CA, USA. pp. 2536-2544. doi:10.1109/CVPR.2016.278.
* Pathak et al. (2022) Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K., et al., 2022. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. arXiv preprint arXiv:2202.11214.
* Peng et al. (2022) Peng, W., Yuan, Z., Wang, J., 2022. Attention-enhanced neural network models for turbulence simulation. Physics of Fluids 34, 025111. doi:10.1063/5.0079302.
* Perlin and Bustamante (2014) Perlin, M., Bustamante, M., 2014. A robust quantitative comparison criterion of two signals based on the sobolev norm of their difference. Journal of Engineering Mathematics 101, 115-124.
* Raschka (2018) Raschka, S., 2018. Model evaluation, model selection, and algorithm selection in machine learning. arXiv arXiv:1811.12808.
* Rashid et al. (2022) Rashid, M.M., Pittie, T., Chakraborty, S., Krishnan, N.A., 2022. Learning the stress-strain fields in digital composites using Fourier neural operator. iScience 25, 105452. doi:10.1016/j.isci.2022.105452.
* Ronneberger et al. (2015) Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Springer International Publishing, pp. 234-241. doi:10.1007/978-3-319-24574-4_28.
* Salcedo-Sanz et al. (2015) Salcedo-Sanz, S., Nieto Borge, J., Carro-Calvo, L., Cuadra, L., Hessner, K., Alexandre, E., 2015. Significant wave height estimation using svr algorithms and shadowing information from simulated and real measured x-band radar images of the sea surface. Ocean Engineering 101, 244-253. doi:10.1016/j.oceaneng.2015.04.041.
* Stender et al. (2023) Stender, M., Ohlsen, J., Geisler, H., Chabchoub, A., Hoffmann, N., Schlaefer, A., 2023. U\({}^{p}\)-net: a generic deep learning-based time stepper for parameterized spatio-temporal dynamics. Computational Mechanics. doi:10.1007/s00466-023-02295-x.
* Stoian et al. (2019) Stoian, A., Poulain, V., Inglada, J., Poughon, V., Derksen, D., 2019. Land cover maps production with high resolution satellite image time series and convolutional neural networks: Adaptations and limits for operational systems. Remote Sensing 11.
* Valenzuela (1978) Valenzuela, G.R., 1978. Theories for the interaction of electromagnetic and oceanic waves -- a review. Boundary-Layer Meteorology 13, 61-85. doi:10.1007/BF00913863.
* Vicen-Bueno et al. (2012) Vicen-Bueno, R., Lido-Muela, C., Nieto-Borge, J.C., 2012. Estimate of significant wave height from non-coherent marine radar images by multilayer perceptrons. EURASIP Journal on Advances in Signal Processing 2012. doi:10.1186/1687-6180-2012-84.
Wang, F., Eljarrat, A., Muller, J., Henninen, T.R., Erni, R., Koch, C.T., 2020. Multi-resolution convolutional neural networks for inverse problems. Scientific reports 10, 1-11.
* Wang and Yu (2023) Wang, R., Yu, R., 2023. Physics-guided deep learning for dynamical systems: A survey. arXiv:2107.01272.
* Wedler et al. (2022) Wedler, M., Stender, M., Klein, M., Ehlers, S., Hoffmann, N., 2022. Surface similarity parameter: A new machine learning loss metric for oscillatory spatio-temporal data. Neural Networks 156, 123-134. doi:10.1016/j.neunet.2022.09.023.
* Wedler et al. (2023) Wedler, M., Stender, M., Klein, M., Hoffmann, N., 2023. Machine learning simulation of one-dimensional deterministic water wave propagation. Ocean Engineering 284, 115222. doi:10.1016/j.oceaneng.2023.115222.
* Wen et al. (2022) Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., Benson, S.M., 2022. U-fno--an enhanced fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources 163, 104180. doi:10.1016/j.adwatres.2022.104180.
* West et al. (1987) West, B.J., Brueckner, K.A., Janda, R.S., Milder, D.M., Milton, R.L., 1987. A new numerical method for surface hydrodynamics. Journal of Geophysical Research: Oceans 92, 11803-11824. doi:10.1029/JC092iC11p11803.
* Wijaya et al. (2015) Wijaya, A., Naaijen, P., Andonowati, van Groesen, E., 2015. Reconstruction and future prediction of the sea surface from radar observations. Ocean Engineering 106, 261-270. doi:10.1016/j.oceaneng.2015.07.009.
* Willard et al. (2022) Willard, J., Jia, X., Xu, S., Steinbach, M., Kumar, V., 2022. Integrating scientific knowledge with machine learning for engineering and environmental systems. arXiv:2003.04919.
* Williamson et al. (2022) Williamson, J., Brigido, H., Ladeira, M., Souza, J.C.F., 2022. Fourier neural operator for image classification, in: 2022 17th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1-6. doi:10.23919/CISTI54924.2022.9820128.
* Wu (2004) Wu, G., 2004. Direct simulation and deterministic prediction of large-scale nonlinear ocean wave-field. Phd. thesis. Massachusetts Institute of Technology. Department of Ocean Engineering.
* Wu et al. (2020) Wu, M., Stefanakos, C., Gao, Z., 2020. Multi-step-ahead forecasting of wave conditions based on a physics-based machine learning (pbm) model for marine operations. Journal of Marine Science and Engineering 8. doi:10.3390/jmse8120992.
* Yan et al. (2022) Yan, B., Chen, B., Robert Harp, D., Jia, W., Pawar, R.J., 2022. A robust deep learning workflow to predict multiphase flow behavior during geological co2 sequestration injection and post-injection periods. Journal of Hydrology 607, 127542. doi:10.1016/j.jhydrol.2022.127542.
* Yang et al. (2021) Yang, Y., Gao, A.F., Castellanos, J.C., Ross, Z.E., Azizzadenesheli, K., Clayton, R.W., 2021. Seismic wave propagation and inversion with neural operators. The Seismic Record 1, 126-134.
* 2537. doi:10.1175/JPO-D-21-0280.1.
* You et al. (2022) You, H., Zhang, Q., Ross, C.J., Lee, C.H., Yu, Y., 2022. Learning deep implicit fourier neural operators (ifnos) with applications to heterogeneous material modeling. Computer Methods in Applied Mechanics and Engineering 398, 115296. doi:10.1016/j.cma.2022.115296.
* Zeiler and Fergus (2014) Zeiler, M.D., Fergus, R., 2014. Visualizing and understanding convolutional networks, in: Computer Vision - ECCV 2014, Springer International Publishing, Cham. pp. 818-833.
* Zhang et al. (2022) Zhang, J., Zhao, X., Jin, S., Greaves, D., 2022. Phase-resolved real-time ocean wave prediction with quantified uncertainty based on variational bayesian machine learning. Applied Energy 324, 119711. doi:10.1016/j.apenergy.2022.119711.
* Zhang et al. (2017) Zhang, K., Zuo, W., Gu, S., Zhang, L., 2017. Learning deep cnn denoiser prior for image restoration, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3929-3938.
# Transformer for Multitemporal Hyperspectral Image Unmixing
Hang Li, Qiankun Dong, Xueshuo Xie, Xia Xu*, Tao Li*, Zhenwei Shi
This work was supported by the National Natural Science Foundation of China under Grant 62125102, the Natural Science Foundation of Tianjin under Grant 23JC9X1001001, and the Natural Science Foundation of Tianjin under Grant 23JC9X101050. (_Corresponding authors: Xia Xu and Tao Li._)
Hang Li, Qiankun Dong and Tao Li are with the College of Computer Science, Nankai University, Tianjin 300071, China (e-mail: [email protected]; [email protected]; [email protected]).
Xueshuo Xie is with the Haihe Lab of ITAI, Tianjin 300350, China (e-mail: [email protected]).
Xia Xu is with the School of Computer Science and Technology, Tiangong University, Tianjin 300387, China (e-mail: [email protected]).
Zhenwei Shi is with the Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China (e-mail: [email protected]).
## I Introduction
Hyperspectral remote sensing images have a diverse range of applications and offer consecutive spectral information for Earth observation missions. However, the acquired hyperspectral images (HSIs) are often influenced by sensor limitations, atmospheric conditions, illumination variations, and other factors. Consequently, each pixel in the image may encompass mixed spectral information from multiple ground objects. Hyperspectral image unmixing is conducted to extract endmembers and abundances from HSIs, where endmembers represent pure spectra while abundances denote their respective proportions [1, 2]. According to the type of mixture, mixture models can be categorized into linear and nonlinear forms [3, 4]. The linear mixture model (LMM) is more straightforward to implement, as it assumes that the spectrum of each observed pixel can be accurately represented by a linear combination of the spectra of multiple ground objects [5, 6].
Due to the simplicity and efficiency of LMM, a lot of studies have been conducted utilizing this methodology, which can be broadly categorized into the following five domains, including geometrical, statistical, nonnegative matrix factorization, sparse regression, and deep learning methods [1, 7]. Geometrical methods usually use the simplex set or positive cone of the data to expand the study [8, 9, 10]. Statistical methods rely on parameter estimates and probability distributions for unmixing [11, 12]. Nonnegative matrix factorization methods estimate endmembers and abundances by factorizing the HSI into two low-rank matrices [13, 14, 15]. Sparse regression methods define unmixing as a linear sparse regression problem, which is usually studied by using a prior spectral library [16, 17]. Deep learning approaches primarily employ deep neural networks to extract feature from HSI for unmixing [18, 19, 20].
In recent years, deep learning methods have made remarkable achievements in the field of hyperspectral image unmixing [7, 18, 21]. As a classical neural network architecture, the autoencoder can map high-dimensional spectral data into a low-dimensional latent space [22]. The autoencoder cascade jointly considers noise and prior sparsity for unmixing through a cascade structure [23]. Adding constraints by modifying the loss function of the autoencoder is a commonly employed technique in unmixing [24]. Some studies exploit the ability of variational autoencoders to learn data distributions for unmixing [25, 26]. Other researchers use autoencoders to extract the spatial and spectral information of hyperspectral images for unmixing [19, 20, 27]. Spectral variability has also attracted extensive research [28, 29]. Since deep learning-based methods demonstrate significant efficacy in hyperspectral image unmixing, this paper likewise adopts a deep learning approach.
Current research primarily concentrates on single-phase hyperspectral image unmixing [4, 30]. However, considering the lengthy revisit period of satellites and their capacity to collect remote sensing data spanning extended durations, leveraging long-term series data proves more beneficial for surface change monitoring. Consequently, harnessing long-term series data for multitemporal hyperspectral image unmixing holds significant value. Compared to change detection between two time phases [31, 32, 33], multitemporal unmixing can make use of rich temporal information to explore the feature correlations of different time points and track the changes of endmembers and abundances. This is crucial for dynamic earth resource detection, land cover analysis, disaster prevention, and more [34]. Yet, accurately capturing endmember changes and jointly understanding temporal, spatial, and spectral data pose greater challenges in multitemporal hyperspectral image unmixing.
At present, statistical-based studies are being conducted on the unmixing of multitemporal hyperspectral images. A linear mixing model across time points is explored in [35]. An online unmixing algorithm, solved via stochastic approximation, is introduced in [36]. The alternating direction multiplier method is applied in [37]. KalmanEM employs parametric endmember estimation and Bayesian filtering [38]. A hierarchical Bayesian model optimizes spectral variability and outliers in [39]. A Bayesian model considering spectral variability addresses endmember reflectance changes in [11]. In [40], a spectral-temporal Bayesian unmixing method incorporates prior information to mitigate noise effects. An unsupervised unmixing algorithm based on a variational recurrent neural network is proposed in [41].
However, the above methods often require precise definition of prior distributions, which becomes difficult when there are significant changes in endmembers over time. The dynamic nature of land cover and surface conditions leads to varying endmember spectra, making it hard to specify accurate prior distributions for all scenarios. Furthermore, the use of Monte Carlo sampling to estimate posterior distributions in statistical methods adds computational complexity, especially in high-dimensional spaces. Generating samples from the posterior distribution involves iterative procedures, which become more computationally demanding with higher data dimensionality.
The transformer, known for its powerful attention mechanism, has been widely successful across various fields [34, 42, 43, 44]. In the realm of unmixing tasks, researchers have explored its potential. For instance, a transformer-based model introduced in [45] enhances spectral and abundance maps by capturing patch correlations. In [46], a window-based transformer convolutional autoencoder tackles unmixing. Another approach in [47] utilizes a double-aware transformer to exploit spatial-spectral relationships. Meanwhile, [48] proposes a transformer-based generator for spatial-spectral information. Moreover, a U-shaped transformer network [49] and methods using spatial or spectral attention mechanisms [50, 51, 52] are employed for hyperspectral image unmixing. However, these methods primarily focus on single-phase unmixing, lacking the capacity to incorporate time information, which leads to sub-optimal performance with multitemporal data. Furthermore, adapting to dynamic relationship changes between phases poses a challenge for traditional transformer models.
To address the challenge of effectively modeling multitemporal hyperspectral image unmixing and adaptively processing the dynamic changes between adjacent phases, we propose the **M**ultitemporal hyperspectral image **U**nmixing trans**Former** (**MUFormer**). MUFormer is an end-to-end model based on the transformer, which consists of two main modules: the Global Awareness Module (GAM) and the Change Enhancement Module (CEM). The GAM synthesizes a comprehensive understanding of the hyperspectral image sequence by computing attention weights for the spatial, temporal, and spectral dimensions from a global perspective. This allows for the fusion of multi-dimensional information across the entire image sequence. Conversely, the CEM focuses on capturing nuanced changes in endmember abundances between adjacent phases with high granularity. By assigning adaptive weights to different temporal phases, the sensitivity of the model to time dynamics is enhanced. Through the seamless integration of these modules, MUFormer effectively captures the rich multi-dimensional information in multitemporal hyperspectral images. It adapts to varying time intervals within multitemporal image sequences, thereby facilitating precise unmixing across different temporal contexts. The main contributions of this paper are as follows:
* We propose an end-to-end transformer-based multitemporal hyperspectral image unmixing framework, **MUFormer**, which achieves effective and efficient unmixing of multitemporal hyperspectral images.
* We propose a novel Change Enhancement Module to obtain feature information at different scales and highlight fine changes by multi-scale convolution of adjacent temporal hyperspectral images.
* We propose a Global Awareness Module to extract and deeply fuse multitemporal hyperspectral image features from a global perspective, which can better use multi-source domain information to promote the unmixing effect.
The remainder of this paper is organized as follows: Section II defines the multitemporal hyperspectral image unmixing task and introduces our proposed model framework together with the Change Enhancement Module and the Global Awareness Module. In Section III, we carry out experiments on one real dataset and two simulated datasets, and conduct ablation studies on the proposed modules. Finally, we summarize the overall work of this paper in Section IV.
## II Methodology
### _Multitemporal Hyperspectral Image Unmixing_
For multitemporal hyperspectral images, the data are represented as \\(\\mathbf{Y}\\in\\mathbb{R}^{\\mathrm{T}\\times\\mathrm{L}\\times\\mathrm{N}}\\), where \\(\\mathrm{T}\\) represents the number of time phases, \\(\\mathrm{L}\\) represents the number of bands of the hyperspectral image, and \\(\\mathrm{N}\\) represents the total number of hyperspectral image pixels. For the linear unmixing of multitemporal hyperspectral images, the formula for time \\(t\\) is as follows.
\\[\\mathbf{Y_{t}}=\\mathbf{M_{t}}\\mathbf{A_{t}}+\\varepsilon_{t} \\tag{1}\\]
where \\(\\mathbf{M_{t}}\\in\\mathbb{R}^{\\mathrm{L}\\times\\mathrm{P}}\\) represents the matrix of endmembers at time \\(t\\), \\(\\mathbf{A_{t}}\\in\\mathbb{R}^{\\mathrm{P}\\times\\mathrm{N}}\\) denotes the corresponding abundance matrix, and \\(\\varepsilon_{t}\\in\\mathbb{R}^{\\mathrm{L}\\times\\mathrm{N}}\\) represents additive random noise added to simulate real scenarios, where \\(\\mathrm{P}\\) denotes the number of endmembers. Eq. (1) can also be written in pixel-level form, such that the linear mixing model holds for every pixel at every time phase.
For the hyperspectral image at each time phase, the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) are satisfied.
\\[\\mathbf{A_{t}}\\geq\\mathbf{0},\\mathbf{1^{T}}\\mathbf{A_{t}}=\\mathbf{1^{T}} \\tag{2}\\]
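As a concrete illustration of (1) and (2), the following sketch generates a toy multitemporal dataset whose abundances satisfy the ANC and ASC by construction (a Dirichlet draw is one convenient way to achieve this); all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L, N, P = 5, 100, 64 * 64, 4            # phases, bands, pixels, endmembers

M = rng.uniform(0.0, 1.0, size=(T, L, P))  # per-phase endmember matrices M_t
A = rng.dirichlet(np.ones(P), size=(T, N)).transpose(0, 2, 1)  # (T, P, N)

assert np.all(A >= 0)                      # ANC: nonnegative abundances
assert np.allclose(A.sum(axis=1), 1.0)     # ASC: abundances sum to one

noise = 0.01 * rng.standard_normal((T, L, N))          # eps_t
Y = np.einsum("tlp,tpn->tln", M, A) + noise            # Y_t = M_t A_t + eps_t
print(Y.shape)                                         # (5, 100, 4096)
```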
In the multitemporal hyperspectral image unmixing task, the hyperspectral images in different time phases have spatial-temporal correlation, and the contribution of hyperspectral bands in each time phase to unmixing is not the same, so it is necessary to jointly consider the temporal-spatial-spectral information to design the unmixing model.
### _Overall structure_
In this section, we introduce the overall structure of the proposed MUFormer. The framework of the MUFormer model is shown in Fig. 1.
First, we input a sequence of hyperspectral images \\(\\mathbf{Y_{in}}\\) to the encoder, where \\(\\mathbf{Y_{in}}\\in\\mathbb{R}^{\\mathrm{T}\\times\\mathrm{L}\\times\\mathrm{H}\\times\\mathrm{W}}\\), and \\(\\mathrm{H}\\) and \\(\\mathrm{W}\\) represent the height and width of the images; the sequence represents hyperspectral images acquired at the same location at different time instants. The encoder is mainly composed of convolutional layers, BatchNorm layers, and LeakyReLU layers. In order to prevent the model from overfitting, we add a Dropout layer, which makes the neural network more robust and improves its generalization ability [45]. Compared with the transformer, the CNN structure has advantages in extracting local image features, and multitemporal feature maps containing richer semantic information can be obtained through multiple convolution operations [53, 54, 55]. At this point, the model outputs the multitemporal feature map \\(\\mathbf{Y_{conv}}\\), where \\(\\mathbf{Y_{conv}}\\in\\mathbb{R}^{\\mathrm{T}\\times\\mathrm{C}\\times\\mathrm{H}\\times\\mathrm{W}}\\), and \\(\\mathrm{C}\\) represents the number of channels retained after convolution; it is a hyperparameter that can be set. Compared with the hundreds of channels of the original hyperspectral image, the channels retained here contain the more important band information.
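A minimal sketch of such an encoder block is given below; the layer widths, kernel sizes and dropout rate are illustrative assumptions rather than the exact MUFormer configuration.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of the convolutional encoder: maps each phase of the input
    sequence (T, L, H, W) to a feature map with C channels."""
    def __init__(self, bands, channels, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.1), nn.Dropout2d(p_drop),
            nn.Conv2d(128, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1))

    def forward(self, y):                       # y: (T, L, H, W)
        return self.net(y)                      # (T, C, H, W)

enc = Encoder(bands=198, channels=64)
print(enc(torch.randn(5, 198, 64, 64)).shape)   # torch.Size([5, 64, 64, 64])
```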
Then, we send the obtained multitemporal feature map into the GAM. Exploiting the advantages of the transformer in sequence modeling, we divide the feature map into patch blocks of the same size to calculate the attention of each dimension and fuse them. Next, we use the proposed CEM to refine the changes between adjacent phases, and the processed feature maps are fed into a softmax to obtain the abundance map of each phase.
In the decoder part, we establish a linear decoder for each phase, consisting of a single linear layer whose weight is initialized using the VCA algorithm. Upon completion of training, the weight of the decoder represents the endmember matrix of that phase. The linear decoder thus realizes the multitemporal linear unmixing model.
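The per-phase decoder can be sketched as follows; `vca_endmembers` is a stand-in for the output of any VCA implementation, and the interface is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class LinearDecoder(nn.Module):
    """One linear decoder per phase: reconstructs spectra as M_t A_t.
    The (L x P) weight is the endmember matrix M_t, initialized by VCA."""
    def __init__(self, vca_endmembers: torch.Tensor):    # shape (L, P)
        super().__init__()
        L, P = vca_endmembers.shape
        self.lin = nn.Linear(P, L, bias=False)
        with torch.no_grad():
            self.lin.weight.copy_(vca_endmembers)        # VCA initialization

    def forward(self, abundances):       # (N, P) softmax outputs (ANC/ASC hold)
        return self.lin(abundances)      # (N, L) reconstructed pixel spectra

    @property
    def endmembers(self):                # after training: the estimated M_t
        return self.lin.weight.detach()
```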
### _Global Awareness Module_
In order to calculate temporal-spatial-spectral attention from a global perspective and to deeply integrate information from multiple dimensions, we design the GAM. This module mainly contains temporal attention, spatial attention, spectral attention, and feed-forward parts. The feature maps processed by the convolutional layers have redundant bands removed and contain more of the critical information. Following the design of ViT [42], the feature map sequence is divided into non-overlapping patches \\(\\mathbf{Y_{patch}}\\), where \\(\\mathbf{Y_{patch}}\\in\\mathbb{R}^{\\mathrm{p}\\times\\mathrm{p}\\times\\mathrm{C}}\\), \\(\\mathrm{p}\\) is the side length of the patch, and \\(\\mathrm{C}\\) is the number of channels produced by the encoder. We concatenate \\(\\mathbf{Y_{cls}}\\) with the original patch
Fig. 1: Illustration of the structure of the MUFormer model. Here, GAM stands for Global Awareness Module, and CEM stands for Change Enhancement Module, as shown in the lower left corner of the illustration.
matrix; \\(\\mathbf{Y_{cls}}\\) aggregates global features to avoid bias toward specific tokens in the sequence. We then add the position embedding vector \\(\\mathbf{Y_{pos}}\\in\\mathbb{R}^{\\mathrm{D}}\\), a learnable encoding of the temporal-spatial position of each patch. The resulting matrix \\(\\mathbf{Y^{{}^{\\prime}}}\\) is given in (3).
\\[\\mathbf{Y^{{}^{\\prime}}}=(\\mathbf{Y_{cls}}||\\mathbf{Y_{patch}})+\\mathbf{Y_{pos}} \\tag{3}\\]
After obtaining \\(\\mathbf{Y^{{}^{\\prime}}}\\), inspired by the work of [56], we treat the hyperspectral image sequence as frames with different time intervals and send them to the attention modules for processing. First, the sequence enters the temporal attention module, where \\(\\mathbf{Y^{{}^{\\prime}}}\\) is projected by linear layers into the three matrices \\(\\mathbf{Q},\\mathbf{K},\\mathbf{V}\\), representing query, key, and value respectively, with \\(\\mathbf{Q}\\in\\mathbb{R}^{\\mathrm{h_{dim}}\\times\\mathrm{h}}\\), where \\(\\mathrm{h_{dim}}\\) is the dimension of each head and \\(h\\) is the number of heads; the dimensions of \\(\\mathbf{K}\\) and \\(\\mathbf{V}\\) are the same as those of \\(\\mathbf{Q}\\). After obtaining these three matrices, the self-attention weights are calculated through the softmax function, as shown in (4).
\\[\\mathbf{attn}_{(\\mathrm{l},\\mathrm{t})}^{\\mathrm{time}}=\\mathrm{Softmax}\\left(\\frac{\\mathbf{q}_{(\\mathrm{l},\\mathrm{t})}}{\\sqrt{\\mathbf{D_{h}}}}\\cdot\\left[\\mathbf{k}_{(0,0)}\\;\\left\\{\\mathbf{k}_{(\\mathrm{l},\\mathrm{t^{\\prime}})}\\right\\}_{\\mathrm{t^{\\prime}}=1,\\ldots,\\mathrm{T}}\\right]\\right) \\tag{4}\\]
where \\(\\mathrm{l}\\) indexes the \\(\\mathrm{l}\\)th patch block and \\(\\mathrm{t}\\) indexes the hyperspectral image at time \\(\\mathrm{t}\\). By computing attention along the time dimension, we can assign suitable weights to patches at the same location but at different moments, which enables the model to identify regions that have changed over time. By allocating higher attention to patches that exhibit significant changes throughout the time series, the model can concentrate more effectively on these key areas of change. Similarly, for patches at different positions at the same time, we calculate the attention weights of the spatial dimension in the same way, as shown in (5).
\\[\\mathbf{attn}_{(\\mathrm{l},\\mathrm{t})}^{\\mathrm{space}}=\\mathrm{Softmax}\\left(\\frac{\\mathbf{q}_{(\\mathrm{l},\\mathrm{t})}}{\\sqrt{\\mathbf{D_{h}}}}\\cdot\\left[\\mathbf{k}_{(0,0)}\\;\\left\\{\\mathbf{k}_{(\\mathrm{l^{\\prime}},\\mathrm{t})}\\right\\}_{\\mathrm{l^{\\prime}}=1,\\ldots,\\mathrm{N}}\\right]\\right) \\tag{5}\\]
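A compact sketch of this divided temporal/spatial attention, in the spirit of (4) and (5), is shown below; it uses PyTorch's built-in multi-head attention, omits the cls-token handling for brevity, and its dimensions are illustrative assumptions.

```python
import torch.nn as nn

class DividedAttention(nn.Module):
    """Attention applied first along time (same patch location, different
    phases, eq. (4)) and then along space (same phase, different locations,
    eq. (5)), with residual connections."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (T, N, D) patch tokens
        xt = x.permute(1, 0, 2)           # (N, T, D): sequences over time
        xt = xt + self.time_attn(xt, xt, xt)[0]
        xs = xt.permute(1, 0, 2)          # (T, N, D): sequences over space
        xs = xs + self.space_attn(xs, xs, xs)[0]
        return xs
```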
In order to mitigate vanishing and exploding gradients in deep networks, we adopt residual connections [57]. The feature maps processed by temporal and spatial self-attention are fed into the forward module, which contains an MLP. Several such modules can be stacked in sequence; to prevent overfitting, the model depth is set to 2 here. Different channels in hyperspectral images may correspond to different ground object components or features, and some channels may be affected by noise, atmospheric interference, and other factors, resulting in low information quality. At the same time, different channels do not contribute equally to the unmixing task [58, 59]. For these reasons, the feature map is then sent to the spectral self-attention module. There, we apply average pooling and max pooling to the channels and generate the channel attention vector through fully connected layers, which adaptively adjusts the weight of each channel to highlight important channel information. This strengthens the focus on channels with higher unmixing contributions and improves unmixing accuracy. The spectral self-attention mechanism is calculated as shown in (6), where \\(\\mathbf{Y_{map}}\\) is the feature map output by the forward module and \\(\\sigma\\) is the sigmoid function. The schematic diagram of GAM is shown in Fig. 2.
\\[\\mathbf{Y^{\\prime\\prime}}=\\sigma(\\mathrm{MLP}(\\mathbf{Avgpool}(\\mathbf{Y_{map} })+\\mathrm{Maxpool}(\\mathbf{Y_{map}}))) \\tag{6}\\]
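Equation (6) can be realized by a small channel-attention module of the following form; the reduction ratio `r` is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Spectral (channel) attention per equation (6): a shared MLP applied to
    the sum of average- and max-pooled channel descriptors, then a sigmoid."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(),
            nn.Linear(channels // r, channels),
        )

    def forward(self, y_map):                    # (B, C, H, W)
        avg = y_map.mean(dim=(2, 3))             # average pooling over space
        mx = y_map.amax(dim=(2, 3))              # max pooling over space
        w = torch.sigmoid(self.mlp(avg + mx))    # per-channel weights in (0, 1)
        return y_map * w[:, :, None, None]       # reweight spectral channels
```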
Through the GAM, we dynamically assign weights to the significance of the time, space, and spectral dimensions,
Fig. 2: Global Awareness Module, Step1 represents the temporal self-attention calculation process, Step2 represents the spatial self-attention calculation process, Step3 represents the shared MLP module, and Step4 represents the spectral self-attention calculation process. Specifically, in Step1 and Step2, red represents the current query patch, blue represents the patches that participate in the attention computation, and yellow represents the patches that do not participate in the computation.
enabling more precise capture of complex features within image data. This holistic attention mechanism helps the model comprehend the temporal evolution of objects or phenomena, alongside their spatial distributions and spectral characteristics. Concurrently, the robustness of the model is enhanced, as it emphasizes crucial features while diminishing the impact of irrelevant or disruptive information. This makes our model well suited to the complex and fluctuating environmental conditions prevalent in hyperspectral images.
### _Change Enhancement Module_
Through the above GAM, we obtain temporal-spatial-spectral feature information from a global perspective. However, the acquisition time interval between hyperspectral images of different time phases is not fixed, so adaptively handling the feature changes between adjacent time phases is also an important issue. To solve this problem, we propose the CEM from a feature-level perspective; its structure is shown in the bottom left corner of Fig. 1. The input to the CEM is the pair of feature maps of two adjacent time phases after GAM processing, each of size \\(\\mathbb{R}^{\\mathrm{C}\\times\\mathrm{H}\\times\\mathrm{W}}\\); at this point the feature maps have already been reweighted by global attention. To accurately capture the difference between adjacent temporal hyperspectral images at the feature level, we perform multi-scale convolution on the feature map at time \\(\\mathrm{T}_{\\mathrm{k}+1}\\), with convolution kernel sizes \\(3\\times 3\\) and \\(7\\times 7\\). The feature map at time \\(\\mathrm{T}_{\\mathrm{k}}\\) is then subtracted from each convolved map. The difference maps are fed into a convolution layer with kernel size \\(1\\times 1\\) and processed by the \\(Sigmoid\\) function. The weight vector \\(\\mathbf{A}\\) is obtained by multiplying the resulting weights \\(\\mathbf{A1}\\) and \\(\\mathbf{A2}\\), and the final weight is obtained by scaling \\(\\mathbf{A}\\) by \\(\\alpha\\). We use this weight to modulate \\(\\mathbf{Y_{cls}}\\); the \\(\\mathbf{Y_{cls}}\\) token provides a high-level representation of the entire sequence that facilitates learning and inference, allowing the model to extract important global information from the input sequence rather than focusing only on local segments [60]. By modifying \\(\\mathbf{Y_{cls}}\\) with the output of the CEM, the model not only allocates global information weights through the attention mechanism but also uses a convolutional network to treat the change regions of adjacent time phases appropriately during local information extraction. Through this combination of "coarse tuning" (GAM) and "fine tuning" (CEM), we can reasonably model the changes of multitemporal features, so that our model can adaptively unmix multitemporal hyperspectral images.
\\[\\mathbf{A1}=\\sigma(\\mathrm{f}_{1\\times 1}(\\mathrm{f}_{3\\times 3}(\\mathbf{T}_{ \\mathbf{k}+1})-\\mathbf{T}_{\\mathbf{k}})),1\\leq\\mathrm{k}\\leq\\mathrm{T}-1 \\tag{7}\\]
\\[\\mathbf{A2}=\\sigma(\\mathrm{f}_{1\\times 1}(\\mathrm{f}_{7\\times 7}(\\mathbf{T}_{ \\mathbf{k}+1})-\\mathbf{T}_{\\mathbf{k}})),1\\leq\\mathrm{k}\\leq\\mathrm{T}-1 \\tag{8}\\]
\\[\\mathbf{Y_{cls}}=(\\mathbf{1}+\\alpha\\times(\\mathbf{A1}\\times\\mathbf{A2})) \\times\\mathbf{Y_{cls}} \\tag{9}\\]
where \\(\\mathrm{f}_{3\\times 3}\\) and \\(\\mathrm{f}_{7\\times 7}\\) denote convolutions with kernel sizes of 3 and 7, respectively, each followed by a BN layer and a LeakyReLU layer, \\(\\mathrm{f}_{1\\times 1}\\) denotes a convolutional layer with a kernel size of 1, \\(\\sigma\\) is the \\(Sigmoid\\) function, and \\(\\alpha\\) is a hyperparameter that can be set empirically. Performing multi-scale convolution on the feature map captures features at different scales: the smaller \\(3\\times 3\\) kernels help capture local details and edge features, while the larger \\(7\\times 7\\) kernels are better suited to capturing larger contextual information. By using convolution kernels at both scales, a more comprehensive multi-scale feature representation can be obtained. During feature fusion, the information loss that may occur at a single scale is reduced and the expressive ability of the model is improved. Moreover, the large-scale kernel can be more effective at removing noise and unimportant detail in the image, while the small-scale kernel helps retain important local features, which also suppresses noise to a certain extent. The ablation study on the multi-scale CEM can be seen in Section III.
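A minimal sketch of the CEM following (7)-(9) is given below; the channel count and the spatial pooling that maps the weight map onto the \\(\\mathbf{Y_{cls}}\\) token are illustrative assumptions, since the text leaves that mapping unspecified.

```python
import torch
import torch.nn as nn

def conv_block(channels: int, k: int) -> nn.Sequential:
    # f_{kxk}: convolution followed by BN and LeakyReLU, as described above
    return nn.Sequential(nn.Conv2d(channels, channels, k, padding=k // 2),
                         nn.BatchNorm2d(channels), nn.LeakyReLU(0.1))

class CEM(nn.Module):
    """Change Enhancement Module per equations (7)-(9)."""
    def __init__(self, channels: int = 64, alpha: float = 0.5):
        super().__init__()
        self.f3 = conv_block(channels, 3)
        self.f7 = conv_block(channels, 7)
        self.f1a = nn.Conv2d(channels, channels, 1)   # f_{1x1} producing A1
        self.f1b = nn.Conv2d(channels, channels, 1)   # f_{1x1} producing A2
        self.alpha = alpha

    def forward(self, t_k, t_k1, y_cls):              # maps: (B, C, H, W); y_cls: (B, D)
        a1 = torch.sigmoid(self.f1a(self.f3(t_k1) - t_k))   # equation (7)
        a2 = torch.sigmoid(self.f1b(self.f7(t_k1) - t_k))   # equation (8)
        a = (a1 * a2).mean(dim=(1, 2, 3))             # pool A1*A2 to a scalar (assumed)
        return (1.0 + self.alpha * a)[:, None] * y_cls      # equation (9)
```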
### _Loss Function_
In the design of the loss function, we choose a weighted sum of multiple loss terms as the total loss to better train our proposed model. Reconstruction loss and Spectral Angle Distance (SAD) loss are commonly used in unmixing tasks. In addition, it was demonstrated in [10] that the data simplex loss plays an important role in endmember estimation, so the endmembers can be further constrained by adding this term. The total training loss is given in (13) and includes \\(\\mathrm{L}_{\\mathrm{RE}}\\), \\(\\mathrm{L}_{\\mathrm{SAD}}\\) and \\(\\mathrm{L}_{\\mathrm{E}}\\), where \\(\\beta\\), \\(\\gamma\\) and \\(\\lambda\\) are manually set hyperparameters that balance the loss terms. We adopt the Adam algorithm for optimization and use a scheduler to dynamically adjust the learning rate.
\\[\\mathrm{L}_{\\mathrm{RE}}(\\mathbf{Y},\\hat{\\mathbf{Y}})=\\left(\\frac{1}{\\mathrm{H}\\cdot\\mathrm{W}\\cdot\\mathrm{T}}\\sum_{\\mathrm{i}=1}^{\\mathrm{H}}\\sum_{\\mathrm{j}=1}^{\\mathrm{W}}\\sum_{\\mathrm{t}=1}^{\\mathrm{T}}\\left(\\hat{\\mathbf{Y}}_{\\mathrm{ijt}}-\\mathbf{Y}_{\\mathrm{ijt}}\\right)^{2}\\right)^{\\frac{1}{2}} \\tag{10}\\]
\\[\\mathrm{L}_{\\mathrm{SAD}}(\\mathbf{Y},\\hat{\\mathbf{Y}})=\\frac{1}{\\mathrm{R}} \\frac{1}{\\mathrm{T}}\\sum_{\\mathrm{i}=1}^{\\mathrm{R}}\\sum_{\\mathrm{t}=1}^{ \\mathrm{T}}\\arccos\\left(\\frac{\\left\\langle\\mathbf{Y}_{\\mathrm{it}},\\hat{ \\mathbf{Y}}_{\\mathrm{it}}\\right\\rangle}{||\\mathbf{Y}_{\\mathrm{it}}||_{2}||\\hat{ \\mathbf{Y}}_{\\mathrm{it}}||_{2}}\\right) \\tag{11}\\]
\\[\\mathrm{L}_{\\mathrm{E}}=\\sum_{\\mathrm{t}=1}^{\\mathrm{T}}\\left\\|\\mathbf{E}_{\\mathrm{t}}-\\mathbf{m}_{\\mathrm{t}}\\mathbf{1}^{\\mathrm{T}}\\right\\|_{\\mathrm{F}}^{2} \\tag{12}\\]
\\[\\mathrm{L}=\\beta\\mathrm{L}_{\\mathrm{RE}}+\\gamma\\mathrm{L}_{\\mathrm{SAD}}+\\lambda \\mathrm{L}_{\\mathrm{E}} \\tag{13}\\]
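The loss terms can be sketched as follows; the hyperparameter values are placeholders, and the data simplex term of (12) is assumed to be computed separately and passed in.

```python
import torch

def loss_re(y, y_hat):
    """Equation (10): root-mean-square reconstruction error over H, W, T."""
    return torch.sqrt(torch.mean((y_hat - y) ** 2))

def loss_sad(y, y_hat, eps: float = 1e-8):
    """Equation (11): mean spectral angle; the band axis is last."""
    cos = (y * y_hat).sum(-1) / (torch.norm(y, dim=-1) * torch.norm(y_hat, dim=-1) + eps)
    return torch.arccos(cos.clamp(-1 + 1e-7, 1 - 1e-7)).mean()

def total_loss(y, y_hat, l_e, beta=1.0, gamma_=0.1, lam=1e-3):
    """Equation (13); l_e is the precomputed data simplex loss of (12),
    and beta, gamma_, lam are placeholder hyperparameter values."""
    return beta * loss_re(y, y_hat) + gamma_ * loss_sad(y, y_hat) + lam * l_e
```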
## III Experimental Results
To verify the effectiveness of MUFormer, we conducted experiments on one real dataset and two synthetic datasets. We compare our method with fully constrained least squares (FCLS), online unmixing (OU) [36], KalmanEM, which is based on Kalman filtering and an expectation-maximization strategy [38], DeepTrans, a transformer for single-phase unmixing [45], and ReSUDNN, a variational RNN for dynamic unmixing [41]. For the single-phase unmixing algorithms, we unmix the hyperspectral image of each phase separately and then merge the phases to obtain multitemporal results. The initial endmembers for the above methods are obtained by the VCA algorithm.
To accurately measure the experimental results, we choose two main evaluation metrics. The first is the normalized root mean square error (NRMSE), which we calculate for the abundance maps, the endmembers, and the reconstructed hyperspectral images. The second is the spectral angle mapper (SAM), which we compute for the endmembers. In (14)-(17), \\(\\mathbf{a_{n,t}}\\) represents the true abundance of the \\(\\mathrm{n}\\)th pixel at the \\(\\mathrm{t}\\)th time, \\(\\mathbf{\\hat{a}_{n,t}}\\) represents the estimated abundance, \\(\\mathbf{\\hat{M}_{n,t}}\\) represents the estimated endmember matrix, and \\(\\mathbf{m_{n,t,p}}\\) represents the \\(\\mathrm{p}\\)th endmember in the \\(\\mathrm{n}\\)th pixel at the \\(\\mathrm{t}\\)th time; similarly, \\(\\mathbf{\\hat{m}_{n,t,p}}\\) represents its estimate.
The MTHU task does not require excessive computing power; we use an i7-11800H CPU and an RTX 3060 GPU to complete it efficiently. For these experiments, we set the number of transformer blocks to 2, as overly deep networks cause overfitting or performance degradation. We set the number of attention heads to 8 and the number of training epochs to 1000, so that the model is fully trained.
\\[\\mathrm{NRMSE}_{\\mathbf{A}}=(\\frac{1}{\\mathrm{T}}{\\sum_{t=1}^{\\mathrm{T}}{ \\sum_{n=1}^{\\mathrm{N}}{\\frac{{\\left\\|{\\mathbf{a}_{n,t}-\\mathbf{\\hat{a}_{n,t}} \\right\\|}^{2}}}{{{\\left\\|{\\mathbf{a}_{t}}\\right\\|}^{2}}}}}})^{\\frac{1}{2}} \\tag{14}\\]
\\[\\mathrm{NRMSE}_{\\mathbf{M}}=(\\frac{1}{\\mathrm{NT}}{\\sum_{t=1}^{\\mathrm{T}}{ \\sum_{n=1}^{\\mathrm{N}}{\\frac{{\\left\\|{\\mathbf{M}_{n,t}-\\mathbf{\\hat{M}_{n,t}} \\right\\|}^{2}}_{\\mathbf{F}}}{{{\\left\\|{\\mathbf{M}_{n,t}}\\right\\|}^{2}_{\\mathbf{ F}}}}}})^{\\frac{1}{2}} \\tag{15}\\]
\\[\\mathrm{NRMSE}_{\\mathbf{Y}}=\\left(\\frac{1}{\\mathrm{T}}\\sum_{t=1}^{\\mathrm{T}}\\sum_{n=1}^{\\mathrm{N}}\\frac{\\left\\|\\mathbf{y}_{n,t}-\\hat{\\mathbf{M}}_{n,t}\\hat{\\mathbf{a}}_{n,t}\\right\\|^{2}}{\\left\\|\\mathbf{y}_{n,t}\\right\\|^{2}}\\right)^{\\frac{1}{2}} \\tag{16}\\]
\\[\\mathrm{SAM}_{\\mathbf{M}}=\\frac{1}{\\mathrm{TNP}}\\sum_{t=1}^{\\mathrm{T}}\\sum_{n=1}^{\\mathrm{N}}\\sum_{p=1}^{\\mathrm{P}}\\arccos\\left(\\frac{\\mathbf{m}_{n,t,p}^{\\mathrm{T}}\\hat{\\mathbf{m}}_{n,t,p}}{\\left\\|\\mathbf{m}_{n,t,p}\\right\\|\\left\\|\\hat{\\mathbf{m}}_{n,t,p}\\right\\|}\\right) \\tag{17}\\]
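For reference, minimal NumPy versions of (14) and (17) are sketched below; the tensor layouts are assumptions chosen for clarity.

```python
import numpy as np

def nrmse_a(a_true, a_hat):
    """Equation (14); abundance tensors shaped (T, P, N)."""
    T = a_true.shape[0]
    err = sum(np.sum((a_true[t] - a_hat[t]) ** 2) / np.sum(a_true[t] ** 2)
              for t in range(T))
    return np.sqrt(err / T)

def sam_m(m_true, m_hat, eps=1e-12):
    """Equation (17); per-pixel endmember tensors shaped (T, N, P, L)."""
    num = np.sum(m_true * m_hat, axis=-1)
    den = np.linalg.norm(m_true, axis=-1) * np.linalg.norm(m_hat, axis=-1) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))
```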
### _Dataset Description_
To accurately assess the proposed model, we evaluate it on three datasets, including one real dataset and two synthetic datasets, which are described below.
1. Lake Tahoe: The dataset was acquired by the airborne Visible Infrared Imaging Spectrometer (AVIRIS) between 2014 and 2015 and contains hyperspectral images of six time phases. The size of the hyperspectral image in each time phase is \\(150\\times 110\\) pixels, and there are 173 bands after removing the water absorption band. The image contains three types of endmembers: water, soil, and vegetation, each of which produces distinct changes over time [36]. This sequence of images is shown in Fig. 3.
2. Synthetic data 1: The dataset contains six temporal hyperspectral synthetic images, each of size \\(50\\times 50\\) pixels, with three endmembers; three signatures of \\(\\mathrm{L=224}\\) bands randomly sampled from the USGS library serve as the reference endmember matrix. To model endmember variability, the endmembers of each pixel are obtained by multiplying the reference signatures with a piecewise linear random scaling factor in the amplitude interval [0.85,1.15]. Local pixel mutations are added at \\(t\\in\\{2,3,4,5\\}\\) to better match the endmember changes of real scenes. To simulate realistic scenarios, Gaussian noise with an SNR of 30 dB was added [41].
3. Synthetic data 2: The second synthetic dataset contains 15 temporal hyperspectral images, each \\(50\\times 50\\) pixels in size. To introduce realistic spectral variability, the endmember signatures of each pixel and phase are randomly selected from manually extracted pure pixels of water, vegetation, soil, and road in the Jasper Ridge HI, which contains 198 bands. To simulate realistic scenarios, Gaussian noise with an SNR of 30 dB was added [41].
### _Lake Tahoe Results_
Due to the lack of ground-truth comparison results for the Lake Tahoe dataset, we present its abundance maps qualitatively; the results are shown in Fig. 4, and the multitemporal endmember estimation results are shown in Fig. 7. It can be seen that our proposed MUFormer achieves better results in the unmixing of this real multitemporal hyperspectral dataset. From the results, FCLS has the worst performance: endmembers are missing in multiple phases, and there is confusion between different endmembers. KalmanEM and OU perform similarly, but both show large errors in the abundance estimation of the water and soil endmembers. The Kalman filter struggles with large abundance changes and produces many artifacts, because it assumes the abundances vary continuously; when the abundance changes abruptly, its performance suffers. At the same time, to form a reasonable comparison, we add the DeepTrans model, which performs strongly on single-temporal hyperspectral image unmixing, as one of the comparison models. We can
Fig. 3: Lake Tahoe hyperspectral image sequence, acquisition time from left to right are 04/10/2014, 06/02/2014, 09/19/2014, 11/17/2014, 04/29/2015, 10/13/2015, respectively.
see that DeepTrans performs well in some individual temporal phases, but it does not consider the temporal information of multitemporal hyperspectral images, resulting in more erroneous unmixing regions. Finally, comparing our proposed method with ReSUDNN, both achieve good results on the whole and are meticulous in the edge processing of endmembers. However, ReSUDNN misjudges endmembers in some phases, especially in the estimation of soil and vegetation.
### _Synthetic Datasets Results_
Due to the lack of ground truth for Lake Tahoe, we can only qualitatively compare the abundance map results there. To compare unmixing accuracy more precisely, we also test on two synthetic datasets. We performed a quantitative comparison on the synthetic datasets, with results shown in Table I. The abundance maps for each model are shown in Fig. 8. Since synthetic dataset 2 contains 15 phases, we only compare the MUFormer-estimated abundance maps with the true ones, taking into account space constraints. The comparison results are shown in Fig. 9,
Fig. 4: Multitemporal abundance map of Lake Tahoe HIs, with water, soil, and vegetation endmembers from left to right.
Fig. 5: The endmember estimation results for synthetic data 1, where the first row represents the true endmember results and the second row is the result of our model.
so the effect is poor for the later moments, and there are cases of wrong estimation. On the whole, ReSUDNN and our method are closest to the true abundance maps. For long-time-sequence hyperspectral images, our model shows excellent results, because the transformer handles long sequence information well. At the same time, we combine the GAM to process the abundance transformation information of adjacent time phases, which further enhances our model's ability to process multitemporal hyperspectral images. In Tables I and II we compare the quantitative results of the individual algorithms; our proposed MUFormer achieves state-of-the-art results on the three indicators \\(\\mathbf{NRMSE_{A}}\\), \\(\\mathbf{NRMSE_{M}}\\) and \\(\\mathbf{SAM_{M}}\\). Although OU achieves good results on \\(\\mathbf{NRMSE_{Y}}\\), the reconstruction loss cannot explain the quality of abundance and endmember estimation, so its results on \\(\\mathbf{NRMSE_{A}}\\), \\(\\mathbf{NRMSE_{M}}\\) and \\(\\mathbf{SAM_{M}}\\) are poor, and the abundance maps it obtains are also affected by considerable noise. At the same time, our running speed is several times faster than that of KalmanEM and ReSUDNN. It is worth mentioning that synthetic dataset 2 contains 15 temporal phases, which is more challenging than synthetic dataset 1; our method shows a substantial improvement there, which highlights the potential of our model in processing long sequences of multitemporal hyperspectral images. In Fig. 5 and Fig. 6, we show the endmember estimation results for synthetic datasets 1 and 2. Our estimates are very close to the true endmembers, but we still need to further enhance the model's ability to perceive changes of the same endmember across different time phases.
### _Ablation study_
To fully verify the roles played by our proposed CEM and GAM in the model, we conducted an ablation study, with results shown in Table III. From Table III, we can see that model performance improves to varying degrees when the CEM and the GAM are added separately, and the model performs best when both modules are added together. This is because through the GAM we extract the spatial, global temporal, and spectral features of the multitemporal hyperspectral image, and combining this with the CEM to capture the changes between adjacent time phases is more conducive to handling multitemporal tasks. In addition, as mentioned above, we process images at different scales in the CEM. To explore the influence of processing at different scales on the unmixing effect, we carried out ablation experiments on the convolution kernel size, with results shown in Table IV. From Table IV, we can see that adding convolution modules of different scales to \\(cls\\) brings different
Fig. 9: The results of abundance estimation of the synthesized dataset 2. Comparison of the true abundance maps of four endmembers with the estimated abundance maps of the model, where each two-row abundance maps represent a set of comparison results for the same endmember.
degrees of improvement, and the fusion of features at different scales improves the robustness of the model. We compare the two fusion methods of addition and multiplication, and the multiplication of A1 and A2 has the more pronounced effect. We believe that multiplying them enables the model to pay more attention to the relationship between features at different scales, which helps capture the information in the image more comprehensively, especially in the presence of structures at multiple scales. Moreover, multiplication emphasizes the correlation between the two features and realizes weighted attention to different regions, making the model more flexible in learning different degrees of attention to different locations.
## IV Conclusion
This paper proposes MUFormer, a multitemporal hyperspectral image unmixing method. Unlike previous methods, we propose the Change Enhancement Module and the Global Awareness Module to extract image information along the three dimensions of time, space, and spectrum, and to dynamically adapt to the changes between adjacent phases from a deep learning perspective, using the transformer's strength in processing long sequence information. Results on multiple datasets show that our model achieves excellent endmember and abundance estimation for multitemporal hyperspectral images, and our method also has a great advantage in running speed. We believe the problems still to be solved for multitemporal hyperspectral image unmixing are the subtle endmember and abundance changes between different phases. Integrating a denoising module into the unmixing is also a feasible new direction, because we find that multitemporal hyperspectral images are more susceptible to noise during unmixing. We hope that our model can shed new light on the task of multitemporal hyperspectral image unmixing.
## Acknowledgments
We would like to acknowledge Ricardo Borsoi for providing the dataset and its accompanying documentation.
## References
* [1] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, "Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches," _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 5, no. 2, pp. 354-379, 2012.
[MISSING_PAGE_POST]
* . Lu, Z. Wu, Q. Du, J. Chanussot, and Z. Wei, "Bayesian unmixing of hyperspectral image sequence with composite priors for abundance and endmember variability," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-15, 2022.
* [37]J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot (2012) Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based* [22] Y. Su, A. Marinoni, J. Li, J. Plaza, and P. Gamba, \"Stacked nonnegative sparse autoencoders for robust hyperspectral unmixing,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 9, pp. 1427-1431, 2018.
* [23] L. Gao, Z. Han, D. Hong, B. Zhang, and J. Chanussot, \"Cycu-net: Cycle-consistency unmixing network by learning cascaded autoencoders,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-14, 2021.
* [24] Y. Qu and H. Qi, "uDAS: An untied denoising autoencoder with sparsity for spectral unmixing," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 57, no. 3, pp. 1698-1712, 2018.
* [25] S. Shi, M. Zhao, L. Zhang, and J. Chen, \"Variational autoencoders for hyperspectral unmixing with endmember variability,\" in _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2021, pp. 1875-1879.
* [26] K. Mantripragada and F. Z. Qureshi, "Hyperspectral pixel unmixing with latent dirichlet variational autoencoder," _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [27] T. Ince and N. Dobigeon, \"Spatial-spectral multiscale sparse unmixing for hyperspectral images,\" _IEEE Geoscience and Remote Sensing Letters_, 2023.
* [28] T. Uezato, N. Yokoya, and W. He, \"Illumination invariant hyperspectral image unmixing based on a digital surface model,\" _IEEE Transactions on Image Processing_, vol. 29, pp. 3652-3664, 2020.
* [29] R. A. Borsoi, T. Imbiriba, and J. C. M. Bermudez, "A data dependent multiscale model for hyperspectral unmixing with spectral variability," _IEEE Transactions on Image Processing_, vol. 29, pp. 3638-3651, 2020.
* [30] R. Li, B. Pan, X. Xu, T. Li, and Z. Shi, \"Towards convergence: A gradient-based multiobjective method with greedy hash for hyperspectral unmixing,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2023.
* [31] M. Wang, B. Zhu, J. Zhang, J. Fan, and Y. Ye, \"A lightweight change detection network based on feature interleaved fusion and bi-stage decoding,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2023.
* [32] J. Wang, Y. Zhong, and L. Zhang, \"Contrastive scene change representation learning for high-resolution remote sensing scene change detection,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [33] M. Hu, C. Wu, B. Du, and L. Zhang, \"Binary change guided hyperspectral multiclass change detection,\" _IEEE Transactions on Image Processing_, vol. 32, pp. 791-806, 2023.
* [34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, \"Attention is all you need,\" _Advances in neural information processing systems_, vol. 30, 2017.
* [35] S. Henrot, J. Chanussot, and C. Jutten, \"Dynamical spectral unmixing of multitemporal hyperspectral images,\" _IEEE Transactions on Image Processing_, vol. 25, no. 7, pp. 3219-3232, 2016.
* [36] P.-A. Thouvenin, N. Dobigeon, and J.-Y. Tourneret, \"Online unmixing of multitemporal hyperspectral images accounting for spectral variability,\" _IEEE Transactions on Image Processing_, vol. 25, no. 9, pp. 3979-3990, 2016.
* [37] J. Sigurdsson, M. O. Ulfarsson, J. R. Sveinsson, and J. M. Bioucas-Dias, "Sparse distributed multitemporal hyperspectral unmixing," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no. 11, pp. 6069-6084, 2017.
* [38] R. A. Borsoi, T. Imbiriba, P. Closas, J. C. M. Bermudez, and C. Richard, "Kalman filtering and expectation maximization for multitemporal spectral unmixing," _IEEE Geoscience and Remote Sensing Letters_, vol. 19, pp. 1-5, 2022.
* [39] P.-A. Thouvenin, N. Dobigeon, and J.-Y. Tourneret, \"A hierarchical bayesian model accounting for endmember variability and abrupt spectral changes to unmix multitemporal hyperspectral images,\" _IEEE Transactions on Computational Imaging_, vol. 4, no. 1, pp. 32-45, 2017.
* [40] R. Zhuo, Y. Fang, L. Xu, Y. Chen, Y. Wang, and J. Peng, \"A novel spectral-temporal bayesian unmixing algorithm with spatial prior for sentinel 2- time series,\" _Remote Sensing Letters_, vol. 13, no. 5, pp. 522-532, 2022.
* [41] R. A. Borsoi, T. Imbiriba, and P. Closas, "Dynamical hyperspectral unmixing with variational recurrent neural networks," _IEEE Transactions on Image Processing_, vol. 32, pp. 2279-2294, 2023.
* [42] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly _et al._, \"An image is worth 16x16 words: Transformers for image recognition at scale,\" _arXiv preprint arXiv:2010.11929_, 2020.
* [43] S. Jia, Y. Wang, S. Jiang, and R. He, \"A center-masked transformer for hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [44] C. Zhang, J. Su, Y. Ju, K.-M. Lam, and Q. Wang, \"Efficient inductive vision transformer for oriented object detection in remote sensing imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2023.
* [45] P. Ghosh, S. K. Roy, B. Koirala, B. Rasti, and P. Scheunders, "Hyperspectral unmixing using transformer network," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 60, pp. 1-16, 2022.
* [46] F. Kong, Y. Zheng, D. Li, Y. Li, and M. Chen, \"Window transformer convolutional autoencoder for hyperspectral sparse unmixing,\" _IEEE Geoscience and Remote Sensing Letters_, 2023.
* [47] Y. Duan, X. Xu, T. Li, B. Pan, and Z. Shi, \"Undat: Double-aware transformer for hyperspectral unmixing,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 61, pp. 1-12, 2023.
* [48] L. Wang, X. Zhang, J. Zhang, H. Dong, H. Meng, and L. Jiao, \"Pixel-to-abundance translation: Conditional generative adversarial networks based on patch transformer for hyperspectral unmixing,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 17, pp. 5734-5749, 2024.
* [49] Z. Yang, M. Xu, S. Liu, H. Sheng, and J. Wan, \"Ust-net: A u-shaped transformer network using shifted windows for hyperspectral unmixing,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2023.
* [50] X. Tao, M. E. Paoletti, Z. Wu, J. M. Haut, P. Ren, and A. Plaza, "An abundance-guided attention network for hyperspectral unmixing," _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [51] L. Qi, X. Qin, F. Gao, J. Dong, and X. Gao, \"Sawu-net: Spatial attention weighted unmixing network for hyperspectral images,\" _IEEE Geoscience and Remote Sensing Letters_, 2023.
* [52] B. Wang, H. Yao, D. Song, J. Zhang, and H. Gao, \"Ssf-net: A spatial-spectral features integrated autoencoder network for hyperspectral unmixing,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2023.
* [53] X. Chen, X. Zheng, Y. Zhang, and X. Lu, \"Remote sensing scene classification by local-global mutual learning,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 19, pp. 1-5, 2022.
* [54] J. Yue, L. Fang, and M. He, \"Spectral-spatial latent reconstruction for open-set hyperspectral image classification,\" _IEEE Transactions on Image Processing_, vol. 31, pp. 5227-5241, 2022.
* [55] R. Xu, X.-M. Dong, W. Li, J. Peng, W. Sun, and Y. Xu, \"Dbctnet: Double branch convolution-transformer network for hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [56] G. Bertasius, H. Wang, and L. Torresani, \"Is space-time attention all you need for video understanding?\" in _ICML_, vol. 2, no. 3, 2021, p. 4.
* [57] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770-778.
* [58] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, \"Cbam: Convolutional block attention module,\" in _Proceedings of the European conference on computer vision (ECCV)_, 2018, pp. 3-19.
* [59] M. Zhu, L. Jiao, F. Liu, S. Yang, and J. Wang, \"Residual spectral-spatial attention network for hyperspectral image classification,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 59, no. 1, pp. 449-462, 2020.
* [60] G. Zhao, Q. Ye, L. Sun, Z. Wu, C. Pan, and B. Jeon, "Joint classification of hyperspectral and lidar data using a hierarchical cnn and transformer," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 61, pp. 1-16, 2022.
arxiv-format/1502_03553v2.md | # An integrated quantum photonic sensor based on
Hong-Ou-Mandel interference
Sahar Basiri-Esfahani\\({}^{1}\\)1, Casey R. Myers\\({}^{1}\\), Ardalan Armin\\({}^{2}\\), Joshua Combes\\({}^{1,3}\\) and Gerard J. Milburn\\({}^{1}\\)
\\({}^{1}\\)ARC Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4072, Australia \\({}^{2}\\)Centre for Organic Photonics & Electronics (COPE), School of Mathematics and Physics and School of Chemistry and Molecular Biosciences, The University of Queensland, St Lucia, QLD 4072, Australia \\({}^{3}\\)Center for quantum information and control, University of New Mexico, Albuquerque, New Mexico 87131-0001, USA
Footnote 1: [email protected]
## I Introduction
Integrated photonics based on photonic crystal (PhC) structures provides a path to extremely small optical sensors with applications to biology [1; 2], chemistry [3] and engineering [4]. PhC devices with various geometries and structures, such as hollow core PhC fibers [5], 1D and 2D waveguides [6; 7] and nano-cavities [8; 9; 10; 11; 12], have been fabricated and used for sensing applications. Among these devices, PhC cavity-based sensors offer important advantages over PhC waveguide sensors since they can be made much smaller, thus reducing vulnerability to impurities and losses. Moreover, exploiting high-Q cavities with large mode volume is advantageous for sensors based on refractive index (RI) changes, for example in bio-pathogen detection [13], chemical sensing [14] and single particle detection [15]. All these schemes make use of second order interference of coherent states of the optical field and are thus essentially classical phenomena.
In parallel to integrated optical sensors, considerable progress has been made in using integrated optical systems for single photon optical quantum computing using linear optics, so called LOQC [16]. These schemes are enabled by the uniquely quantum-optical phenomenon of Hong-Ou-Mandel (HOM) interference [17]. This opens up a new perspective for optical quantum metrology that combines ideas from photonic crystal sensors with linear optical quantum information processing using single photon states. We make a first step in this direction by presenting a scheme that uses HOM interference to make sensors from optical pulses prepared in single photon states. As HOM interference does not arise for coherent optical pulses, our proposal is a true quantum metrology scheme and realizes the gain in sensitivity such schemes offer. Furthermore, it opens up the prospect of using LOQC protocols to construct more sophisticated quantum metrology protocols that are compatible with integrated optical systems.
Recent demonstrations of cutting edge sensors that exploit quantum mechanics have shown that they can outperform their classical counterparts in achieving higher sensitivities [18; 19; 20]. Many applications, e.g. biological sensing [21], require low power to preserve delicate samples that can be destroyed by photo-decomposition, photo-thermal effects, and photon pressure, for example. This requirement is in addition to the usual requirements of high input-output gain (responsivity), low noise and high bandwidth. In that regard, weak coherent light offers a route to low power sensing. However, the use of weak coherent pulses lowers a sensor's bandwidth. Consider for example a series of weak coherent pulses with on average one photon per pulse; in this case roughly 37% of pulses have no photons at all and 26% have more than one photon per pulse. Clearly the ultimate low pulse power limit is achieved by single photon pulses with exactly one photon per pulse. A sensor operating with single photon states offers low power suitable for deployment in _lab on a chip_ applications [22] and is compatible with attojoule all-optical switching [23] and opto-mechanical devices for strain sensors [24] and accelerometers [25].
The use of dual Fock states was proposed in 1993 in the context of quantum metrology to reduce the uncertainty of phase measurements [30]. In this article, we propose a novel scheme for a quantum photonic sensor based on coupled PhC cavities that exploits the HOM effect, shown in Fig. 1. The coupled PhC cavities form an \"effective beam splitter\" for two incident photons. The central idea is that a parameter to be estimated, call it \\(\\psi\\), modulates the coupling between the optical cavities, \\(g\\). This can be done by changing the distance between the cavities through compressing or stretching the dielectric material (e.g. for force and strain sensing) or by changing the refractive index of the media between the two cavities (e.g. for RI, temperature and single particle sensing). The change in \\(g\\) modifies the reflection and transmission of the effective beam splitter which changes the visibility of HOM interference. Therefore, by measuring the change in HOM visibility, we can sense the variation in \\(g\\) and thus estimate \\(\\psi\\). This scheme is independent of transmission/reflection spectra normally used for classical cavity-based sensors [31] and neither a dispersive element nor spectral resolution for the measurement is required.
First, we characterize the proposed sensor in terms of its performance metrics, the responsivity and minimum detectable value for the parameter to be estimated. This characterization in terms of the working parameters of the sensor is expressed in a general way, with no assumptions set for the values for the cavity damping rate, cavities coupling strength, PhC refractive index, etc. Then, a more specific example is provided by considering this scheme with previously reported experimental parameters for GaAs/AlGaAs PhC structures. We theoretically predict that such a system can measure one part per million change in refractive index as well as forces on the order of \\(10^{-7}\\) N. These results are not obtained by using experimental values specifically optimized for our scheme. However, the results obtained for refractive index and force sensing are promising for integrated on-chip sensing.
## II Hong-ou-mandel sensor
As HOM interference is a uniquely quantum mechanical phenomenon we must necessarily proceed with a full quantum description. Consider the double optical resonator scheme
Figure 1: Schematic of quantum PhC sensor. Coupled PhC resonators implement an effective beam splitter interaction between the input single photon fields resulting in HOM interference effect observed in detected output fields. \\(G^{(2)}(\\tau)\\) is a measure of the number of coincidences which is a function of the time shift between the photons entering the beam splitter. By compressing or stretching the distance between optical resonators or through changes in refractive index of the medium between the resonators, the coupling between the cavities changes. This results in a change in transmission and reflection of the beam splitter and therefore results in a change in the measured HOM visibility.
need not and should not be infinite as it sets the time interval between successive pulses. In fact the integration time needs to be of the order \\(\\tau_{\\rm rep}\\sim\\max\\{1/\\kappa,1/\\gamma\\}\\). In what follows we work in regimes where \\(\\frac{\\kappa}{\\gamma}>1\\), which is compatible with available experimental realizations, so we have \\(\\tau_{\\rm rep}\\sim 1/\\gamma\\). Through equations (1), (2) and (3) the explicit dependence of \\(G^{(2)}(\\tau)\\) on \\(g\\) can be seen. By monitoring changes in \\(G^{(2)}(\\tau)\\) we can infer changes in \\(g\\). In the ideal case, we would like to detect both photons.
In practice, either one or two photons could fail to be counted at the detectors due to optical losses, imperfect single photon sources or non-unit detection efficiency. The case of photon loss before detection (or detector inefficiency) can be modelled as usual [34] by inserting a beam splitter with transmissivity amplitude \\(t_{k}\\) in the path of the output fields \\(a_{k,{\\rm out}}(t)\\). The field that reaches an ideal detector is given by the transformation \\(a_{1,{\\rm out}}(t)\\to t_{k}a_{1,{\\rm out}}(t)+r_{k}v_{1,{\\rm out}}(t)\\) where \\(v(t)\\) is a vacuum field mode annihilation operator. Substitution into the equation (4) shows that the only term that contributes to the both the numerator and denominator are multiplied by the factor \\(T_{1}.T_{2}\\), where \\(T_{k}=|t_{k}|^{2}\\) is the conditional probability for a single photon in the output field to reach an ideal detector. This is because this average is normally ordered. Thus \\(G^{(2)}(\\tau)\\) is unchanged by loss since we have normalised it by the intensity that actually reaches the detector from each mode: in effect \\(G^{(2)}(\\tau)\\) is a conditional probability conditioned on only those detection events that give two counts, one at each detector. Single counts and no counts are discarded. These cases should be considered as failed, but heralded trials in which we discard and simply run again with another two single photons. However, this lowers the sensor's bandwidth. In section III, we explicitly include an optical loss factor, \\(\\varepsilon\\), which is defined as the number of failed trials over the total number of trials, to take the effect of these imperfections into account. As photon loss is heralded, LOQC error correction techniques might be employed to mitigate the loss of signal on such events.
Photon loss before the device, or failure of a source to produce a photon can likewise be modelled by inserting a beam splitter into the path of \\(a_{\\rm in}(t)\\). This changes the input state to a mixed state as follows. The input state is a two photon state of the form \\(a^{\\dagger}_{1_{\\rm in}}(t)a^{\\dagger}_{2_{\\rm in}}(t)|0\\rangle\\). We then transform \\(a_{1,{\\rm in}}(t)\\rightarrow\\bar{a}_{1,{\\rm in}}(t)=t_{k}a_{1,{\\rm in}}(t)+r_ {k}v_{1,{\\rm in}}(t)\\) where \\(v(t)\\) is a vacuum field mode annihilation operator. Thus the total state after the beam splitter is given by \\(\\bar{a}^{\\dagger}_{1,{\\rm in}}(t)\\bar{a}^{\\dagger}_{2,{\\rm in}}(t)|0\\rangle\\) but the actual input state to the device is given by tracing out over the two vacuum modes. This gives the input state as a mixed state of the form
\\[\\rho_{\\rm in} = T_{1}T_{2}|1\\rangle_{1}\\langle 1|\\otimes|1\\rangle_{2}\\langle 1|+T_{1} R_{2}|1\\rangle_{1}\\langle 1|\\otimes|0\\rangle_{2}\\langle 0|\\] \\[+R_{1}T_{2}|0\\rangle_{1}\\langle 0|\\otimes|1\\rangle_{2}\\langle 1|+R_{1 }R_{2}|0\\rangle_{1}\\langle 0|\\otimes|0\\rangle_{2}\\langle 0|.\\]
This also indicates that loss at the input is detected through the conditional input state, conditioned on counting two photons in total at ideal detectors. This is simply the first term in the above sum, which is the same pure state as for the case of perfect sources. The coefficient \\(T_{1}T_{2}\\) is simply the conditional probability that two input photons, one in each of the inputs, enter the device. Input loss or source inefficiency is also heralded in the detectors and those trials can be discarded. The fraction of trials retained in total, including input inefficiency and detector inefficiency, is simply \\(T_{1,{\\rm in}}T_{2,{\\rm in}}T_{1,{\\rm out}}T_{2,{\\rm out}}\\), where \\(T_{k,{\\rm in}}\\) and \\(T_{k,{\\rm out}}\\) are the conditional probabilities that a single photon enters the device in mode-k and that a single photon for output mode-k is detected.
In Fig. 1, the HOM dip for our system is depicted for particular values of \\(\\kappa/\\gamma\\) and \\(g/\\gamma\\). For \\(\\tau=0\\), where the input photons are indistinguishable, quantum interference results in photon bunching, or photon pairs, and we see the minimum of the coincidence probability, i.e., the HOM dip. As \\(\\tau\\) increases or decreases, the coincidence probability increases.
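The sensitivity of \\(G^{(2)}(0)\\) to the effective beam splitter can be seen already in the idealized, lossless single-mode limit, where two indistinguishable photons incident on a beam splitter of intensity transmissivity \\(T\\) give a coincidence probability \\((T-R)^{2}\\), with \\(R=1-T\\). The sketch below is only this textbook limit, not the full cavity model of equations (1)-(3); the mapping from \\(g\\), \\(\\kappa\\) and \\(\\gamma\\) to an effective \\(T\\) is left implicit.

```python
import numpy as np

def coincidence_probability(T):
    """Two indistinguishable photons on a lossless beam splitter with
    intensity transmissivity T: probability of one photon at each output."""
    R = 1.0 - T
    return (T - R) ** 2          # vanishes at T = 0.5 (the HOM dip)

def responsivity(T, dT=1e-6):
    # numerical |dP/dT|, a toy analogue of the responsivity R_g of eq. (6)
    return abs(coincidence_probability(T + dT)
               - coincidence_probability(T - dT)) / (2 * dT)

for T in (0.5, 0.6, 0.7):
    print(f"T={T:.2f}  P_cc={coincidence_probability(T):.3f}  |dP/dT|={responsivity(T):.2f}")
```

Away from the balanced point \\(T=0.5\\), the coincidence probability responds approximately linearly to small changes in \\(T\\), mirroring the choice of a bias point \\(g_{0}\\) discussed below.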
We define the responsivity of the sensor for detecting changes in \\(g\\) as
\\[R_{g}(g_{0},\\kappa)=\\left|\\frac{dG^{(2)}(0)}{dg}\\right|. \\tag{6}\\]
Operating at \\(\\tau=0\\) is optimal for most combinations of \\(\\gamma\\) and \\(\\kappa\\) and maximizes the responsivity. We then optimize the values of \\(\\kappa/\\gamma\\) and \\(g/\\gamma\\) so that the derivative of \\(G^{(2)}(0)\\) with respect to \\(g\\) is maximized. By maximizing the responsivity over our device parameters, \\(g_{0}\\) the initial beam splitter coefficient and the cavity damping rate \\(\\kappa\\), we can optimize the performance of our sensor. Because our sensor is a linear quantum system, we can analytically calculate \\(G^{(2)}(\\tau)\\) and its derivative for the initial state \\(|\\psi(0)\\rangle=|1_{a_{1},\\xi},1_{a_{2},\\eta}\\rangle\\); the full expression for \\(G^{(2)}(\\tau)\\) is given in appendix A. Figure 2 can serve as a guide for experimental implementations and device fabrication. Figure 2(a) shows \\(G^{(2)}(0)\\) as a function of \\(g/\\gamma\\) and \\(\\kappa/\\gamma\\). Figure 2(b) shows the behaviour of the system response for different operating points \\(g/\\gamma\\) and \\(\\kappa/\\gamma\\). The dashed line on Fig. 2(b) indicates the operating points at which \\(\\frac{dR_{g}}{dg}=0\\), where we can take advantage of the maximum sensor response. In addition, at
## III Noise Characteristics
Another important measure in characterizing the sensor performance is the Linear Dynamic Range (LDR) which is related to the estimation error and the sensor linearity which we now explore. The error in estimating \\(\\delta g\\) is related to the error in estimating \\(\\delta G^{(2)}(0)\\) in a finite number of samples
\\[\\delta g_{\\rm noise}=\\left|\\frac{dG^{(2)}(0)}{dg}\\right|^{-1}\\delta G^{(2)}(0 )_{\\rm noise}, \\tag{7}\\]
where
\\[\\delta G^{(2)}(0)_{\\rm noise}=\\frac{\\sqrt{G^{(2)}(0)(1-G^{(2)}(0))}}{\\sqrt{N( 1-\\varepsilon)}}, \\tag{8}\\]
is the standard deviation of a Bernoulli distribution with \\(N\\) trials. The loss factor, \\(\\varepsilon\\), has
Figure 2: Sensor response to variations in g. (a) Shows the behavior of the estimator, coincidence detection probability \\(G^{(2)}(0)\\), for indistinguishable input photons versus \\(g/\\gamma\\) and \\(\\kappa/\\gamma\\). (b) Shows how responsivity of the sensor varies by operating the sensor at different regimes of \\(g/\\gamma\\) and \\(\\kappa/\\gamma\\). The white dashed lines show the operating points for which sensor response is maximum and linear over the range of small changes in signal.
already been introduced in section II. The minimum detectable shift in \\(g\\) from the bias \\(g_{0}\\) should be larger than this error, i.e. \\(\\delta g_{\\rm min}>\\delta g_{\\rm noise}\\), so that we are able to measure it. For a large number of samples (\\(N\\rightarrow\\infty\\)), \\(\\delta g_{\\rm noise}\\) is negligible (up to accidental coincidences caused by dark counts or stray light). This result is useful for the estimation of a static or quasi static parameter.
We now give an order of magnitude estimate for \\(\\delta g_{\\rm min}\\) when the parameter is time varying. If \\(T_{\\rm meas}=\\tau_{\\rm rep}N\\) is the time between our samples of \\(g(t)\\), naive arguments from the Nyquist
Figure 3: Linear dynamic range. (a) Shows how sensor response changes at different operating points. If we operate the sensor on a bias \\(g_{0}\\) where sensor response is maximum, we can take advantage of the sensor linear response, up to small variations in \\(g\\). (b) Shows LDR for bias \\(\\frac{g_{0}}{\\gamma}=1.8\\) shown in (a) for different detection frequency bandwidths over \\(\\gamma\\), \\(\\frac{f}{\\gamma}\\). The red star shows the upper LDR limit that is the point up to which sensor responds linearly within 1% variation.
Shannon sampling theorem imply that we can not determine frequency components of \\(g(t)\\) greater than \\(f=1/(2T_{\\rm meas})\\), which is called the detection frequency bandwidth. For a one-sigma level of confidence we should have \\(N\\geq\\min\\{\\gamma,\\kappa\\}/(2f)\\) and the noise equivalent \\(\\delta g\\), given in equation (7), becomes
\\[\\delta g_{\\rm min}>\\frac{\\sqrt{2fG^{(2)}(0)(1-G^{(2)}(0))}}{R_{g}\\sqrt{\\min\\{ \\gamma,\\kappa\\}(1-\\varepsilon)}}. \\tag{9}\\]
Now we can calculate the LDR which is defined as
\\[{\\rm LDR}=20\\log\\frac{\\delta g_{\\rm max}}{\\delta g_{\\rm min}}, \\tag{10}\\]
where \\(\\delta g_{\\rm max}\\) is the point bellow which the sensor response is linear within 1% variation, i.e. \\(R_{g}(g)=R_{g}^{\\rm max}-0.01R_{g}^{\\rm max}\\) in which \\(R_{g}^{\\rm max}\\equiv\\max\\limits_{g_{0},\\kappa}R_{g}(g_{0},\\kappa)\\).
In Fig. 3(a), responsivity is plotted with respect to \\(g\\) for an arbitrary value of \\(\\kappa\\). We bias the initial coupling between the optical resonators (\\(g_{0}\\)) where the responsivity peaks. Therefore, there is a range of \\(\\delta g=g-g_{0}\\) for which the sensor behaves linearly. LDR is shown in Fig. 3(b) for some arbitrary detection bandwidth in units of \\(\\gamma\\). For smaller choices of \\(f/\\gamma\\), \\(\\delta g_{\\rm noise}\\) will be decreased, so the sensor can resolve smaller shifts in \\(g\\).
## IV HOM sensor implementation
We now consider specific physical applications for our sensor, first as a force sensor and then as a refractive index sensor, employing coupled L3 PhC cavities [8; 9; 10; 11; 12] experimental data to estimate its responsivity and minimum resolvable shift in signal for each case. By examining the normal mode splitting reported in these references we infer the coupling strength \\(g\\) between the PhC resonators is of the order of \\(10^{11}-10^{15}\\) Hz. The evanescent coupling strength between the resonators and waveguides \\(\\kappa\\) can be tailored, so that \\(\\kappa\\sim g\\) for example. Operating as a force sensor, the measured signal is the shift in cavity separation induced by an applied force or a strain, while operating as a refractive index sensor, the signal to be measured is a change in refractive index induced by the presence of a molecule dropped on the air holes between the PhC resonators, for a constant bias cavity separation. A shift in either cavity separation, call it \\(x\\), or refractive index, call it \\(n\\), modifies the coupling strength between the resonators which will be detected by measuring \\(G^{2}(0)\\). Therefore, to give an order of magnitude estimation of the responsivity and minimum detectable signal in each case we need to investigate the dependence of \\(g\\) as a function of \\(x\\) and \\(n\\). To do this we used a 1D model analysed by the transfer matrix method [35] (see Appendix B) to investigate the dependence of the cavity normal mode splitting on the change in cavity separation or refractive index.
In the case of identical resonators, \\(\\omega_{1}=\\omega_{2}=\\omega\\) and \\(\\kappa_{1}=\\kappa_{2}\\), the splitting in frequencies of the symmetric and asymmetric normal cavity modes is \\(\\Delta\\Omega=2g\\)[8; 36]. Therefore, we can write \\(g=\\pi c\\Delta\\lambda/\\lambda^{2}\\), where \\(c\\) is the speed of light and \\(\\lambda\\) is the cavity mode wavelength. Since \\(\\pi c/\\lambda^{2}\\) is a constant, to find the functionality of \\(g\\) with \\(x\\) and \\(n\\), we need to find the functionality of \\(\\Delta\\lambda\\) with those parameters. Numerics show that an exponential function of the form \\(g=ae^{-bx}\\) fits very well on data achieved for normal mode splitting change versus different cavity separations (see Appendix B) and an exponential of the form \\(g=ae^{bn^{2}}\\) can describe the changes with respect to refractive index (see Appendix B). Hence, we can generally write \\(g(x,n)=ae^{-bx+dn^{2}}\\). We extract the coefficients \\(a,b\\) and \\(d\\) by fitting data from figure 2 of citation [12] for a PhC made of GaAs/AlGaAs (see Appendix C). According to their data \\(g\\) is on the order of \\(10^{12}-10^{13}\\) Hz for this range of \\(x_{\\rm bias}\\) that is shown in Fig. 4. We have chosen \\(\\kappa\\) of the order of \\(10^{13}\\) Hz.
### HOM force sensor
First we investigate the efficiency of our system operating as a force sensor. In this case \\(n=1\\), so by substituting \\(g(x,1)\\) into equation (4) we can see how the probability of joint detections changes for different operating points \\(x_{\\rm bias}\\) (Fig. 4(a)). The sensor response to changes in \\(x\\) is calculated as \\(R_{x}(x_{0},\\kappa)=|\\frac{dG^{2}(0)}{dx}|_{x=x_{0}}\\). Figure 4(b) shows that for an input photon bandwidth on the order of \\(\\gamma=1\\)GHz, which is experimentally feasible at the moment [37; 38], sensor response to shifts in \\(x\\) is of the order of \\(10^{-3}(\\rm nm)^{-1}\\). Minimum detectable \\(x\\) can be easily related to \\(\\delta g_{\\rm min}\\) as \\(\\delta x_{\\rm min}=\\frac{-1}{bg}\\delta g_{\\rm min}\\). Figure 4(c) shows this noise equivalent \\(x\\) is of the order of \\(10^{-3}(\\rm nm/\\sqrt{Hz})\\). Young's modulus for GaAs is \\(E=85.5\\)GPa [39]. Therefore, for the given lattice with a thickness of \\(t\\simeq 1\\mu\\)m the stiffness of GaAs is \\(k=\\frac{E}{t}\\simeq 85.5\\frac{\\rm kN}{\\rm m}\\). Minimum detectable force is shown in Fig. 4(d) and is of the order of \\(10^{-7}\\)N which compares rather well with the high resolution PhC force sensors [40; 41] exploiting coherent light. However, these schemes use significantly larger input power while in our results not only is the pulse power (1 ph/pulse) low but also the average power (Figure 4: Performance of HOM sensor as a force sensor. This figure is a fabrication guide to building a HOM force sensor with maximum performance. (a) Shows how the estimator evolves by changing the operating point \\(x_{\\text{bias}}\\) for an input photon bandwidth of \\(\\gamma=1\\)GHz. (b) For the given \\(\\gamma\\) and \\(\\kappa\\), the responsivity is of the order of \\(10^{-3}\\)(nm)\\({}^{-1}\\). The white dashed lines show the operating points where system response to displacement shift is linear. (c) Shows that the minimum detectable change in distance is of the order of \\(10^{-3}\\)nm\\(/\\sqrt{\\text{Hz}}\\)). (d) For PhC made of GaAs/AlGaAs, the given value for minimum detectable x corresponds to minimum detectable forces of the order of \\(10^{-7}\\)N. Our calculations show that as we reduce kappa, gradually we loose the linear behaviour of the sensor (white dashed lines) for smaller \\(x_{\\text{bias}}\\) as the best bias points shifts towards larger \\(x\\) or smaller g without improving or decreasing the sensor resolution.
W), which is defined by the emission rate of the current single photon sources (\\(\\sim\\) GHz), is also low.
Importantly, fabricating the cavities with a smaller \\(\\kappa\\) does not affect the sensor resolution but shifts the optimum operating points at which the sensor behaves linearly (white dashed lines in Fig. 4) towards larger \\(x_{\\text{bias}}\\).
### HOM refractive index sensor
To operate the system as a refractive index sensor we operate at a fixed \\(x_{\\text{bias}}\\), so \\(g(n)=ae^{-bx_{\\text{bias}}+dn^{2}}\\). System response to refractive index shift is \\(R_{n}(x_{0},\\kappa)=|\\frac{dG^{2}(0)}{dn}|_{n=1}\\) and the minimum detectable refractive index shift is calculated as \\(\\delta n_{\\text{min}}=(\\delta g_{\\text{min}}/2dgn)|_{n=1}\\). Figure 5(a) is a fabrication guide for \\(\\gamma=1\\)GHz to find the best operating points to achieve maximum responsivity together with linear response. Figure 5(b) predicts a resolution of the
Figure 5: Performance of HOM sensor as a refractive index sensor. This figure is a fabrication guide to building a HOM refractive index sensor with maximum performance. (a) Shows responsivity of the refractive index sensor for different operating points \\(x_{\\text{bias}}\\) for an input photon bandwidth of \\(\\gamma=1\\)GHz. The white dashed lines show the bias points where sensor response changes linearly for very small changes in refractive index. Our theory predicts that responsivity does not depend on single photon band width \\(\\gamma\\). (b) Predicts that for \\(\\gamma=1\\)GHz the minimum detectable refractive index shift is of the order of \\(10^{-6}\\)RIU\\(/\\sqrt{\\text{Hz}}\\).
order of \\(10^{-6}\\) refractive index unit (RIU) per \\(\\sqrt{\\text{Hz}}\\) for single photon bandwidth of \\(\\gamma=1\\)GHz. Up to the best of our knowledge the best resolution achieved in schemes [42; 43] is of the order of \\(10^{-7}\\)RIU per \\(\\sqrt{\\text{Hz}}\\), however these use more input power.
## V Conclusion
In conclusion we have described a uniquely quantum protocol for a PhC sensor based on two coupled cavities. Our proposal uses single photon states, not coherent states, and operates on Hong-Ou-Mandel interference; a fourth order interference effect. The visibility for such a HOM proposal is dependent on changes in the coupling between the cavities, which is in turn dependent on shifts in the cavity separation distance and/or refractive index of the medium in between the two cavities. Very small changes in such parameters result in modulation of the cavity coupling rate that can be observed by measuring the resulting change in the HOM dip. Our results predict minimum detectable values for refractive index and force changes, \\(10^{-6}\\) RIU per \\(\\sqrt{\\text{Hz}}\\) and \\(10^{-7}\\) N, respectively. This estimation is based on the parameters obtained form the current experimental implementations of coupled L3 PhC cavities in GaAs/AlGaAs [12] and is not specifically optimized for our sensor. Further development of PhC technology for the sensor presented here could offer significant improvements in the performance. Our results show that high sensitivity can be reached upon achieving high repetition rate single photon sources.
The advantages of the presented scheme are as follows. This scheme can be implemented on chip and fabricated in micro-scale dimensions. Moreover, unlike sensing approaches based on transmission spectrum of a L3 cavity coupled to a waveguide, this approach does not require spectral resolution that reduces the bandwidth. Additionally, this scheme can be a multi-purpose sensor. In this article, we have discussed force (strain) and refractive index sensing. With minor modifications, it can be used for other targets such as local temperature, pressure and particle detection and analysis.
A \\(\\sqrt{2}\\) improvement in estimation accuracy over schemes using coherent light and a \\(\\sqrt{2}\\) improvement in bandwidth over schemes using serial single photons can be achieved. The improvement of \\(\\sqrt{2}\\) in estimation accuracy may sound underwhelming. But a different accounting philosophy shows why this factor is important (see Appendix D for further details). Suppose we want our estimate of \\(g\\) to have a mean square error of order \\(10^{-4}\\) and we ask how many experiments, on average, we must perform to acheive this precision. Then coherent light would require \\(50\\times 10^{6}\\) experiments; a serial single photon approach requires \\(24\\times 10^{6}\\) experiments; our HOM approach requires \\(12.5\\times 10^{6}\\) experiments. That is we have quartered the number of required experiments relative to the coherent light case and halved it relative to the serial single photon case. Due to this the reduction in samples the bandwith of our sensor relative to both cases is increased.
The disadvantage of this single-photon-based scheme compared to those using coherent light is the difficulty in building reliable single photon sources and detectors.
To summarise, the key point we are making in the paper is that if one has already made a commitment to single photonics in order to gain access to the quantum information processing that this provides, single-photon metrology can also be added to the suite of tools. HOM interference is the key phenomena that enables scalable quantum information processing in single photonics with linear optics.
## VI Appendix A
The solutions to the quantum Langevin equations (3) are given by
\\[a_{1}(t) = \\sqrt{\\kappa}\\Big{[}A(t)\\int_{0}^{t}dt^{\\prime}\\Big{(}C(t^{ \\prime})a_{1,\\rm in}(t^{\\prime})+D(t^{\\prime})a_{2,\\rm in}(t^{\\prime})\\Big{)}\\] \\[+B(t)\\int_{0}^{t}dt^{\\prime}\\Big{(}D(t^{\\prime})a_{1,\\rm in}(t^{ \\prime})+C(t^{\\prime})a_{2,\\rm in}(t^{\\prime})\\Big{)}\\Big{]},\\] \\[a_{2}(t) = \\sqrt{\\kappa}\\Big{[}B(t)\\int_{0}^{t}dt^{\\prime}\\Big{(}C(t^{ \\prime})a_{1,\\rm in}(t^{\\prime})+D(t^{\\prime})a_{2,\\rm in}(t^{\\prime})\\Big{)} \\tag{11}\\] \\[+A(t)\\int_{0}^{t}dt^{\\prime}\\Big{(}D(t^{\\prime})a_{1,\\rm in}(t^{ \\prime})+C(t^{\\prime})a_{2,\\rm in}(t^{\\prime})\\Big{)}\\Big{]},\\]
where \\(A(t)=e^{-\\kappa t/2}\\cos(gt)\\), \\(B(t)=-ie^{-\\kappa t/2}\\sin(gt)\\), \\(C(t)=e^{\\kappa t/2}\\cos(gt)\\) and \\(D(t)=ie^{\\kappa t/2}\\sin(gt)\\). By using the above solutions for the cavity mode in the input-output relation (2), we can analytically calculate the joint detection probability as
\\[G^{(2)}(\\tau)=\\frac{\\frac{3}{2}\\tau^{(\\kappa+\\gamma)}}{A}\\left(Be^{\\frac{3}{2 }\\tau^{(\\kappa+\\gamma)}}+Ce^{\\frac{1}{2}\\tau^{(3\\kappa+\\gamma)}}\\right.\\left.+ De^{\\frac{1}{2}\\tau^{(\\kappa+3\\gamma)}}+Ee^{\\tau^{(\\kappa+\\gamma)}}\\right), \\tag{12}\\]
where
\\[A=(4g^{2}+\\kappa^{2})^{2}\\bigg{(}16g^{4}+(\\gamma^{2}-\\kappa^{2})^{2}+8g^{2}( \\gamma^{2}+\\kappa^{2})\\bigg{)}^{2},\\]\\[B=(4g^{2}+(\\gamma-\\kappa)^{2})^{2}\\bigg{(}256g^{8}+\\kappa^{4}(\\gamma+\\kappa)^{4}+8 g^{2}(\\gamma^{2}-2\\kappa^{2})(16g^{4}+\\kappa^{2}(\\gamma+\\kappa)^{2})\\]
\\[+16g^{4}(\\gamma^{4}+2\\gamma^{2}\\kappa^{2}+20\\gamma\\kappa^{3}+22\\kappa^{4}) \\bigg{)},\\]
\\[C=-32g^{2}\\kappa^{2}(4g^{2}+\\gamma^{2}-\\kappa^{2})^{2}(4g^{2}+\\kappa^{2})^{2},\\]
\\[D=-32g^{2}\\gamma^{2}\\kappa^{2}F^{2},\\]
\\[E=-64g^{2}\\gamma\\kappa^{2}(4g^{2}+\\gamma^{2}-\\kappa^{2})(4g^{2}+\\kappa^{2})F,\\]
and
\\[F=\\kappa(-12g^{2}-\\gamma^{2}+\\kappa^{2})\\cos(g\\tau)+2g(4g^{2}+\\gamma^{2}-3 \\kappa^{2})\\sin(g\\tau).\\]
## VII Appendix B
To find the functionality of coupling strength \\(g\\) with separation distance between the cavities and refractive index of the media in between the two cavities, we can use the analogy of the coupled cavities with a quantum double-well problem with a potential barrier in between, where we need to find the tunnelling rate \\(g\\). Solving this problem shows the functionality of \\(g\\) with the width of the barrier \\(x\\) and the height of the barrier \\(V\\) scales as \\(g\\propto\\exp\\{-bx-dV\\}\\). According to citations [44; 45], which introduce the optical equivalence of the Scrodinger equation, \\(V\\propto-n^{2}\\). Therefore, we expect the coupling strength between the cavities to scale as \\(g\\propto\\exp\\{-bx+dn^{2}\\}\\). To be more precise a one dimensional optical modelling simulation has been performed to find the functionality of coupling strength \\(g\\) with \\(x\\) and \\(n\\), which suggest a very good fit can be achieved with the above given functionality. The simulations are done by the transfer matrix method described in [35]. The results are shown in Fig. 6.
Figure 6: Approximating the optical coupling between two GaAs cavities (L = 450 nm) placed in the middle of a distributed Bragg reflector (DBR) stack comprising GaAs (d=225 nm) and air (d=112 nm) pairs versus cavities separation (a,c) and air holes refractive index (b,d). (a) Normalised transmission of the stack showing the normal modes of the two cavities for different x. The dashed line corresponds to the unperturbed single cavity mode confined by the DBR stack. (b) Normal modes shown for x=449 nm (two air and one GaAs layers) and varying the refractive index of the air layers between the cavities. By increasing the air hole refractive index the mode separation increases which corresponds to stronger coupling between the cavities due to the decrease in the refractive index offset of the DBR layers. (c) coupling frequency calculated from (a) versus cavity separation. The dashed line corresponds to an exponential decay fitting. (d) coupling frequency as a function of air hole refractive index calculated from figure (b). The dashed curve corresponds to an exponential fitting of \\(ae^{bn^{2}}\\).
## VIII Appendix C
We choose experimental data for a PhC lattice made of GaAs/AlGaAs given in [12] as an example to find the functionality of coupling strength \\(g\\) in terms of cavities separation \\(x\\) and the refractive index of the dielectric material \\(n\\). Figure 7 shows the numbers for the best found fitted function.
## IX Appendix D
Here we back up the claims made in the conclusion, further details will be available in a future publication. Here we assume a mean flux greater than two photons may damage a sample. We model our coupled cavity sensor as an effective beam splitter between the input modes such that the beam splitter reflectivity \\(g\\) is a function of the parameter to be estimated, \\(x\\), i.e \\(g(x)\\). In the case of very small changes in \\(x\\), this functional relationship can be linearized about an operating point \\(x_{0}\\) so that changes in \\(\\theta\\) are linearly proportional to changes in \\(g\\).
Now we ask what is the ultimate limit imposed by quantum theory on the precision of ourestimate \\(g_{\\rm est}\\) of \\(g\\). This, of course depends on the state used to probe the beam-splitter. If the mean squared error (MSE) i.e. \\(\\mathbb{E}[(g-g_{\\rm est})^{2}]\\) quantifies the performance of our estimation scheme then the quantum Cramer-Rao bound provides a method to answer this question. The quantum Cramer-Rao bound states that asymptotically the precision of an unbiased estimation scheme is bounded below by \\(1/\\sqrt{NI(g)}\\) where \\(N\\) is the number of experimental trials and \\(I(g)\\) is the quantum Fisher information with respect to the input state.
If the input state is a coherent state then the quantum Fisher information is generally \\(I_{\\alpha}(g)=|\\alpha|^{4}\\) where \\(|\\alpha|^{2}\\) is the mean photon number at the input. So for the cases of interest \\(I_{\\alpha}(g)=1\\) and \\(I_{\\alpha}(g)=4\\). If the input state is a Fock state then the quantum Fisher information is [34]: \\(I_{F}(g)=4\\) (input photon number = 1), \\(I_{F}(g)=16\\) (input photon number = 2). The numbers quoted in the main text can be arrived at by setting MSE = \\(10^{-4}\\) and solving for \\(N\\).
Loss is easily included using the standard beam splitter model discussed in section 2. Coherent states are not entangled on beam splitters so tracing out the vacuum modes leaves the signal mode in a coherent state with amplitude reduced by \\(\\alpha\\to t\\alpha\\), where \\(|t|^{2}=T\\) is the probability for a photon _not_ to be lost. The Fisher information [46] is then simply rescaled to reflect the loss of amplitude. Single photon product states at the input to a beam splitter do become entangled at the output. However, we can trace out the vacuum modes to give a mixed input state for the case of \\(n=2\\) in a HOM experiment of the form
\\[\\rho_{\\rm in} = T_{1}T_{2}|1\\rangle_{1}\\langle 1|\\otimes|1\\rangle_{2}\\langle 1|+T_{ 1}R_{2}|1\\rangle_{1}\\langle 1|\\otimes|0\\rangle_{2}\\langle 0|\\] \\[+R_{1}T_{2}|0\\rangle_{1}\\langle 0|\\otimes|1\\rangle_{2}\\langle 1|+R_ {1}R_{2}|0\\rangle_{1}\\langle 0|\\otimes|0\\rangle_{2}\\langle 0|.\\]
The Fisher information for a mixed state is more difficult to calculate as it involves the logarithmic derivative. However, because the mixed state above is so simple it is easy to see that, conditioned on detecting two photons for correct heralded operation, the Fisher information is rescaled by \\(T_{1}^{2}T_{2}^{2}\\).
###### Acknowledgements.
We acknowledge the support of the Australian Research Council Centre of Excellence for Engineered Quantum Systems, CE110001013. SBE and AA were funded by the University of Queensland International Scholarship. JC was supported in part by National ScienceFoundation Grant Nos. PHY-1212445 and PHY-1314763. Authors thank Martin Ringbauer, Michael Vanner and Devon Biggerstaff for helpful discussions.
## References
* (1) J. T. Heeres and P. J. Hergenrother, \"High-throughput screening for modulators of protein-protein interactions: use of photonic crystal biosensors and complementary technologies,\" Chem. Soc. Rev. **40**(8), 4398-4410 (2011).
* (2) M. G. Scullion, T. F. Krauss, and A. Di Falco, \"Slotted photonic crystal sensors,\" Sensors **13**(3), 3675-3710 (2013).
* (3) J. T. Zhang, L. Wang, J. Luo, A. Tikhonov, N. Kornienko, and S. A. Asher, \"2-D array photonic crystal sensing motif,\" J. Am. Chem. Soc. **133**(24), 9152-9155 (2011).
* (4) A. M. R. Pinto, J. M. Baptista, J. L. Santos, M. Lopez-Amo, and O. Frazao, \"Microdisplacement sensor based on a hollow-core photonic crystal fiber,\" Sensors **12**(12), 17497-17503 (2012).
* (5) O. Frazao, J. L. Santos, F. M. Arajo, and L. A. Ferreira, \"Optical sensing with photonic crystal fibers,\" Laser Photon. Rev. **2**(6), 449-459 (2008).
* (6) W. C. L. Hopman, P. Pottier, D. Yudistira, J. van Lith, P. V. Lambeck, R. M. De La Rue, A. Driessen, H. J. W. M. Hoekstra, and R. M. de Ridder, \"Quasi-one-dimensional photonic crystal as a compact building-block for refractometric optical sensors,\" IEEE J. Sel. Top. Quantum Electron. **11**(1), 11-16 (2005).
* (7) S. C. Buswell, V. A. Wright, J. M. Buriak, V. Van, and S. Evoy, \"Specific detection of proteins using photonic crystal waveguides,\" Opt. Express **16**(20), 15949-15957 (2008).
* (8) K. A. Atlasov, K. F. Karlsson, A. Rudra, B. Dwir, and E. Kapon, \"Wavelength and loss splitting in directly coupled photonic-crystal defect microcavities,\" Opt. Express **16**(20), 16255-16264 (2008).
* (9) S. Haddadi, A. M. Yacomotti, I. Sagnes, F. Raineri, G. Beaudoin, L. Le Gratiet, and J. A. Levenson, \"Photonic crystal coupled cavities with increased beaming and free space coupling efficiency,\" Appl. Phys. Lett. **102**(1), 011107 (2013).
* (10) K. A. Atlasov, R. Rudra, B. Dwir, and E. Kapon, \"Large mode splitting and lasing in optimally coupled photonic-crystal microcavities,\" Opt. Express **19**(3), 2619-2625 (2011).
* (11) A. R. A. Chalcraft, S. Lam, B. D. Jones, D. Szymanski, R. Oulton, A. C. T. Thijssen, M. S. Skolnick, D. M. Whittaker, T. F. Krauss, and A. M. Fox, \"Mode structure of coupled L3 photonic crystal cavities,\" Opt. Express **19**(6), 5670-5675 (2011).
* (12) S. Lam, A. R. Chalcraft, D. Szymanski, R. Oulton, B. D. Jones, D. Sanvitto, D. M. Whittaker, M. Fox, M. S. Skolnick, D. O'Brien, T. F. Krauss, H. Liu, P. W. Fry, and M. Hopkinson, \"Coupled resonant modes of dual L3-defect planar photonic crystal cavities,\" In Quantum Electronics and Laser Science Conference (p. QFG6), Optical Society of America, (2008). [http://www.opticsinfobase.org/abstract.cfm?URI=QELS-2008-QFG6](http://www.opticsinfobase.org/abstract.cfm?URI=QELS-2008-QFG6)
* (13) S. Chakravarty, A. Hosseini, X. Xu, L. Zhu, Y. Zou, and R. T. Chen, \"Analysis of ultra-high sensitivity configuration in chip-integrated photonic crystal microcavity bio-sensors,\" Appl. Phys. Lett. **104**(19), 191109 (2014).
* (14) A. Di Falco, L. O'Faolain, and T. F. Krauss, \"Chemical sensing in slotted photonic crystal heterostructure cavities,\" Appl. Phys. Lett. **94**(6), 063503 (2009).
* (15) M. R. Lee and P. M. Fauchet, \"Nanoscale microcavity sensor for single particle detection,\" Opt. Lett. **32**(22), 3284-3286 (2007).
* (16) E. Knill, R. Laflamme, and G. J. Milburn, \"A scheme for efficient quantum computation with linear optics,\" Nature **409**(6816), 46-52 (2001).
* (17) C. K. Hong, Z. Y. Ou, and L. Mandel, \"Measurement of subpicosecond time intervals between two photons by interference,\" Phys. Rev. Lett. **59**(18), 2044 (1987).
* (18) S. Kolkowitz, A. C. B. Jayich, Q. P. Unterreithmeier, S. D. Bennett, P. Rabl, J. G. E. Harris, and M. D. Lukin, \"Coherent sensing of a mechanical resonator with a single-spin qubit,\" Science **335**(6076), 1603-1606 (2012).
* (19) E. Gavartin, P. Verlot, and T. J. A. Kippenberg, \"hybrid on-chip optomechanical transducer for ultrasensitive force measurements,\" Nature Nanotechnol. **7**(8), 509-514 (2012).
* (20) R. Maiwald, D. Leibfried, J. Britton, J. C. Bergquist, G. Leuchs, and D. J. Wineland, \"Stylus ion trap for enhanced access and sensing,\" Nature Phys. **5**(8), 551-554 (2009).
* (21) M. A. Taylor, J. Janousek, V. Daria, J. Knittel, B. Hage, H. A. Bachor, and W. P. Bowen, \"Biological measurement beyond the quantum limit,\" Nature Photon. **7**(3), 229-233 (2013).
* (22) P. C. Humphreys, B. J. Metcalf, J. B. Spring, M. Moore, P. S. Salter, M. J. Booth, W. S. Kolthammer, and I. A. Walmsley, \"Strain-optic active control for quantum integrated photonics,\" Opt. Express **22**(18), 21719-21726 (2014).
* (23) K. Nozaki, T. Tanabe, A. Shinya, S. Matsuo, T. Sato, H. Taniyama, and M. Notomi, \"Subfemtojoule all-optical switching using a photonic-crystal nanocavity,\" Nature Photon. **4**(7), 477-483 (2010).
* (24) J. D. Cohen, S. M. Meenehan, and O. Painter, \"Optical coupling to nanoscale optomechanical cavities for near quantum-limited motion transduction,\" Opt. Express **21**(9), 11227-11236 (2013).
* (25) A. G. Krause, M. Winger, T. D. Blasius, Q. Lin, and O. Painter, \"A high-resolution microchip optomechanical accelerometer,\" Nature Photon. **6**(11), 768-772 (2012).
* (26) A. Faraon, A. Majumdar, D. Englund, E. Kim, M. Bajcsy, and J. Vuckovic, \"Integrated quantum optical networks based on quantum dots and photonic crystals,\" New J. Phys. **13**(5), 055025 (2011).
* (27) M. J. Collins, C. Xiong, I.H. Rey, T.D. Vo, J. He, S. Shannia, C. Reardon, T.F. Krauss, M.J. Steel, A.S. Clark, and B.J. Eggleton, \"Integrated spatial multiplexing of heralded single-photon sources,\" Nature Commun. **4**, 3582 (2013).
* (28) A. Politi, M. J. Cryan, J. G. Rarity, S. Yu, S., and J. L. O'Brien, \"Silica-on-silicon waveguide quantum circuits,\" Science **320**(5876), 646-649 (2008).
* (29) C. Lang, C. Eichler, L. Steffen, J. M. Fink, M. J. Woolley, A. Blais, and A. Wallraff, \"Correlations, indistinguishability and entanglement in Hong-Ou-Mandel experiments at microwave frequencies,\" Nature Phys. **9**(6), 345-348 (2013).
* (30) M. J. Holland and K. Burnett, \"Interferometric detection of optical phase shifts at the Heisenberg limit,\" Phys. Rev. Lett. **71**(9), 1355 (1993).
* (31) E. Chow, A. Grot, L. W. Mirkarimi, M. Sigalas, and G. Girolami, \"Ultracompact biochemical sensor built with two-dimensional photonic crystal microcavity,\" Opt. Express **29**(10), 1093-1095 (2004).
* (32) C. W. Gardiner and P. Zoller, _Quantum noise: a handbook of Markovian and non-Markovian quantum stochastic methods with applications to quantum optics_, (Springer Series in Synergetics, Springer, 2004).
* (33) D. F. Walls and G. J. Milburn, _Quantum optics_, (Springer, Berlin, 2008).
* (34) A Datta, L. Zhang, N. Thomas-Peter, U. Dorner, B. J. Smith, and I. A. Walmsley, \"Quantum metrology with imperfect states and detectors,\" Phys. Rev. A **83**(6), 063836 (2011).
* (35) L. A. A. Pettersson, L. S. Roman, and O. Inganas, \"Modeling photocurrent action spectra of photovoltaic devices based on organic thin films,\" J. Appl. Phys. **86**(1), 487-496 (1999).
* (36) C. Cohen-Tannoudji, B. Diu, and F. Laloe, _Quantum mechanics_, (Wiley, New York, 1977).
* (37) E. Stock, W. Unrau, A. Lochmann, J. A. Tofflinger, M. Ozturk, A. I. Toropov, A. K. Bakarov, V. A. Haisler, and D. Bimberg, \"High-speed single-photon source based on self-organized quantum dots,\" Semicond. Sci. Technol. **26**(1), 014003 (2011).
* (38) S. Buckley, K. Rivoire, J. Vuckovic, \"Engineered quantum dot single-photon sources,\" Rep. Prog. Phys. **75**(12), 126503 (2012).
* (39) S. M. Sze, _Semiconductor sensors_, (Wiley, 1994).
* (40) Y. Yang, D. Yang, H. Tian, and Y. Ji, \"Photonic crystal stress sensor with high sensitivity in double directions based on shoulder-coupled aslant nanocavity,\" Sens. Actuators A: Phys. **193**, 149-154 (2013).
* (41) D. Yang, H. Tian, N. Wu, Y. Yang, and Y. Ji, \"Nanoscale torsion-free photonic crystal pressure sensor with ultra-high sensitivity based on side-coupled piston-type microcavity,\" Sens. Actuators A: Phys. **199**, 30-36 (2013).
* (42) D. K. C. Wu, B. T. Kuhlmey, and B. J. Eggleton, \"Ultrasensitive photonic crystal fiber refractive index sensor,\" Opt. Express **34**(3), 322-324 (2009).
* (43) D. K. Wu, K. J. Lee, V. Pureur, and B. T. Kuhlmey, \"Performance of refractive index sensors based on directional couplers in photonic crystal fibers,\" J. Light Technol. **31**(22), 3500-3510 (2013).
* (44) M. M. A. Marte and S. Stenholm, \"Paraxial light and atom optics: the optical Schrodinger equation and beyond,\" Phys. Rev. A **56**(4), 2940 (1997).
* (45) M. Jaaskelainen, M. Lombard, and U. Zulicke, \"Refraction in spacetime,\" Am. J. Phys. **79**(6), 672-677 (2011).
* (46) S. L. Braunstein, C. M. Caves, and G. J. Milburn, \"Generalized uncertainty relations: Theory, examples, and Lorentz invariance,\" Ann. Phys. **247**, 135-173 (1996). | Photonic-crystal-based integrated optical systems have been used for a broad range of sensing applications with great success. This has been motivated by several advantages such as high sensitivity, miniaturization, remote sensing, selectivity and stability. Many photonic crystal sensors have been proposed with various fabrication designs that result in improved optical properties. In parallel, integrated optical systems are being pursued as a platform for photonic quantum information processing using linear optics and Fock states. Here we propose a novel integrated Fock state optical sensor architecture that can be used for force, refractive index and possibly local temperature detection. In this scheme, two coupled cavities behave as an \"effective beam splitter\". The sensor works based on fourth order interference (the Hong-Ou-Mandel effect) and requires a sequence of single photon pulses and consequently has low pulse power. Changes in the parameter to be measured induce variations in the effective beam splitter reflectivity and result in changes to the visibility of interference. We demonstrate this generic scheme in coupled L3 photonic crystal cavities as an example and find that this system, which only relies on photon coincidence detection and does not need any spectral resolution, can estimate forces as small as \\(10^{-7}\\) Newtons and can measure one part per million change in refractive index using a very low input power of \\(10^{-10}\\)W. Thus linear optical quantum photonic architectures can achieve comparable sensor performance to semiclassical devices. | Condense the content of the following passage. | 292 |
arxiv-format/1712_04687v1.md | # Can Balloons Produce Li-Fi?
A Disaster Management Perspective
Atchutananda Surampudi\\({}^{1}\\), Sankalp Sirish Chapalgaonkar\\({}^{1}\\) and Paventhan Arumugam\\({}^{2}\\)
\\({}^{1}\\)Department of Electrical Engineering, Indian Institute of Technology Madras,
\\({}^{2}\\)ERNET, IIT Madras Research Park.
\\({}^{1}\\){ee16s003, ee15b018}@ee.iitm.ac.in, \\({}^{2}\\)[email protected]
## I Introduction
Visible-light-communications (VLC) or Light-Fidelity (Li-Fi) has gained a lot of prominence among the research groups around the world for it's secure, safe, energy efficient and high speed data transfer characteristics over a wireless medium. The use of visible light, which has given life to the planet, is now paving the path for a new form of wireless communications [1]. This technology has both the characteristics of providing data access and illumination at the same time over linear-time-invariant optical wireless channels [2, 3].
Natural disasters like earthquakes or floods, disrupt the available wireless communication infrastructure and as a result the wireless bandwidth becomes scarce. But, in such scenarios, communication becomes an important aspect of the emergency response. A safe, high speed, energy efficient and spectrum efficient wireless access technology is the need of the hour. There have been several technologies in use for emergency communications, like the ham radio [4], cognitive radio [5, 6] etc. But, when the demand becomes large or large number of people have to be saved, these technologies remain sub-optimal due to the limited radio frequency (RF) spectrum available in such a situation. Also, if floods are considered, the people trapped below flood waters have to be saved immediately. Communicating with them, or to at least receive their location using some uplink signatures becomes a priority. RF waves cannot penetrate through flood waters, whereas the light wave can. So, Li-Fi can be a reliable and energy efficient option to improve the communication capacity along with illuminating the affected area. Lately, for practical purposes and outdoor applications, Li-Fi has been implemented using fixed light-emitting-diode (LED) sources [7]. The question that arises is, can these Li-Fi LEDs fly in the air, without any constraints, at a given height near the ground at around \\(10\\) to \\(15\\) metres, to provide the same data access whenever and wherever required? This idea of a mobile Li-Fi unit can be a good solution during such natural calamities. This can be achieved by using a bunch of centrally monitored balloons, which can carry the Li-Fi LEDs operated by low power batteries to illuminate as well as provide data access (or broadcast services) to a given region for a given period of time.
### _Our approach and contributions_
Our contributions are as follows.
* _The Li-Fi Balloon -_ We present an overview of the design for the balloon using the Philips Li-Fi hardware.
* The concept of LiBNet and the overall functioning.
* The mean co-channel interference is derived for homogenous Poisson arrangement of balloons in one and two dimensions.
### _Context of application:_
#### I-B1 Earthquakes
During earthquakes, every other communication infrastructure above or below the ground gets disrupted. Also, the problem deepens when any gas leaks happen and become susceptible to combustion. So, using the LiBNet, both illumination and high speed data services can be provided, in a safe way, without interfering with the RF frequencies.
#### I-B2 Floods
In a flood affected region, LiBNet will be able to send a broadcast inside the water surface and receive any uplink messages from the people or pets trapped below. The balloons can coordinate to provide tracking services as well.
The further organisation of the paper is as follows. In section II, we give a brief overview of the design of the Li-Fi balloonusing the hardware from Philips. In section III, we present the concept of LiBNet and assuming a poisson point process arrangement, we derive the mean co-channel interference in such a scenario. The paper concludes with section IV.
## II The Li-Fi Balloon
The Li-Fi balloon can be designed with the outline presented in Fig. 1.
### _The Base_
A Li-Fi balloon will contain all the necessary communication equipment and the power source. The construction of the balloon is as follows. It consists of a base (closed basket) onto which four downlink LEDs and uplink receivers are placed (shaded area). The shaded areas surround the empty area at the centre, where the power and signal processing modem is placed. Four wireless nodes (with beam forming) are placed in alternate corners of the base, for inter-balloon communication. The material used for the basket can be any strong waterproof and non-conducting fiber [8].
Distribution of weight should be uniform and homogenous to stabilize the balloon at a given height. So, a symmetric arrangement of components is desired.
### _The balloon sheet_
The frame of the balloon's arial sheet, can be of circular or hexagonal shape, appropriate to the aerodynamic balancing aspects [9]. The material used can be a thin rigid polymer as proposed in [10] and [11] or water resistant polymers as in [12]. Light weight composites are preferred.
### _Components and Hardware_
The Li-Fi components used are shown in Table II. From Table II, we have the total payload per balloon as \\(6.6\\)kg. The dimensions in Table I suggest that the balloon will be
Figure 1: This figure is to a scale of 1:30. The front and top views of the Li-Fi balloon have been shown. All dimensions are in centimetres (cm) as given in Table I. The balloon’s aerial structure can be of any shape depending on the aerodynamical aspects. In the top view, the dashed circular boundary depicts the base of the balloon. There are four shaded regions on the base, on each of which a Li-Fi LED and an uplink receiver are placed. The space in the centre of the base is reserved for the LED drivers, constant power source and signal processing modem. For inter-balloon communication, at the corners of the base, four wireless nodes are placed (filled regions), with beamforming capability.
closely packed. We now discuss the use of each component in Table II. The Philips _Luxspace DN561B_ is a Li-Fi downlink transmitter and _LBRD14016-3_ is an uplink infrared receiver. These are used for illumination and data access with various emission properties [13]. In [14], these components have been used to characterize outage in an indoor Li-Fi environment. The _LBRD1514-1_ is a set of LED power drivers which are an integral part for intensity modulation (IM) purposes. The modem board acts as a central switch for all the LEDs and for IM. The modem board, drivers and the DC power supply can be appropriately fitted in the central space provided. The sensors will help in stabilizing the position and the inclination of the balloon. The wireless nodes will be useful to form an interconnected wireless balloon network.
## III The LiBNet
A network of Li-Fi balloons is called LiBNet. Each balloon, using the wireless nodes, forms an interconnected wireless network. The balloons can be assumed to be arranged as a Poisson point process1, as in [15], in both one and two dimensions which is shown in Fig. 2 and Fig. 3 respectively. For any arrangement, we consider all the balloons to be stabilized at a uniform height \\(h\\) and have an effective2 Lambertian emission order \\(m=\\frac{-\\ln(2)}{\\ln(\\cos(\\theta_{h}))}\\). \\(\\theta_{h}\\) is the effective half-power-semi-angle (HPSA) of the balloon. The trapped victim is assumed to be at a distance \\(z\\) from the tagged balloon, which is nearest to the victim.
Footnote 1: This assumption gives the worst case co-channel interference in the network.
Footnote 2: Effective, because each balloon has four downlink LEDs.
### _How does the LiBNet work?_
Each balloon with the necessary Li-Fi equipment will cover a certain affected region via illumination. Uplink signatures from the trapped lives, can be received and downlink broadcast can be done. The sensing of uplink signatures can be done efficiently by using machine learning as in [16] for cognitive radio networks. All the balloons will be interconnected wirelessly and linked to a fusion centre (FC) which is in-turn connected to the Internet. The uplink and downlink data will be routed through the network and through the FC as a gateway to the Internet. This model can be assumed similar to a wireless sensor network (WSN) in a cooperative sensing environment [17]. Now, various routing protocols can be used or devised [18, 19] to route the data efficiently. So, such a network of Li-Fi balloons can be used anywhere in the affected region. Their interconnected network can be used to track the lives of many and save them immediately.
### _The mean co-channel interference_
Let all the points in the given balloon network be arranged as a Poisson point process \\(\\psi\\), in both one and two dimensions. Given an origin, let the balloons be at a distance \\(x\\) from the origin. The user photodiode (PD) is at a distance of \\(z\\) from the origin. Let the tagged balloon be at origin i.e. \\(x=0\\). Now, assuming a wavelength reuse factor of unity, all other balloons become co-channel interferers. We now derive the mean co-channel interference in the Poisson network of balloons with the assumptions in [20]. For a distance \\(D_{x}\\) between the PD and the balloon, the Signal-to-Interference-plus-Noise-Ratio (SINR) for such a network is given as
\\[\\gamma(z)=\\frac{(z^{2}+h^{2})^{-m-3}\\rho(D_{0})}{\\sum_{x\\in\\psi \\setminus 0}((x+z)^{2}+h^{2})^{-m-3}\\rho(D_{x})+\\Omega}, \\tag{1}\\]
where, \\(\\rho(D_{x})\\) is the field-of-view (FOV) \\(\\theta_{f}\\) constraint function of the PD used by the trapped victim, which is defined as
\\[\\rho(D_{x})=\\left\\{\\begin{array}{cc}1,&|D_{x}|\\leq h\\tan(\\theta_{f}),\\\\ 0,&|D_{x}|>h\\tan(\\theta_{f}).\\end{array}\\right.\\]
The co-channel interference \\(\\mathcal{I}_{x}\\) is
\\[\\mathcal{I}_{x}=\\sum_{x\\in\\psi\\setminus 0}f(x),\\]
Figure 3: _(Two dimension model)_ This figure shows the two dimension balloon network. The balloons can be assumed to be arranged deterministically or as a Poisson point process \\(\\psi\\), stabilized at a height \\(h\\) (black circular dots). The rectangular dotted regions on ground depict the attocells corresponding to each balloon above. The trapped victim (small cuboid) at \\((d_{x},d_{y},0)\\) (inside one of the attocell), receives data wirelessly from the tagged-balloon corresponding to the attocell in which it is located. Here, that attocell is highlighted as dash-dot. All other balloons become the co-channel interferers. Here we assume that the victim can be trapped anywhere on the ground plane. Every balloon is interconnected with each other wirelessly.
Figure 2: _(One dimension model)_ This figure shows the one dimension balloon network. The balloons can be assumed to be arranged deterministically or as a Poisson point process \\(\\psi\\), stabilized at a height \\(h\\) (black circular dots). The rectangular dotted regions on ground depict the attocells corresponding to each balloon above. The trapped victim (small cuboid) at \\((z,0)\\) (inside one of the attocell), receives data wirelessly from the tagged-balloon corresponding to the attocell in which he/she is located. Here, that attocell is highlighted as dash-dot. All other balloons become the co-channel interferers. Here, we assume that the victim can be trapped only along the thick line on ground. Every balloon is interconnected with each other wirelessly.
where \\(f(x)\\) is given as
\\[((x+z)^{2}+h^{2})^{-m-3}\\rho(D_{x}).\\]
Hence, we state the theorem for mean interference below.
**Theorem 1**.: _Consider a photodiode, with FOV \\(\\theta_{f}\\) radians, situated at a distance \\(z\\) (inside an attocell) from the origin, in a Poisson distributed balloons of Li-Fi LEDs, emitting light with an effective Lambertian emission order \\(m\\), installed at a height \\(h\\). Then, for a wavelength reuse factor of unity, the mean co-channel interference caused by the Poisson distributed co-channel interferers of intensity \\(\\lambda(x)\\) at the photodiode is_
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\int_{\\mathbb{S}}\\lambda(x)f(x)\\mathrm{d}x,\\]
_where, \\(\\mathbb{E}(.)\\) is the expectation operator over all possible random arrangements in the Poisson point process and \\(\\mathbb{S}\\) is the support of the integration._
Proof.: The proof is provided in Appendix A.
We now extend Thm. 1 to one and two dimension LiBNets in the following Lem. 1 and 2 respectively.
**Lemma 1**.: _The mean co-channel interference in a one dimension LiBNet with homogenously distributed Poisson point balloons is given as_
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\lambda\\bigg{(} h^{1-2\\beta}\\tan(\\theta_{f})_{2}F_{1}\\bigg{(}0.5,\\beta;1.5;-\\tan^{2}( \\theta_{f})\\bigg{)}\\] \\[-zh^{-2\\beta}{}_{2}F_{1}\\bigg{(}0.5,\\beta;1.5;-\\frac{z^{2}}{h^{2} }\\bigg{)}\\bigg{)},\\]
_where, \\({}_{2}F_{1}(.;.;.)\\) is the ordinary hypergeometric function and \\(\\beta=m+3\\)._
Proof.: From Thm. 1, with the support \\(\\mathbb{S}=[z,h\\tan(\\theta_{f})]\\) we have
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\int_{z}^{h\\tan(\\theta_{f})}\\lambda(x)f(x) \\mathrm{d}x, \\tag{2}\\]
because, the interferers are present only after a distance \\(z\\) limited by the FOV of the PD. Now, for a one dimension network
\\[f(x)=\\frac{1}{(x^{2}+h^{2})^{\\beta}}\\]
and \\(\\lambda(x)=\\lambda\\) for a homogenous point process. Hence, integrating the same in (2) derives the result.
**Lemma 2**.: _The mean co-channel interference in a two dimension LiBNet homogenously distributed Poisson point balloons is given as_
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\lambda\\bigg{(}\\frac{\\pi(h^{2}+z^{2})^{1-\\beta}}{ \\beta-1}-\\frac{\\pi h^{2-2\\beta}\\cos^{2\\beta-2}(\\theta_{f})}{\\beta-1}\\bigg{)}.\\]
_where, \\(\\beta=m+3\\)._
Proof.: For a two dimension homogenous Poisson point process of balloons we have \\(\\lambda(x)=\\lambda\\) and \\(f(x,y)=\\frac{1}{(x^{2}+y^{2}+h^{2})}\\). So, from Thm. 1, we can write
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\int\\int_{\\mathbb{S}}\\lambda\\frac{1}{(x^{2}+y^{2} +h^{2})}\\mathrm{d}x\\mathrm{d}y.\\]
Converting to polar coordinates \\((r,\\theta)\\), we have
\\[\\mathbb{E}(\\mathcal{I}_{x})=2\\pi\\int_{z}^{h\\tan(\\theta_{f})}\\frac{1}{(r^{2}+h ^{2})}\\mathrm{d}r.\\]
Integrating further, derives the result.
## IV Conclusion
In this paper, a novel idea of using balloons for Li-Fi has been proposed. A disaster or a natural calamity has been taken as a use case. An overview of the physical design of the balloon has been given. Given the payload, the installation of the communication components has been proposed using the Philips Li-Fi equipments. Symmetry of installation has been kept in mind. Further, the functioning of a network of such Li-Fi balloons has been discussed and coined an acronym of LiBNet. The mean co-channel interference in both one and two dimension LiBNets has been derived, assuming the balloons are arranged as a homogenous Poisson point process. Given the problems of contemporary interest, the possibilities of future work are extensive. Li-Fi, being a safe and high speed alternative to RF emergency communication set-ups, this concept, if implemented can save the lives of many. Tracking and positioning of people / pets becomes easier. But, Li-Fi, requiring too much line of sight, will have to co-exist with blockages. Hence an extension to a blockage model is desired. Also, as a future work, this concept can be further standardised and practically implemented.
## References
* [1] H. Haas, L. Yin, Y. Wang, and C. Chen, \"What is LiFi?\" _Journal of Lightwave Technology_, vol. 34, no. 6, pp. 1533-1544, 2016.
* [2] F. R. Gellef and U. Bapst, \"Wireless In-House Data Communication via Diffuse Infrared Radiation,\" _Proceedings of the IEEE_, vol. 67, no. 11, pp. 1474-1486, 1979.
* [3] J. R. Barry, J. M. Kahn, E. A. Lee, and D. G. Messerschmitt, \"High-Speed Nondirective Optical Communication for Wireless Networks,\" _IEEE Network_, vol. 5, no. 6, pp. 44-54, 1991.
* [4] R. C. Coile, \"The Role of Amateur Radio in Providing Emergency Electronic Communication for Disaster Management,\" _Disaster Prevention and Management: An International Journal_, vol. 6, no. 3, pp. 176-185, 1997.
* [5] E. Hossain, D. Niyato, and Z. Han, _Dynamic Spectrum Access and Management in Cognitive Radio Networks_. Cambridge university press, 2009.
* [6] A. Ghassemi, S. Bavarian, and L. Lampe, \"Cognitive Radio for Smart Grid Communications,\" in _Smart Grid Communications (SmartGridComm), 2010 First IEEE International Conference on_. IEEE, 2010, pp. 297-302.
* [7] H. Haas, \"LiFi is a Paradigm-Shifting 5G Technology,\" _Reviews in Physics_, 2017.
* [8] H. Von Blucher, H. Von Blucher, and E. De Ruiter, \"Waterproof and Moisture-Conducting Fabric Coated with Hydrophilic Polymer,\" Jun. 12 1984, uS Patent 4,454,191.
* [9] K. Kannani and C. Suratkar, \"A Review Paper on Google Loon Technique,\" _International Journal of Research In Science & Engineering_, vol. 1, no. 1, pp. 167-171, 2015.
* [10] P. Podsiadlo, A. K. Kaushik, E. M. Arruda, A. M. Waas, B. S. Shim, J. Xu, H. Nandivada, B. G. Pumplin, J. Lahann, A. Ramamoorthy _et al._, \"Ultrarong and Stiff Layered Polymer Nanocomposites,\" _Science_, vol. 318, no. 5847, pp. 80-83, 2007.
* [11] G. G. Tibbetts, M. L. Lake, K. L. Strong, and B. P. Rice, \"A Review of the Fabrication and Properties of Vapor-Grown Carbon Nanofiber/Polymer Composites,\" _Composites Science and Technology_, vol. 67, no. 7, pp. 1709-1718, 2007.
* [12] J.-W. Rhim, \"Physical and Mechanical Properties of Water Resistant Sodium Alginate Films,\" _LWT-Food Science and Technology_, vol. 37, no. 3, pp. 323-330, 2004.
* [13] P. Lighting, _Lighting manual_. Phillips Lighting Company, 1981.
* [14] A. Surampudi, S. S. Chapalgaonkar, and P. Arumugam, \"Experimental Tests for Outage Analysis in SISO Li-Fi Indoor Communication Environment,\" _Proceedings of the Asia-Pacific Advanced Network_, vol. 44, pp. 54-60, 2017.
* [15] M. M. Azari, Y. Murillo, O. Amin, F. Rosas, M.-S. Alouini, and S. Pollin, \"Coverage Maximization for a Poisson Field of Drone Cells,\" _arXiv preprint arXiv:1708.06598_, 2017.
* [16] A. Surampudi and K. Kalimuthu, \"An Adaptive Decision Threshold Scheme for the Matched Filter Method of Spectrum Sensing in Cognitive Radio using Artificial Neural Networks,\" in _Information Processing (IICIP), 2016 1st India International Conference on_. IEEE, 2016, pp. 1-5.
* [17] J. Yick, B. Mukherjee, and D. Ghosal, \"Wireless Sensor Network Survey,\" _Computer Networks_, vol. 52, no. 12, pp. 2292-2330, 2008.
* [18] A. Surampudi and K. Kalimuthu, \"An Energy Efficient Spectrum Sensing in Cognitive Radio Wireless Sensor Networks,\" _arXiv preprint arXiv:1711.09255_, 2017.
* [19] K. Akkaya and M. Younis, \"A Survey on Routing Protocols for Wireless Sensor Networks,\" _Ad-Hoc Networks_, vol. 3, no. 3, pp. 325-349, 2005.
* [20] C. Chen, S. Videv, D. Tsonev, and H. Haas, \"Fractional Frequency Reuse in DCO-OFDM-Based Optical Atocell Networks,\" _Journal of Lightwave Technology_, vol. 33, no. 19, pp. 3986-4000, 2015.
## Appendix A Proof of Mean Interference
Proof.: The Laplace transform of the Interference \\(\\mathcal{I}_{x}\\) is given as
\\[\\mathcal{L}_{\\mathcal{I}_{x}}(s) =\\mathbb{E}_{\\psi}(e^{-s\\mathcal{I}_{x}}),\\] \\[=\\mathbb{E}_{\\psi}(e^{-s\\sum_{x\\in\\psi\\setminus 0}f(x)}),\\] \\[=\\mathbb{E}_{\\psi}\\bigg{(}\\prod_{x\\in\\psi\\setminus 0}e^{-sf(x)} \\bigg{)},\\] \\[\\stackrel{{(a)}}{{=}}e^{-\\lambda(x)\\int_{\\mathbb{R }}(1-e^{-sf(x)})\\mathrm{d}x}, \\tag{3}\\]
where, (a) follows from the definition of expectation operator. Now, taking the \\(n^{th}\\) order derivative on L.H.S of (3) w.r.t \\(s\\), we have
\\[(-1)^{n}\\frac{\\partial^{n}}{\\partial s^{n}}(\\mathbb{E}(e^{-s\\mathcal{I}_{x}} ))=\\mathbb{E}(\\mathcal{I}_{x}^{n}e^{-s\\mathcal{I}_{x}}).\\]
Using the result in (3), we can write for \\(n=1\\) as
\\[\\mathbb{E}(\\mathcal{I}_{x}e^{-s\\mathcal{I}_{x}}) =(-1)e^{-\\lambda(x)\\int_{\\mathbb{R}}(1-e^{-sf(x)})\\mathrm{d}x}\\] \\[\\qquad(-1)\\big{(}\\lambda(x)\\int_{\\mathbb{R}}(-1)(-f(x))\\mathrm{d} x\\big{)}. \\tag{4}\\]
Substituting \\(s=0\\) in (4) we have
\\[\\mathbb{E}(\\mathcal{I}_{x})=\\int_{\\mathbb{R}}\\lambda(x)f(x)\\mathrm{d}x,\\]
proving the theorem. | Natural calamities and disasters disrupt the conventional communication setups and the wireless bandwidth becomes constrained. A safe and cost effective solution for communication and data access in such scenarios is long needed. Light-Fidelity (Li-Fi) which promises wireless access of data at high speeds using visible light can be a good option. Visible light being safe to use for wireless access in such affected environments, also provides illumination. Importantly, when a Li-Fi unit is attached to an air balloon and a network of such Li-Fi balloons are coordinated to form a Li-Fi balloon network, data can be accessed anytime and anywhere required and hence many lives can be tracked and saved. We propose this idea of a Li-Fi balloon and give an overview of it's design using the Phillips Li-Fi hardware. Further, we propose the concept of a balloon network and coin it with an acronym, the LiBNet. We consider the balloons to be arranged as a homogenous Poisson point process in the LiBNet and we derive the mean co-channel interference for such an arrangement.
_Index Terms-_ Attocell dimension, interference, Li-Fi, light emitting diode, photodiode, rate, time division multiple access. | Give a concise overview of the text below. | 237 |
X. Geng\\({}^{\\,\\rm h,\\rm t}\\)
Q. Xu\\({}^{\\,\\rm a}\\)
C.Z. Lan\\({}^{\\,\\rm t}\\)
S. Xing\\({}^{\\,\\rm t}\\)
\\({}^{\\,\\rm a}\\) Zhengzhou Institute of Surveying and Mapping, 450052 Zhengzhou, China \\(-\\) [email protected] \\({}^{\\,\\rm b}\\) Xi'an Information Technique Institute of Surveying and Mapping, 710054 Xi'an, China
KEY WORDS: Mars Mapping, Pixel-level Image Matching, Approximate Orthophoto, Linear Array CCD, Back-projection
2. Insufficient feature points. Feature-based matching methods such as the Scale-Invariant Feature Transform (SIFT) are more robust than area-based matching methods. Unfortunately, there are insufficient feature points on the Martian surface. Although a small number of feature points can be extracted and used as tie points for bundle adjustment, the point density is not sufficient for surface reconstruction.
3. Poor image quality. Many factors, such as the imaging instrument, the atmospheric environment and the illumination conditions, affect image quality. In this respect, Martian surface images perform worse than Earth observation images.
4. Repetitive pattern. Repetitive patterns are very common on the Martian surface and result in wrongly matched points, especially when an area-based matching method is used. To address this issue, constraint conditions such as epipolar geometry need to be introduced. Meanwhile, precise approximate values of the conjugate points are required.
### Advantages
On the other hand, compared to Earth observation images, Martian surface images show some advantages for image matching. Obviously, there are no trees or rivers on the Martian surface, and moving objects such as cars cannot appear there either. Additionally, the occlusions caused by tall buildings and trees in Earth observation images do not occur in Martian surface images. In short, in terms of terrain continuity, Martian surface images perform better than Earth observation images, and our image matching method makes full use of this characteristic.
## 3 Pixel-Level Image Matching Method for Mars Mapping
### Basic Principle
A large number of image matching methods have been proposed in the literature, and practical strategies such as hierarchical matching can be used for Martian surface image matching as well. In recent years, the Semi-Global Matching (SGM) method proposed by Hirschmuller (2008) has achieved great success, and SGM was also used to process Mars HRSC images (Hirschmuller, 2006). However, pixel-level image matching performed with SGM still consumes a lot of processing time, usually several hours for a single orbit. In order to generate high resolution Mars DEMs efficiently, a targeted image matching method is still required.
Here, we point out that when image matching is performed for DEM generation, the interior orientation (IO) and exterior orientation (EO) data are usually accurately known. Thus, given a stereopair, two orthophotos can be generated through geometric rectification with reference DEM. If the reference DEM is accurate enough, the coordinate displacements of conjugate points at the overlapping area of the two orthophotos will be very small. This interesting and useful information can be used to estimate the approximate value of conjugate points.
In order to introduce the basic principle of our method in detail, image matching on the third pyramid level of the HRSC Level-3 images was performed with SIFT, and the results are presented in Figure 2. The HRSC Level-3 images are generated through geometric rectification with a rough DEM and can be regarded as approximate orthophotos. It is observed that most of the pixel coordinate displacements of conjugate points are less than two pixels in the \(X\) and \(Y\) directions. Point 9 (green cross hair) shows the maximum displacement, which is still less than four pixels. It is noted that point 9 is located in a mountainous area, which implies that an inaccurate DEM introduces large coordinate displacements in the overlapping area of two approximate orthophotos. Furthermore, it can be inferred that when image matching is performed at the fourth pyramid level, the pixel coordinate displacements of conjugate points will be less than two pixels. Therefore, at the fourth pyramid level, using the identical pixel coordinates on the left image as approximate values, the conjugate point on the right image can be determined with a 5×5 search window.
### Data Processing Chain
Figure 1: The histogram of HRSC Level-2 images.

Figure 2: Image matching results on the third pyramid level of HRSC Level-3 images using the SIFT method.

The main strategies of our pixel-level image matching method include: (1) pixel-level image matching using the normalized cross-correlation coefficient (NCC) on approximate orthophotos (a minimal NCC sketch is given below); (2) estimating approximate values of conjugate points by using the ground point coordinates of the orthophotos; (3) hierarchical image matching and DEM generation at each pyramid level; (4) using the DEM generated at the current pyramid level as reference data for orthophoto generation at the next pyramid level; and (5) fast coordinate transformation from 3D ground points to 2D image points.
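For intuition, a minimal Python sketch of the NCC matching in strategy (1) follows. The window sizes default to the 9×9 matching window and 5×5 search window discussed in Section 3.1, and all function names are illustrative rather than part of our implementation.

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_pixel(left, right, row, col, half_match=4, half_search=2):
    """Search a (2*half_search+1)^2 window on the right orthophoto around the
    predicted position (row, col) and return the shift with the highest NCC."""
    tpl = left[row - half_match:row + half_match + 1,
               col - half_match:col + half_match + 1]
    best, best_shift = -1.0, (0, 0)
    for dr in range(-half_search, half_search + 1):
        for dc in range(-half_search, half_search + 1):
            cand = right[row + dr - half_match:row + dr + half_match + 1,
                         col + dc - half_match:col + dc + half_match + 1]
            score = ncc(tpl, cand)
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best  # reject the match if best < NCC threshold
```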
Since the HRSC images are suitable for the generation of a Mars global DEM, the pixel-level image matching method in this paper is designed for HRSC images. Indeed, the method can also be used to process other orbital images such as MOC and HiRISE. The procedure of the pixel-level image matching method for Mars mapping is illustrated in Figure 3.
The detailed processing steps are as follows.
1. The Integrated Software for Imagers and Spectrometers (ISIS) developed by the USGS is used to perform data pre-processing. The HRSC stereo images in raw PDS format are imported into ISIS using 'hrsc2isis', and the SPICE kernels related to the images are determined with 'spiceinit'.
2. Data pre-processing such as contrast enhancement and image pyramids generation is performed. The optional bundle adjustment process is carried out with 'jigsaw'.
3. The interior orientation (IO) data such as pixel size, focal length and initial exterior orientation (EO) data are extracted from SPICE kernels. The scan line exposure time information is acquired with 'tabledump'. Consequently, the rigorous geometric model for HRSC pushbroom images is constructed.
4. The DEM is generated or refined in an iterative way. At the lowest pyramid level, the approximate orthophotos are generated using the MOLA DEM, and the ground point coordinates are used as approximate values of conjugate points. The conjugate points matched on the orthophotos are back-projected to the original images, and 3D ground points are then generated through forward intersection. The forward intersection residuals are used to eliminate wrongly matched points. The DEM generated at the current pyramid level is used as reference data to generate approximate orthophotos at the next pyramid level. In order to match more points with the pixel-level matching method, a small NCC threshold is used. (This iterative loop is sketched in code after the list.)
5. Through hierarchical image matching, the generated DEM becomes more and more accurate. Finally, the grid spacing of the final DEM product can reach the pixel level.
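The iterative loop of steps 4-5 can be condensed as follows. This is only a sketch: every helper named here is a placeholder for an operation described above, not an actual API.

```python
def build_dem(stereo_pair, mola_dem, n_levels=4):
    """Coarse-to-fine DEM refinement (steps 4-5); all helpers are placeholders."""
    ref_dem = mola_dem                       # coarsest reference: MOLA DEM
    for level in range(n_levels, -1, -1):    # pyramid level n_levels .. 0 (full res)
        ortho_l, ortho_r = generate_orthophotos(stereo_pair, ref_dem, level)
        matches = match_all_pixels(ortho_l, ortho_r)           # pixel-level NCC
        img_pts = back_project(matches, ref_dem, stereo_pair)  # to original images
        pts3d, residuals = forward_intersect(img_pts, stereo_pair)
        pts3d = pts3d[residuals < 2.0 * gsd(level)]            # drop outliers
        ref_dem = interpolate_dem(pts3d, grid=gsd(level))      # refined reference
    return ref_dem
```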
### Image Matching Strategies
#### 3.3.1 Image Matching on Approximate Orthophotos:
As illustrated in the top plot of Figure 4, because of the different viewing angles and image scales, it is difficult to perform image matching on the original HRSC Level-2 images. Using the IO, EO and a rough DEM, approximate orthophotos can be generated; here, 'approximate' means that the DEM used for geometric rectification is not accurate enough. As shown in the bottom plot of Figure 4, the image distortions are removed through geometric rectification. Moreover, the stereo images are resampled with an identical Ground Sample Distance (GSD). Therefore, matching on approximate orthophotos helps to improve the success rate and accuracy. Furthermore, there is no need to extract feature points, because image matching is performed at the pixel level at each pyramid level.
Figure 3: The pixel-level image matching and DEM generation procedure.
#### 3.3.2 Estimating Approximate Value of Conjugate Points:
Traditional image matching methods use constraint conditions such as epipolar lines or affine transformations to estimate or predict the approximate values of conjugate points. With our matching strategy, no complicated technique is needed: as presented in Section 3.1, the ground point coordinates of the orthophotos can be used directly to estimate the approximate values of conjugate points. Because image matching is performed on orthophotos, the approximate values of conjugate points are determined with an accuracy of several pixels, depending on the resolution and accuracy of the reference DEM used in the geometric rectification process.
Given an image point \\(i\\) on the left image, the pixel coordinates of point \\(i\\) are \\((m,n)\\) and the 2D ground point coordinates of point \\(i\\) are \\((X,Y)\\). Thus, \\((X,Y)\\) can be calculated with equation 1.
\\[\\begin{array}{l}X=X_{0}+m\\times dX\\\\ Y=Y_{0}+n\\times dY\\end{array} \\tag{1}\\]
where \\((X_{0},Y_{0})\\) are the left bottom corner point coordinates of the orthophoto, \\(dX\\) and \\(dY\\) are the pixel resolution in \\(X\\) and \\(Y\\) directions respectively. The pixel coordinates \\((m^{\\prime},n^{\\prime})\\) of the conjugate point \\(i^{\\prime}\\) on the right image can be estimated with equation 2.
\[\begin{array}{l}m^{\prime}=(X-X_{0})/dX\\ n^{\prime}=(Y-Y_{0})/dY\end{array} \tag{2}\]
Obviously, the point estimation accuracy is influenced by the reference DEM and EO used in the geometric rectification process, assuming that the IO is accurate enough.
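Equations 1 and 2 amount to two tiny coordinate helpers. A Python sketch follows, where each orthophoto carries its own corner coordinates and pixel resolution; the function names are illustrative.

```python
def pixel_to_ground(m, n, x0, y0, dx, dy):
    """Equation 1: pixel (m, n) on an orthophoto -> 2D ground coordinates."""
    return x0 + m * dx, y0 + n * dy

def ground_to_pixel(x, y, x0, y0, dx, dy):
    """Equation 2: ground coordinates -> predicted pixel on the other orthophoto."""
    return (x - x0) / dx, (y - y0) / dy
```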
#### 3.3.3 Hierarchical Matching:
Hierarchical matching is widely used in most of the image matching algorithms. Similarly, our method adopts this practical matching strategy as well. At each image pyramid level, after the pixel-level image matching is completed, the DEM is generated and used as reference data for the geometric rectification at the next pyramid level.
Suppose the original resolution of the HRSC Level-2 images is 25 m and image pyramids are generated with four levels. Then the image resolution at pyramid levels 1-4 is 50 m, 100 m, 200 m and 400 m respectively. At the highest pyramid level, pixel-level image matching can generate a DEM with a grid spacing of 400 m, which is still finer than that of the MOLA DEM. The DEM derived at the fourth pyramid level is used as reference data to generate orthophotos at the third pyramid level. Consequently, through hierarchical image matching, the generated DEM is refined iteratively, and the prediction of conjugate points becomes more and more accurate.
### Back-projection of Conjugate Points
Figure 5 illustrates the matching results on orthophotos and original images. The coordinates transformation from orthophoto to original image is indeed a back-projection process. Given an image point \\(\\mathbf{p}\\) on orthophoto, the corresponding 3D ground point coordinates are calculated with the DEM used in the geometric rectification process. Through back-projection, the conjugate point \\(\\mathbf{p}^{\\prime}\\) on the original image is determined.
As shown in Figure 6, in order to perform back-projection for pushbroom images, the best scan line for ground point \\(P\\) must be determined. This requires that the rigorous geometric model for pushbroom images is constructed. Due to the special imaging principle of pushbroom images, each scan line has six exterior orientation elements. The rigorous geometric model is constructed with the extended collinear equation for pushbroom images, and the detailed expressions are given in equation 3.
\[x=-f\frac{a_{1}^{j}(X-X_{S}^{j})+b_{1}^{j}(Y-Y_{S}^{j})+c_{1}^{j}(Z-Z_{S}^{j})}{a_{3}^{j}(X-X_{S}^{j})+b_{3}^{j}(Y-Y_{S}^{j})+c_{3}^{j}(Z-Z_{S}^{j})}\]
\[y=-f\frac{a_{2}^{j}(X-X_{S}^{j})+b_{2}^{j}(Y-Y_{S}^{j})+c_{2}^{j}(Z-Z_{S}^{j})}{a_{3}^{j}(X-X_{S}^{j})+b_{3}^{j}(Y-Y_{S}^{j})+c_{3}^{j}(Z-Z_{S}^{j})} \tag{3}\]

where \(f\) is the focal length, \((X,Y,Z)\) are the ground point coordinates, \((X_{S}^{j},Y_{S}^{j},Z_{S}^{j})\) are the perspective center coordinates of scan line \(j\), and \(a_{1}^{j},\ldots,c_{3}^{j}\) are the elements of the rotation matrix of scan line \(j\). For a pushbroom sensor, the along-track coordinate \(x\) equals zero on the best scan line.
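Because the along-track coordinate \(x(j)\) varies monotonically with the scan line index along the orbit, the best scan line can be found by bisection (cf. Geng et al., 2013; Wang et al., 2009). The Python sketch below assumes per-line EO stored as (rotation matrix, perspective center) pairs; the `FOCAL` constant and all names are illustrative assumptions, not our exact implementation.

```python
import numpy as np

FOCAL = 175.0  # nominal HRSC focal length in mm (illustrative value)

def along_track_residual(P, eo):
    """x image coordinate of ground point P under one scan line's EO (equation 3);
    it vanishes when P lies on that scan line's viewing plane."""
    R, S = eo                       # R: 3x3 rotation matrix, S: perspective center
    u = R @ (np.asarray(P, dtype=float) - S)
    return -FOCAL * u[0] / u[2]

def best_scan_line(P, eo_per_line, lo, hi):
    """Bisection over scan lines, assuming x(j) changes sign once in [lo, hi]."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if along_track_residual(P, eo_per_line[lo]) * \
           along_track_residual(P, eo_per_line[mid]) <= 0:
            hi = mid
        else:
            lo = mid
    # return the line whose residual is closer to zero
    if abs(along_track_residual(P, eo_per_line[lo])) <= \
       abs(along_track_residual(P, eo_per_line[hi])):
        return lo
    return hi
```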
### DEM Generation
#### 3.5.1 Forward Intersection for Pushbroom Images:
Using the matched points on the original stereo images, 3D ground points are generated through forward intersection. Compared to frame images, forward intersection for pushbroom images is more complicated; the detailed processing procedure is illustrated in Figure 7. Assume that \(p_{1}\) and \(p_{2}\) are a pair of conjugate points, \((i_{1},j_{1})\) and \((i_{2},j_{2})\) are their pixel coordinates, \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are their focal plane coordinates, \((x_{1},y_{1},-f_{1})\) and \((x_{2},y_{2},-f_{2})\) are their image space coordinates, and \((X,Y,Z)\) are the 3D ground point coordinates. Firstly, the pixel coordinates are converted to focal plane coordinates by using the transformation obtained from the SPICE kernels. Secondly, the exact exterior orientation elements EO1 and EO2 for scan lines \(j_{1}\) and \(j_{2}\) are interpolated from the EO data. Then, the focal plane coordinates are converted to image space coordinates. At last, the ground point \((X,Y,Z)\) is calculated with the extended collinear equation. It is noteworthy that the ground points are defined in body-fixed Martian Cartesian coordinates; in order to generate a DEM, map projection and DEM interpolation are required.
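For intuition, the sketch below triangulates one ground point from two viewing rays by least squares. It is a simplified stand-in for the rigorous adjustment with four observation equations and three unknowns described above; the ray-based formulation and all names are our assumptions.

```python
import numpy as np

def intersect_rays(S1, d1, S2, d2):
    """Least-squares ground point for two viewing rays P = S_k + t_k * d_k.
    Returns the midpoint of the shortest connecting segment and its half-length,
    which serves as the forward-intersection residual."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)              # 3x2 system [d1 -d2] t = S2 - S1
    t, *_ = np.linalg.lstsq(A, S2 - S1, rcond=None)
    p1, p2 = S1 + t[0] * d1, S2 + t[1] * d2      # closest points on each ray
    return (p1 + p2) / 2.0, np.linalg.norm(p1 - p2) / 2.0
```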
#### 3.5.2 Wrong Matched Points Elimination
Wrongly matched points cannot be completely avoided, so it is necessary to detect and eliminate the outliers. Here, we use the residuals of forward intersection to eliminate the wrongly matched points. Given a pair of conjugate points on stereo images, the forward intersection has three unknowns and four observation equations; consequently, the residuals of the ground point coordinates can be calculated. Assuming that the IO and EO are accurate enough, the residuals are determined by the image matching accuracy. As shown in Figure 8, image points \(p_{1}\) and \(p_{2}\) are a pair of conjugate points, and \(P\) is the corresponding ground point. If the image point \(p_{2}^{*}\) is wrongly determined as the conjugate point of \(p_{1}\), the forward intersection delivers large residuals. Therefore, a threshold such as two GSD is applied at each pyramid level to eliminate wrongly matched points.
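With the residual returned by the forward intersection, the rejection step reduces to a one-line filter; a sketch with illustrative names:

```python
import numpy as np

def filter_matches(points3d, residuals, gsd):
    """Keep only points whose forward-intersection residual is below 2 x GSD."""
    keep = np.asarray(residuals) < 2.0 * gsd
    return np.asarray(points3d)[keep], keep
```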
## 4 Experiment Results
The software development for image matching and geometric rectification was carried out using Visual Studio 2013 and Qt 5.4.2 on Windows 7 platform. The rigorous geometric model and forward intersection for pushbroom images are implemented based on the open source photogrammetric software DGAP (Stallman, 2008). The tests were performed on a laptop with Intel Core i5 CPU and 8 GB RAM capacity.
### Test Datasets
Two orbits of HRSC Level-2 images are used to test our method. SPICE kernels and images are first ingested into ISIS. Then, image pyramids are generated with four levels using bi-cubic interpolation. The Level-2 images are orthorectified using an equirectangular projection, and the Martian reference datum is defined as a sphere with a radius of 3396.19 km. The test dataset information is listed in Table 1. The image widths of orbit 5273 and orbit 5124 are 5176 and 2584 pixels respectively, and each of the two orbits has more than 16000 scanlines. The orbit 5273 images are located at Gale crater, which covers the landing area of Curiosity. The search window size is 3×5 and the match window size is 9×9. Stereo matching is performed between the S1 and S2 images, and the processing times of the image matching process are measured. It is noteworthy that forward intersection and DEM interpolation with large-scale point clouds also require considerable processing time.
### Experiment Results
**Point Prediction Accuracy:** One main advantage of our method is using the ground point coordinates of orthophotos to predict the approximate values of conjugate points. The pixel coordinate displacements between the predicted values and the matched values are calculated, and the results are illustrated in Figure 9. It is observed that at the original image resolution, the pixel coordinate displacements are less than two pixels. This indicates that our method provides very precise approximate values of conjugate points.
Table 1: Information of test datasets.

| Orbit | Location | GSD (m/pixel) | Size (MB) | Matching Time (h) |
| --- | --- | --- | --- | --- |
| 5273 | 135.8°E-138.4°E, 8.5°S-2.1°S | 25 | 186 | 7.5 |
| 5124 | 74.9°W-73.6°W, 6.2°N-13.1°N | 27 | 84 | 3.2 |
Figure 7: The processing procedure of forward intersection for pushbroom images.

Figure 8: Using forward intersection residuals to eliminate wrongly matched points.
#### 4.2.3 DEM Accuracy Verification
The height accuracy of the generated DEM is compared to the HRSC Level-4 DEM product, and the results are shown in Figure 14. It is observed that the root mean square errors (RMSE) of the height displacements are 60.3 m for orbit 5273 and 33.7 m for orbit 5124 respectively. The results obviously show systematic errors, which may be caused by inaccurate SPICE kernels. Indeed, the systematic errors can be easily eliminated by using several GCPs collected from the HRSC Level-4 DOM and DEM. Therefore, the experiment results demonstrate that the DEM generated with our method shows good consistency with the HRSC Level-4 product.
### Discussions
Our image matching method has the advantage that the a priori knowledge at each pyramid level is utilized to the greatest extent. In addition, the matching strategies take account of the terrain continuity characteristic and are especially suitable for Martian surface image matching. However, the generated DEM exhibits some deficiencies: in low-contrast areas, area-based matching may fail, resulting in regions without matched points.
## 5 Conclusion
The objective of this paper is to generate a higher resolution Mars DEM through stereo photogrammetry, and a targeted pixel-level image matching method for Mars mapping is proposed. The experiment was carried out using only the S1 and S2 images; indeed, more stereo viewing angles, such as the nadir, P1 and P2 images, can be introduced to improve the geometric accuracy. Obviously, more images require more data processing time. In order to make our algorithm more practical, the method should be optimized with parallel processing capabilities in the future. Additionally, it is necessary to introduce other image matching strategies into our method to address the topography discontinuity issue in low-contrast areas.
## Acknowledgements
This work was funded by the National Basic Research Program of China (973 Program) (2012CB72000) and National Natural Science Foundation Project of China (41401533). We also thank HRSC team for their valuable work in photogrammetric processing and providing the HRSC products to the public.
## References
* Albertz et al. (2005) Albertz, J., Attwenger, M., Barrett, J., et al., 2005. HRSC on Mars Express-photogrammetric and cartographic research. _Photogrammetric Engineering and Remote Sensing_, 71(10), pp. 1153-1166.
* Di et al. (2008) Di, K.C., Xu, F., Wang, J., et al., 2008. Photogrammetric processing of rover imagery of the 2003 Mars Exploration Rover mission. _ISPRS Journal of Photogrammetry and Remote Sensing_, 63(2), pp.181-201.
* Geng et al. (2013) Geng, X., Xu, Q., Xing, S., et al., 2013. Differential rectification of linear pushbroom imagery based on the fast algorithm for best scan line searching. _Acta Geodaetica et Cartographica Sinica_, 42(6), pp. 861-868.
Figure 13: Visual comparison of generated DEM and MOLA DEM. (Height = intensity)

Figure 14: Height accuracy comparison between the generated DEM and the HRSC Level-4 DEM.

* Gwinner et al. (2009) Gwinner, K., Scholten, F., Spiegel, M., et al., 2009. Derivation and validation of high-resolution digital terrain models from Mars Express HRSC data. _Photogrammetric Engineering and Remote Sensing_, 75(9), pp. 1127-1142.
* Heipke et al. (2007) Heipke, C., Oberst, J., Albertz, J., et al., 2007. Evaluating planetary digital terrain models--The HRSC DTM test. _Planetary and Space Science_, 55(14), pp.2173-2191.
* Hirschmuller et al. (2006) Hirschmuller, H., Mayer, H., Neukum, G., 2006. Stereo processing of HRSC Mars Express Images by Semi-Global Matching. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Goa, India, Vol. XXXVI, Part B4.
* Hirschmuller (2008) Hirschmuller, H., 2008. Stereo Processing by Semi-global Matching and Mutual Information. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 30(2), pp.328-341.
* Kirk et al. (2008) Kirk, R.L., Kraus, E.H., Rosiek, R.M., et al., 2008. Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: meter-scale slopes of candidate Phoenix landing sites. _Journal of Geophysical Research Atmospheres_,113(3), pp.5578-5579.
* Rosiek et al. (2005) Rosiek, M.R., Kirk, R.L., Archinal B. A, et al., 2005. Utility of Viking Orbiter images and products for Mars mapping. _Photogrammetric Engineering and Remote Sensing_, 71(10), pp.1187-1195.
* Shan et al. (2005) Shan, J., Yoon, J., Lee, D.S., et al., 2005. Photogrammetric analysis of the Mars Global Surveyor mapping data. _Photogrammetric Engineering and Remote Sensing_, 71(1), pp.97-108.
* Sidiropoulos and Muller (2016) Sidiropoulos, P., Muller, J., 2016. Batch co-registration of Mars high-resolution images to HRSC MC11-E mosaic. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Prague, Czech, Vol. XLI, Part B4, pp. 491-495.
* Stallman (2008) Stallman, D., 2008. DGAP notes, Stuttgart, Germany. [http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html](http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html) (accessed 23 June, 2017).
* Wang et al. (2009) Wang, M., Hu, F., Li, J., Pan, J., 2009. A fast approach to best scanline search of airborne linear pushbroom images. _Photogrammetric Engineering and Remote Sensing_, 75(9), pp.1059-1067.
* Wu et al. (2016) Wu, B., Liu, W.C., Grumpe, A., et al., 2016. Shape and albedo from shading (SAS) for pixel-level dem generation from monocular images constrained by low-resolution dem. In: _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Prague, Czech, Vol. XLI, Part B4, pp. 521-527. | Mars mapping is essential to the scientific research of the red planet. The special terrain characteristics of Martian surface can be used to develop the targeted image matching method. In this paper, in order to generate high resolution Mars DEM, a pixel-level image matching method for Mars orbital pushbroom images is proposed. The main strategies of our method include: (1) image matching on approximate orthophotos; (2) estimating approximate value of conjugate points by using ground point coordinates of orthophotos; (3) hierarchical image matching; (4) generating DEM and approximate orthophotos at each pyramid level; (5) fast transformation from ground points to image points for pushbroom images. The derived DEM at each pyramid level is used as reference data for the generation of approximate orthophotos at the next pyramid level. With iterative processing, the generated DEM becomes more and more accurate and a very small search window is precise enough for the determination of conjugate points. The images acquired by High Resolution Stereo Camera (HRSC) on European Mars Express were used to verify our method's feasibility. Experiment results demonstrate that accurate DEM data can be derived with an acceptable time cost by pixel-level image matching. | Condense the content of the following passage. | 241 |
ieee/f28a3077_c41b_4b52_9127_b99cca38b2ec.md | A Hyperspectral Image Classification Method Based on Weight Wavelet Kernel Joint Sparse Representation Ensemble and \\(\\beta\\)-Whale Optimization Algorithm
Mingwei Wang, Zitong Jia, Jianwei Luo, Maolin Chen, Shuping Wang, and Zhiwei Ye
## I Introduction
In recent years, hyperspectral remote sensing sensors have been applied to collect images with enough spectral resolution that contains hundreds of bands and allows the discrimination of objects with similar attributes [1]. A hyperspectral image (HSI) has been considered as an applicable tool for Earth observation because of its ability to obtain independent and continuous bands, analyze information from visible to near-infrared wavelength ranges, and supply multiple features from the fixed wavelength. It provides abundant spectral information and has a huge potential for the interpretation of different ground objects [2, 3]. As a result, the analysis of HSI has become a subject of research interest in remote sensing, which has been applied in a series of fields such as quantitative analysis [4], environmental monitoring [5], and land-cover mapping [6]. In addition, image classification is a significant step in identifying object types on the Earth's surface, and HSI classification aims to distinguish each sample into a discrete group of specific category labels [7, 8].
Existing HSI classification techniques are separated into two scopes: unsupervised and supervised [9]. For unsupervised techniques, fuzzy clustering [10], rough set [11], and iterative self-organizing data analysis technique algorithm [12] have been utilized to classify HSI samples. In these techniques, the process of classification is only based on the characteristics of feature values, and the misclassification is obvious as spectral characteristics are similar for different objects. For supervised techniques, active learning [13], random forest [14], and support vector machine (SVM) [15] have been utilized to obtain the category label of each pixel. Although these classifiers make full use of spectrum difference, the category label of the current pixel is usually impacted by the feature values on the neighbor. Therefore, several ideas are presented to synthesize the spatial and spectral characteristics of HSIs, and they are based on the hypothesis that samples within a local space have approximate spectral characteristics and express the same objects [16, 17]. In addition, HSI classification based on a deep learning model has been proposed to sufficiently synthesize spatial and spectral information, thus obtaining the category label of each pixel, but it is supported by the sufficient number of training samples and the sufficient amount of iterations, which is time-consuming as the data dimension increases [18].
As a well-behaved supervised classification model, sparse representation (SR) is used to recover the original data and report class-discriminative information, and it has been widely used in the field of pattern recognition [19]. In addition, joint representation has been presented to improve the stability of SR and boost its capability [20]. For HSI classification, samples of the same category are theoretically located in a low-dimensional subspace, and joint SR (JSR) makes a joint decision on neighboring pixels, which is feasible and particularly suitable for HSIs [21]. In the classification process, a testing sample is expressed by a certain number of rules from the training dictionary, and the reconstructed matrix is used to determine the category label by searching for the minimum [22]. For instance, Peng _et al._[23] designed a local adaptive JSR (LAJSR) technique for HSI classification, in which the dictionary construction and SR phases were improved by choosing representative rules from an additional dictionary. Tu _et al._[24] proposed an HSI classification approach based on the balance of JSR and the correlation coefficient (JSR-CC), which synthetically considered both local spatial and spectral similarities. Furthermore, the reconstructed matrix is usually computed with a linear kernel, which makes it difficult to reflect the nonlinear mapping between input spectral features and output category labels. Hence, Zhang _et al._[25] proposed a novel HSI classification technique using JSR with a nonlinear kernel extension, which mapped the input into a high-dimensional space to separate different objects and performed better than the linear kernel. However, when the category label is determined by kernel computing with a higher-order polynomial, the misclassification of specific categories is enlarged if the order cannot be determined within an effective time.
For JSR, the category label is obtained from the probability of kernel computing, which is the same mechanism as other nonlinear classifiers such as k-nearest neighbor (KNN) and the support vector machine (SVM). Moreover, the wavelet function is a family of formulations based on wavelet analysis that adequately keeps regularity and orthogonality; it has been employed in HSI classification as the kernel of KNN and SVM as a substitute for the linear kernel [26], [27]. As a result, the wavelet function can in theory act as the kernel of JSR as well. Ensemble learning is a machine learning paradigm that synthesizes multiple subclassifiers to solve the same problem; better discrimination ability than a single classifier is obtained because the subclassifiers have different emphases, especially for indeterminate objects, and it has been applied to HSI classification [28], [29]. However, for ensemble learning the category label is usually obtained by a voting strategy, and the discrimination is confused if two categories receive similar votes. As for JSR, the category label is assigned by searching for the minimum reconstructed error of each sample, and the reconstructed matrix of the ensemble can be updated from those of the subclassifiers with weight setting. A higher weight means that the subclassifier contributes more to classification, and a suitable weight setting balances the reconstructed errors of the subclassifiers [30]. In general, obtaining the optimal weights of the subclassifiers is a combinatorial optimization problem, which can be solved by swarm intelligence algorithms with heuristic search strategies [31]. Among them, the whale optimization algorithm (WOA) is a newly proposed swarm intelligence algorithm that has been widely used in diverse applications, especially weight optimization [32], [33]. However, its convergence rate is not fast enough owing to the fixed population updating equation and the small probability of local search. Recently, factorial functions with a single parameter have been combined with swarm intelligence algorithms to enhance the exploration phase, but they are not adapted to various population updating conditions of algorithms with multiple parameters such as the WOA [34], [35]. Here, the \(\beta\) function is combined with the WOA: its two parameters correspond to the two evolution processes, and the weight is adaptively located in the range [0,1].
Therefore, an HSI classification technique based on the weight wavelet kernel JSR ensemble (W\({}^{2}\) JSRE) model and the \(\beta\)-WOA is proposed to conduct pixel-level classification of HSIs. Because the spectral feature is output with 16 bits, the discrimination between different categories is not significant, and the misclassification is obvious when the dataset is mapped with the linear kernel. The classification accuracy improves when the wavelet function is used as the kernel of KNN and SVM: the dataset is mapped into quadratic, exponential, and trigonometric functions of different types, which has already been exploited for HSI classification, but the wavelet function has not previously been used as the kernel of JSR. In addition, a series of subclassifiers based on JSR with wavelet kernels are integrated by ensemble learning; the wavelet kernel of each JSR concerns the homogeneity of each subclassifier, while the ensemble with multiple wavelet kernels emphasizes the heterogeneity. Furthermore, since swarm intelligence algorithms are widely used to solve nonpolynomial hard problems such as weight optimization, the \(\beta\)-WOA is designed to obtain the optimal weights of the subclassifiers, and the category label is output by minimizing the total reconstructed error of the ensemble. The main contributions of this article are concluded as follows.
1. To improve the scale of mapping, the wavelet function is used as the kernel of JSR, and the HSI dataset is mapped into quadratic, exponential, and trigonometric functions of different types.
2. To synthesize the homogeneity and heterogeneity of the JSR ensemble, the W\({}^{2}\) JSRE model is proposed by using different types of wavelet function as kernels, and the classification map is output at the pixel level.
3. To balance the reconstructed error of subclassifiers, weight setting is conducted for ensemble learning, and the category label is obtained by total reconstructed error minimization.
4. To enhance the exploration phase of the WOA, the \\(\\beta\\)-WOA is designed by fusing the \\(\\beta\\) function into two evolution processes of the WOA, and the optimal weight of subclassifiers is obtained.
The overall construction of this article is listed as follows. Section II describes the related work on JSR and the WOA. Section III illustrates the principle of the proposed W\({}^{2}\) JSRE model and the \(\beta\)-WOA, and the fundamental process of HSI classification. Section IV analyzes the experimental results and extends the discussion with data statistics and visual inspection. Finally, Section V concludes this article.
## II Related Work
### _Basic Theory of JSR_
JSR is devoted to minimizing the reconstructed error of several independent SRs, and the inner correlations between different SRs are synthetically considered. In an HSI, the spectral characteristics of a pixel are strongly correlated with those of its neighboring pixels, which means that they belong to the same object with large probability; the spatial correlation is exploited by supposing that neighboring pixels within a local space are jointly represented by some common rules from a training dictionary [36]. In particular, the size of the local space around center pixel \(y_{t}\) is denoted by \(l\times l\), and the pixels within this space are denoted by \(y_{i}\), where \(i=1,2,\ldots,l\times l\). All of the above pixels are stacked into a matrix \(Y=[y_{1},y_{2},\ldots,y_{t},\ldots,y_{l\times l}]\in R^{b\times l^{2}}\). The matrix is succinctly represented as follows:
\\[Y =[y_{1},y_{2},\\ldots y_{t},\\ldots,y_{l\\times l}]\\!=\\![D\\alpha_{1 },D\\alpha_{2},\\ldots D\\alpha_{t},\\ldots,D\\alpha_{l\\times l}]\\] \\[=D[\\alpha_{1},\\alpha_{2},\\ldots\\alpha_{t},\\ldots,\\alpha_{l \\times l}]=DA \\tag{1}\\]
where \(A=[\alpha_{1},\alpha_{2},\ldots,\alpha_{t},\ldots,\alpha_{l\times l}]\in R^{N\times l^{2}}\) is the sparse coefficient matrix with regard to \(Y\), \(D\in R^{b\times N}\) is the training dictionary, and \(N\) is the number of dictionary rules. The selected rules in \(D\) are indexed by the nonzero rows of \([\alpha_{1},\alpha_{2},\ldots,\alpha_{t},\ldots,\alpha_{l\times l}]\); by setting part of the rows of the coefficient matrix \(A\) to 0, the neighboring pixels \(Y\) are expressed by a subset of common rules. Afterward, the matrix is recovered by solving the following optimization problem:
\\[\\hat{A}=\\text{arg}\\min_{A}\\|Y-DA\\|_{F}\\ \\ \\ \\text{s.t.}\\|A\\|_{\\text{row},0}\\leq K \\tag{2}\\]
where \(\|A\|_{\text{row},0}\) is the joint sparse norm that counts the most representative nonzero rows in \(A\), \(\|\cdot\|_{F}\) is the Frobenius norm, and \(K\) is the sparsity level. Once \(\hat{A}\) is recovered, the category label of the center pixel \(y_{t}\) is judged by the reconstructed error, which is defined as follows:
\[\text{label}(y_{t})=\text{arg}\min_{i}r_{i}(y)=\text{arg}\min_{i=1,2,\ldots,c}\|Y-D_{i}\hat{A}_{i}\|_{2} \tag{3}\]
where \\(\\hat{A}_{i}\\) indicates the rows in \\(\\hat{A}\\) associated with the category index of \\(i\\).
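For concreteness, a minimal Python sketch of JSR classification is given below. It solves the row-sparse problem in (2) with simultaneous orthogonal matching pursuit (SOMP), a standard greedy solver that we assume here for illustration; the dictionary columns are assumed l2-normalized, and all names are illustrative.

```python
import numpy as np

def somp(Y, D, K):
    """Greedy SOMP: recover a row-sparse A with at most K active rows, Y ~ D A."""
    resid, support = Y.copy(), []
    for _ in range(K):
        corr = np.linalg.norm(D.T @ resid, axis=1)   # correlation of each rule
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        A_s, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        resid = Y - Ds @ A_s                          # orthogonal residual
    A = np.zeros((D.shape[1], Y.shape[1]))
    A[support, :] = A_s
    return A

def jsr_label(Y, D, atom_class, K, n_classes):
    """Equation (3): the class with the smallest reconstruction error wins."""
    A = somp(Y, D, K)
    errs = [np.linalg.norm(Y - D[:, atom_class == c] @ A[atom_class == c, :])
            for c in range(n_classes)]
    return int(np.argmin(errs))
```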
### _Mathematical Model of WOA_
In 2016, Mirjalili designed a swarm intelligence algorithm called WOA that is based on the predatory strategy of humpback whales. Humpback whales tend to catch crowd of krill or small fishes near the surface. The process is conducted by producing specific bubbles with a ring path, and the operator is separated into three parts: encircling prey, spiral bubble-net attacking, and searching for prey. The main procedure for the WOA is depicted as follows [37]:
_Encircling prey_: Humpback whales have the ability to search for the position of prey and surround them, and the mechanism of global search is represented by the process. It is assumed that the position of optimal solution is the objective prey or it is the proximate solution moving close to the optimum in theory, and others should endeavor to motivate their positions toward to it. The process is written as follows:
\\[\\vec{S}=|\\vec{C}\\cdot X^{*}(t)-X(t)| \\tag{4}\\]
\\[X(t+1)=X^{*}(t)-\\vec{A}\\cdot\\vec{S} \\tag{5}\\]
where \\(t\\) is the number of current iterations, \\(X^{*}(t)\\) is the position of prey, and \\(X(t)\\) and \\(X(t+1)\\), respectively, represent the position of humpback whales in the current and the next procedure. \\(\\vec{A}\\) and \\(\\vec{C}\\) are the variable vectors that are expressed as \\(\\vec{A}=2\\vec{a}\\cdot\\vec{r}-\\vec{a}\\) and \\(\\vec{C}=2\\cdot\\vec{r}\\), \\(\\vec{a}=2-2*t/T\\) is gradually decreased within the scope of [2,0], \\(T\\) is the maximum number of iteration, and \\(\\vec{r}\\) is a random number on the range of [0,1].
_Bubble-net attacking:_ Each humpback whale moves toward the prey within a shrinking circle and simultaneously follows a spiral-shaped path; the mechanism of local search is represented by this process. A probability of 0.5 is set to choose between the shrinking-circle and the spiral mechanism when the position of a humpback whale is renewed. The formulation of the process is expressed as follows:
\\[X(t+1)=\\begin{cases}X^{*}(t)-\\vec{A}\\cdot\\vec{S},&\\text{if}\\ \\ \\ p<0.5\\\\ \\vec{S}^{\\prime}\\cdot e^{bl}\\cdot\\text{cos}(2\\pi l)+X^{*}(t),&\\text{if}\\ \\ \\ p \\geq 0.5\\end{cases} \\tag{6}\\]
where \(\vec{S}^{\prime}=|X^{*}(t)-X(t)|\) is the distance of the current humpback whale to the prey, \(b=1\) is a constant that defines the shape of the logarithmic spiral, and \(l\) and \(p\) are two random numbers, respectively, within the scopes of [\(-\)1,1] and [0,1].
_Searching for prey:_ The position of current humpback whale is updated according to the random walk strategy rather than the best humpback whale, the strategy of random search is reflected by the process, and the details are expressed as follows:
\\[\\vec{S}=|\\vec{C}\\cdot X_{\\text{rand}}-X(t)| \\tag{7}\\]
\\[X(t+1)=X_{\\text{rand}}-\\vec{A}\\cdot\\vec{S} \\tag{8}\\]
where \\(X_{\\text{rand}}\\) indicates the position of a random humpback whale selected from the population.
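A compact Python sketch of the WOA loop in (4)-(8) follows; the population size, iteration budget, bounds, and the maximization convention are illustrative choices, not values prescribed by the original algorithm.

```python
import numpy as np

def woa(fitness, dim, n_whales=20, T=100, lb=0.0, ub=1.0, b=1.0):
    """Basic WOA following eqs. (4)-(8); maximizes `fitness` over [lb, ub]^dim."""
    X = np.random.uniform(lb, ub, (n_whales, dim))
    best = max(X, key=fitness).copy()
    for t in range(T):
        a = 2.0 - 2.0 * t / T                       # linearly decreasing parameter
        for i in range(n_whales):
            r, p = np.random.rand(), np.random.rand()
            l = np.random.uniform(-1.0, 1.0)
            A, C = 2 * a * r - a, 2 * np.random.rand()
            if p >= 0.5:                            # spiral bubble-net attack, eq. (6)
                S = np.abs(best - X[i])
                X[i] = S * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            elif abs(A) < 1:                        # encircling prey, eqs. (4)-(5)
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                   # searching for prey, eqs. (7)-(8)
                rand = X[np.random.randint(n_whales)]
                X[i] = rand - A * np.abs(C * rand - X[i])
            X[i] = np.clip(X[i], lb, ub)
        cand = max(X, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand.copy()
    return best
```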
## III Proposed Methodology
### _Classification Process With W\\({}^{2}\\) JSRE_
As shown in (3), the category label is determined by reconstructed error minimization, which is computed on a single scale with the linear kernel; this makes it difficult to express the differences of feature values on multiple scales and to reveal the nonlinear mapping relationship in detail. The basic idea of wavelet analysis is to combine wavelet bases to build an arbitrary function of the time series \(t\). Five types of wavelet function with analytical expressions and compact support, which can be decomposed to different scales, are adopted here and defined as follows [38, 39]:
\[f_{1}(t)=\text{exp}(-t^{2}/2) \tag{9}\]
\[f_{2}(t)=(1-t^{2})\cdot\text{exp}(-t^{2}/2) \tag{10}\]
\[f_{3}(t)=\text{cos}(1.75\cdot t)\cdot\text{exp}(-t^{2}/2) \tag{11}\]
\[f_{4}(t)=\frac{\text{sin}(0.5\pi\cdot t)}{0.5\pi\cdot t}\cdot\text{cos}(1.5\pi\cdot t) \tag{12}\]
\[f_{5}(t)=\frac{e^{i4\pi t}-e^{i2\pi t}}{i2\pi\cdot t} \tag{13}\]
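These five functions translate directly into code. A sketch follows; the conventional names attached in the comments (Gaussian, Mexican hat, Morlet, Shannon) are our identifications of (9)-(13), and `numpy.sinc(x)` equals sin(πx)/(πx), hence the 0.5 factor in `f4`.

```python
import numpy as np

def f1(t):  # eq. (9), Gaussian
    return np.exp(-t**2 / 2)

def f2(t):  # eq. (10), Mexican hat
    return (1 - t**2) * np.exp(-t**2 / 2)

def f3(t):  # eq. (11), Morlet
    return np.cos(1.75 * t) * np.exp(-t**2 / 2)

def f4(t):  # eq. (12), Shannon: sin(0.5*pi*t)/(0.5*pi*t) * cos(1.5*pi*t)
    return np.sinc(0.5 * t) * np.cos(1.5 * np.pi * t)

def f5(t):  # eq. (13), complex Shannon
    t = np.asarray(t, dtype=complex)
    return (np.exp(1j * 4 * np.pi * t) - np.exp(1j * 2 * np.pi * t)) / (1j * 2 * np.pi * t)
```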
The wavelet function satisfies the condition of shift invariance; it is based on the inner product of nonlinear mappings on different scales, and the difference between the original and recovered data can be represented in shift-invariant form [41]. Nowadays, the wavelet function acts as the kernel of the wavelet kernel SVM (WSVM) and the wavelet kernel KNN (WKNN), and the classification result improves when the dataset is mapped into different scales; more importantly, a dataset with tens of thousands of samples is difficult to express with a linear kernel mapping. Since the learning mechanism of JSR is the same as that of SVM and KNN, the wavelet function can also act as the kernel of JSR; the reconstructed errors are defined on the basis of (3) and expressed as follows:
\[r_{1}(y)=\text{exp}(-\|Y-D_{i}\hat{A}_{i}\|_{2}/2) \tag{14}\]
\[r_{2}(y)=(1-\|Y-D_{i}\hat{A}_{i}\|_{2})\cdot\text{exp}(-\|Y-D_{i}\hat{A}_{i}\|_{2}/2) \tag{15}\]
\[r_{3}(y)=\text{cos}(1.75\times\|Y-D_{i}\hat{A}_{i}\|_{1})\cdot\text{exp}(-\|Y-D_{i}\hat{A}_{i}\|_{2}/2) \tag{16}\]
\[r_{4}(y)=\frac{\text{sin}(0.5\pi\times\|Y-D_{i}\hat{A}_{i}\|_{1})}{0.5\pi\times\|Y-D_{i}\hat{A}_{i}\|_{1}}\cdot\text{cos}(1.5\pi\times\|Y-D_{i}\hat{A}_{i}\|_{1}) \tag{17}\]
\[r_{5}(y)=\frac{e^{i4\pi\|Y-D_{i}\hat{A}_{i}\|_{1}}-e^{i2\pi\|Y-D_{i}\hat{A}_{i}\|_{1}}}{i2\pi\|Y-D_{i}\hat{A}_{i}\|_{1}} \tag{18}\]
where \"\\(\\cdot\\)\" represents the inner product between the vectors of reconstructed error with two different scales, and the original dataset is mapped into quadratic, exponential, and trigonometric functions with different types. Experimental results demonstrate that a scale parameter is involved in the dilation and, thus, can be naturally used to accommodate the multiscale phenomenon [40].
The category label of a sample is determined by five subclassifiers (JSRs) with different wavelet kernels at the same time, which is able to improve the discrimination ability compared with single JSR and linear kernel. The significance of subclassifiers is decided by weight setting, and the reconstructed error of the proposed W\\({}^{2}\\) JSRE model is computed as follows:
\\[\\text{label}(y_{t})=\\text{arg}\\min\\sum_{j=1}^{5}\\omega_{j}\\times r_{j}(y) \\tag{19}\\]
where \(\omega_{j}\) is the weight of the \(j\)th subclassifier, which is directly multiplied with its reconstructed error; the weight represents the significance of the subclassifier. This can be seen as a fuzzy quantitative analysis for the ensemble of JSRs, and its performance is better than the traditional voting strategy with fixed category analysis.
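Given the per-subclassifier, per-class errors from (14)-(18), the weighted decision in (19) takes only a few lines; a sketch with illustrative names:

```python
import numpy as np

def ensemble_label(err, w):
    """Equation (19): err[j, i] is the reconstructed error of subclassifier j for
    class i at the center pixel; w[j] is the optimized subclassifier weight."""
    total = np.tensordot(w, err, axes=1)  # weighted sum over the five subclassifiers
    return int(np.argmin(total))
```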
### _Weight Optimization With \\(\\beta\\)-WoA_
The exploration phase of the WOA is represented by searching for prey with a random walk, which is computed from the position of a random humpback whale; however, the operation efficiency is decreased by the random number generation, and the evolution trend is not preserved as \(\vec{S}\) enlarges in (8). The \(\beta\) function is a factorial function with analytic continuation in the complex plane, and two parameters \(\gamma\) and \(\eta\) are defined to adjust its value. To improve a swarm intelligence algorithm, it is necessary to weaken the random process and to synthesize multiple parameters when updating the individuals. The value range of the \(\beta\) function is [0,1], which matches the weight \(\omega_{j}\) of the subclassifiers. In the proposed \(\beta\)-WOA, the exploration phase is based on the \(\beta\) function instead of searching for prey, and it acts on encircling prey and bubble-net attacking, respectively, which is defined as follows:
\[X(t+1)=\int_{0}^{1}u^{\gamma-1}(1-u)^{\eta-1}\mathrm{d}u \tag{20}\]
where
\\[\\gamma =(X^{*}(t)-\\vec{A}\\cdot\\vec{S})^{-1}\\] \\[\\eta =(\\vec{S}^{\\prime}\\cdot e^{bl}\\cdot\\text{cos}(2\\pi l)+X^{*}(t))^{ -1}. \\tag{21}\\]
No random humpback whale needs to be extracted: all individuals are computed, respectively, by the two processes of encircling prey and bubble-net attacking, which correspond to \(\gamma\) and \(\eta\) of the \(\beta\) function, and the population is updated by (20) afterward. As a result, the global and local processes are integrated for each individual and iteration, and the time complexity is decreased because no random individual has to be generated. Moreover, the coding length of the \(\beta\)-WOA is equal to 5, which is the same as the number of subclassifiers and directly represents their weights.
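The integral in (20) is exactly the complete Beta function \(B(\gamma,\eta)\), so the update can be written compactly. In the sketch below, the absolute-value/epsilon guard and the final clipping to [0,1] are our assumptions to keep the update well defined when (21) yields nonpositive values.

```python
import numpy as np
from scipy.special import beta as beta_fn   # complete Beta function B(gamma, eta)

def beta_update(x, best, A, C, l, b=1.0, eps=1e-12):
    """beta-WOA position update, eqs. (20)-(21), applied elementwise."""
    encircle = best - A * np.abs(C * best - x)                               # eq. (5) term
    spiral = np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best  # eq. (6) term
    gamma = 1.0 / (np.abs(encircle) + eps)   # eq. (21), guarded to stay positive
    eta = 1.0 / (np.abs(spiral) + eps)
    return np.clip(beta_fn(gamma, eta), 0.0, 1.0)  # weights live in [0, 1]
```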
### _Definition of the Objective Function_
The key issue of HSI classification based on the W\({}^{2}\) JSRE model is how to establish a reasonable mapping between the solution and the \(\beta\)-WOA. Each weight is expressed by a constant in the range [0,1] for a subclassifier and corresponds to one bit of the \(\beta\)-WOA code. Each individual of the \(\beta\)-WOA includes 5 bits: the first bit represents the weight of the first JSR (subclassifier), the second bit is the weight of the second JSR (subclassifier), and so on. The entire code indicates a candidate solution for the optimal weights of the W\({}^{2}\) JSRE model, and the fitness value is computed from the average entropy of the reconstructed matrix, which is defined as follows:
\[F=-\sum_{i=1}^{s}\min_{j}\hat{A}_{ij}\log_{2}(\hat{A}_{ij})/s \tag{22}\]
where \\(s\\) is the scale of testing samples, and \\(j\\) is the category index that takes on the minimum for the \\(i\\)th testing sample. A larger fitness value means that the reconstructed error is smaller, and the category label is more likely to obey the true distribution.
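A sketch of (22), where `A_hat` holds the class-wise reconstructed errors of the \(s\) testing samples and the epsilon guard against log2(0) is our assumption:

```python
import numpy as np

def fitness(A_hat, eps=1e-12):
    """Equation (22): average entropy term over s testing samples, taken at the
    class index with the minimum entry of the reconstructed matrix."""
    a = np.min(np.abs(A_hat), axis=1) + eps   # min over category index j per sample
    return float(-np.mean(a * np.log2(a)))
```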
### _Implementation of the Proposed Method_
The proposed HSI classification technique is easy to implement. The W\({}^{2}\) JSRE model is used for pixel-level classification of HSIs and the category label is obtained for each sample, while the \(\beta\)-WOA is used to search for the optimal weights of the subclassifiers (JSRs); the overall flow is summarized below.
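A condensed sketch of this flow; every helper name below is a placeholder for a step described in Section III, not a published API.

```python
def classify_hsi(image, train_mask, K=5, l=7):
    """End-to-end driver (illustrative): dictionary construction, wavelet-kernel
    errors, beta-WOA weight optimization, and per-pixel labeling by eq. (19)."""
    D, atom_class = build_dictionary(image, train_mask)       # training samples
    err = wavelet_kernel_errors(image, D, atom_class, K, l)   # eqs. (14)-(18)
    w = beta_woa(lambda w_: fitness_of(w_, err), dim=5)       # optimal weights
    return label_map(err, w)                                  # eq. (19) per pixel
```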
## IV Experimental Results and Discussion
### _Data Description_
To evaluate the performance of the proposed HSI classification technique based on the W\({}^{2}\) JSRE model and the \(\beta\)-WOA, three publicly available HSIs and two measured airborne HSIs are used in the experiments.
The first HSI was acquired by the ROSIS sensor during a flight campaign over Pavia University, Italy, and the geometric resolution was 1.3 m [42]. The image was composed of \\(610\\times 340\\) pixels with 103 spectral bands. Fig. 1 displays the ground truth of PaviaU scene. The number and names of corresponding categories that were used are shown in Table I.
The second HSI was collected by the AVIRIS sensor over the agricultural region of Indian Pines, Indiana, USA, in 1992 [42]. The spectral range was 0.4-2.5 \(\mu\)m with a spectral resolution of about 10 nm, and the image was composed of \(145\times 145\) pixels and 220 spectral bands with a spatial resolution of 20 m. Fig. 2 displays the ground truth of the Indian scene. The number and names of the corresponding categories that were used are shown in Table II.
The third HSI was collected by the 224-band AVIRIS sensor over Salinas Valley, California, and it was characterized by high spatial resolution. The image was composed of \\(512\\times 217\\) pixels and available only as sensor radiance data, and 20 water absorption bands were discarded [42]. Fig. 3 displays the ground
Fig. 1: Original image and reference map of PaviaU.
Fig. 2: Original image and reference map of Indian.
truth of Salinas scene. The number and names of corresponding categories that were used are shown in Table III.
The fourth HSI was collected by the CASI sensor over the suburban area of Xiongan, China, in the summer of 2017. The spectral range was 0.36-1.05 \\(\\mu\\)m with a spectral resolution of 7.2 nm, and the image was composed of \\(160\\times 190\\) pixels with 96 spectral bands. Fig. 4 shows the ground truth of XionganS scene. The number and names of corresponding categories that were used are shown in Table IV.
The fifth HSI was acquired by the SASI sensor over the urban area of Xiongan, China, in the spring of 2018. The spectral range was 1.0-2.5 \\(\\mu\\)m with a spectral resolution of 15 nm, and the image was composed of \\(270\\times 232\\) pixels with 100 spectral bands. Fig. 5 shows the ground truth of XionganU scene. The
Fig. 4: Original image and reference map of XionganS.
Fig. 5: Original image and reference map of XionganU.
Fig. 3: Original image and reference map of Salinas.
number and names of corresponding categories that were used are shown in Table V.
### _Parameters Setting of Different Algorithms_
As for the \(\beta\)-WOA, there is one parameter that needs to be set according to the corresponding reference [32]. Moreover, some commonly used swarm intelligence algorithms are also assessed for weight optimization: besides the \(\beta\)-WOA, particle swarm optimization (PSO) [43], differential evolution (DE) [44], cuckoo search (CS) [45], grey wolf optimizer (GWO) [46], ant lion optimizer (ALO) [47], and the standard WOA are utilized to make intuitive comparisons. All of the above algorithms are terminated when the number of evaluations reaches 300. Thirty independent operations are conducted because of the randomness of the initial population. Although the computational complexity is \(O(n\mathrm{log}n)\) for the algorithms above [48], no random humpback whale needs to be extracted for the \(\beta\)-WOA, and each bit is adaptively located in [0,1] by the range of the \(\beta\) function, which costs less CPU time than the standard WOA. The parameters of these algorithms are set as constants based on the empirical values of the corresponding references, and they are listed in Table VI.
### _Experimental Results on Swarm Intelligence Algorithms_
In this subsection, evaluation of training samples with the weight optimized by different swarm intelligence algorithms is investigated. For five HSIs in Section IV-A, 10% of pixels for each category are randomly extracted as the training samples to obtain weights of subclassifiers. Table VII shows the experimental results with different swarm intelligence algorithms, where Fiv and Std represent the average and standard deviation of fitness value, respectively, and Time is the CPU time after 30 independent operations.
As for the data in Table VII, the optimization ability of the WOA is obviously better than that of PSO, DE, CS, GWO, and ALO, and the fitness value is higher than 0.30 for the five datasets. In addition, the \(\beta\) function operates on the encircling-prey and bubble-net-attacking processes of the basic WOA and acts as the heuristic information of the exploration phase. More importantly, the fitness value is further improved compared with the basic WOA, which illustrates that the reconstructed error between the original and recovered datasets remains within a small interval. With regard to operating efficiency, the convergence speed of the WOA is better than that of the other algorithms because of fewer multiplications; since no random humpback whale needs to be extracted for the \(\beta\)-WOA, the CPU time is further decreased to some extent. Meanwhile, the weights optimized by the \(\beta\)-WOA are suitably assigned to the five subclassifiers; for the Indian dataset they are 0.2242, 0.1101, 0.6585, 0.2343, and 0.3887, so all subclassifiers make a certain contribution to training. By contrast, with the other algorithms the category label may depend on only one or two subclassifiers: the weight is greater than 0.9 for a single subclassifier, and the potential of ensemble learning is not fully exploited. In brief, the optimization ability of the \(\beta\)-WOA is superior, and its convergence speed is fast enough to obtain satisfactory weights, which is applicable to the practical work of sample training for HSI classification.
### _Experimental Results About HSI Classification on Pixel Level_
In this subsection, five HSIs, named PaviaU, Indian, Salinas, XionganS, and XionganU, are utilized to conduct pixel-level classification and verify the performance of the W\({}^{2}\) JSRE model and the \(\beta\)-WOA. Moreover, some corresponding and newly proposed HSI classification techniques, such as JSR [22], LAJSR [23], JSR-CC [24], wavelet kernel JSR (WJSR), WKNN [26], WSVM [27], and deep learning models such as fully convolutional networks (FCN) [49] and the discriminative stacked autoencoder (DSAE) [50], are also used to make an overall comparison. In addition, the classification results with different percentages of training samples (the Indian image is excluded because of the small number of samples in the Alfalfa and Oats categories) and with three subclassifiers in the ensemble are also exhibited for further verification; these experiments are not conducted for LAJSR, JSR-CC, and WJSR because of the correlation among the JSR-based techniques. The classification maps of the different techniques are listed in
Fig. 6: Classification results of PaviaU image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).
Fig. 7: Classification results of Indian image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\({}^{2}\) JSRE (three subclassifiers). (j) W\({}^{2}\) JSRE (five subclassifiers).
Fig. 8: Classification results of Salinas image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\({}^{2}\) JSRE (three subclassifiers). (j) W\({}^{2}\) JSRE (five subclassifiers).
Figs. 6-10, and Tables VIII-XII outline the overall classification accuracy (OA), Kappa coefficient, and CPU time of each HSI.
Based on the data in Tables VIII-XII, no samples are accurately classified into the Alfalfa or Oats categories of the Indian image by the traditional techniques. The OA of the JSR-based techniques is obviously better than that of WKNN and WSVM, and it is higher than 80% for all categories of the XionganU and XionganS images. Compared with the linear kernel, the wavelet kernel improves the scale of mapping, and the Kappa coefficient reaches 0.91 for the five images. As for the W\({}^{2}\) JSRE model, the OA is above 95% for the five images, and the Kappa coefficient
Fig. 9: Classification results of XionganS image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).
exceeds 0.95. In particular, the OA reaches 99% for the PaviaU and Salinas images, and it is higher than 95% for all categories of these two images. The experimental results illustrate that almost all samples are truly classified, and the discrimination ability for samples with similar feature values is enhanced. Although the OA of the deep learning models is close to that of the proposed W\({}^{2}\) JSRE model, the process takes more than 2000 s to complete classification of the XionganS image, which makes it difficult to satisfy real-time processing. As shown in Figs. 6-10, classification noise appears obviously with WKNN and WSVM, which makes it difficult to recognize different objects in the images; the Grass and Vegetation categories are confused because of their similar spectral characteristics. The JSR-based techniques obtain better classification performance, and the classification noise is eliminated to some extent, but misclassification still exists in the edge regions. The classification maps of WJSR clearly reflect the different objects and correspond to the reference maps. In addition, ensemble learning comprehensively judges the category label through a series of subclassifiers, and the objects of each category are presented continuously when five subclassifiers are used. However, the learning ability is insufficient when training samples are lacking and the subclassifiers are inadequate, and scattered noise then appears on the classification maps. As for the curves of Fig. 11, the OA is
Fig. 10: Classification results of XionganU image. (a) WKNN. (b) WSVM. (c) FCN. (d) DSAE. (e) JSR. (f) LAJSR. (g) JSR-CC. (h) WJSR. (i) W\\({}^{2}\\) JSRE (three subclassifiers). (j) W\\({}^{2}\\) JSRE (five subclassifiers).
improved as the percentage of training samples increases, and it remains stable at a high level once most of the noise is eliminated; however, it is difficult to obtain a further improvement once the percentage reaches 10%, and the gain is only 0.4% when more than 10% of pixels are used as training samples. In short, the proposed W\({}^{2}\) JSRE model is suitable for practical HSI classification work, and the classification maps coincide well with the reference maps.
## V Conclusion
In this article, an HSI classification technique based on the W\({}^{2}\) JSRE model and the \(\beta\)-WOA is proposed. The category label of each pixel is obtained by reconstructed error minimization of JSR, and the wavelet function is used as the kernel of JSR. Moreover, ensemble learning is used to conduct a detailed analysis of independent features, and the \(\beta\)-WOA is utilized to obtain the optimal weights of the subclassifiers. In general, it is observed that the swarm intelligence algorithm is well suited to finding suitable weights that represent the contribution of each subclassifier. In particular, the \(\beta\)-WOA has the highest fitness value among the algorithms, which is appropriate to synthesize the discrimination ability of the five subclassifiers. Furthermore, the optimal weights are employed to obtain the category labels of the HSIs, and the OA is compared with some newly proposed and corresponding HSI classification techniques. In all, the proposed W\({}^{2}\) JSRE model
Fig. 11: OA of different percentage of training samples. (a) PaviaU image. (b) Salinas image. (c) XionganS image. (d) XionganU image.
recognizes different objects in the image and is able to distinguish most similar objects, reaching 95% for pixel-level classification. In summary, JSR combined with the wavelet kernel has good properties for solving the classification problem in most cases, the misclassification is noticeably weakened by ensemble learning, and the weights optimized by the \(\beta\)-WOA are reasonable and improve the OA to some extent. In the future, it is preferable to combine spatial and spectral features with different types of subclassifiers for HSI classification.
mdpi/0233f074_9031_4736_881c_885d9a3e3f9b.md | A Spectral Mixture Analysis and Landscape Metrics Based Framework for Monitoring Spatiotemporal Forest Cover Changes: A Case Study in Mato Grosso, Brazil
Magdalena Halbgewachs
1 German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), 82234 Wessling, Germany
Martin Wegmann
2 Department of Remote Sensing, Institute of Geography, Julius-Maximilians-Universitaet Wuerzburg, 97074 Wuerzburg, Germany; [email protected]
Emmanuel da Ponte
3 BioCarbon Partners, Leopards Hill Business Park, Leopards Hill Road, Lusaka P.O. Box 50830, Zambia; [email protected]
* Correspondence: [email protected]
Citation: Halbgewachs, M.; Wegmann, M.; da Ponte, E. A Spectral Mixture Analysis and Landscape Metrics Based Framework for Monitoring Spatiotemporal Forest Cover Changes: A Case Study in Mato Grosso, Brazil. _Remote Sens._ **2022**, _14_, 1907. https://doi.org/10.3390/rs14081907
environmental transformations, such as soil erosion and contamination through the use of pesticides [9].
Selective logging and forest fires in particular are among the main drivers of forest decline in the Amazon [10]. On the one hand, forest degradation is often caused by logging events, in which not only the targeted trees are cut down but nearby trees are also injured or damaged during harvesting. This in turn reduces biomass and destroys the habitat of some species [11]. On the other hand, forest degradation is caused by fire, which results not only from logging but also from droughts [11]. Droughts lead to moisture stress and favor the ignition of normally moist fuels [12]. Fire in turn alters forest structure and increases the vulnerability of the remaining trees to wind damage [13]. Windstorms are another stressor that drives degradation, especially in combination with forest fragmentation and fire disturbance [14]. Silverio et al. (2019) [13] found that both large trees previously impacted by fire and edge-exposed trees show high mortality rates during strong wind events. Trees at the edges of forest fragments are furthermore exposed to higher stress due to more intense and frequent wind gusts [15]. Consequently, disturbances of any kind create a synergistic effect with a chain reaction: logging and burning destroy habitat and create more fragments and forest edges, which in turn foster disturbance by wind and fire, further accelerating tree mortality and degradation.
As Lovejoy and Nobre (2018) [16] point out, the Amazon rainforest is close to its ecological tipping point; it cannot tolerate ongoing deforestation for much longer. Changes in rainforest structures due to natural or anthropogenic factors not only have an enormous local impact on the composition of species diversity [17; 18] but also have global consequences, such as the release of immense amounts of carbon dioxide [19; 20]. Therefore, it is essential to develop large-scale monitoring frameworks for a better forest loss surveillance, which will form the basis for taking necessary action to combat deforestation.
Research has shown that methods for determining deforestation with remote sensing data are well developed, but there is a lack of valid methods for assessing forest degradation [14]. Such areas are more difficult to classify because different land cover types are combined--for example, vegetation, shade, soil, or dead wood [21]. The determination of degradation is highly dependent on operational definitions and calculations differ depending on the degradation processes considered, the scale of the analyses, and the thresholds set [14].
The remote-sensing-data-based System for Monitoring Degraded Forest Areas in the Amazon (DEGRAD), developed by the Brazilian National Space Agency (INPE), for example, only captures anthropogenic degradation patterns with a size bigger than 6.25 ha and does not consider smaller natural forest disturbances such as windfall or fire [10]. However, it is important to include these different factors, as the effects of disturbances on forests can vary greatly depending on the type, frequency, intensity, and extent of the perturbation [14].
Mato Grosso has been the focus of remote sensing analyses in various studies using data from different Landsat missions [22; 23; 24; 25; 26]. Some of these studies calculated deforestation rates for most of the Brazilian Amazon Biome [22], parts of Mato Grosso [24; 26], selected municipalities [23], or single satellite scenes [25]. In some studies, spectral mixture analysis (SMA) was used to generate fraction images as input for image classifications (e.g., [22; 23; 27; 28; 29]). This approach has proven to be a promising method for subpixel classification that includes degraded forest areas as a separate class. Souza and Siqueira (2013b) [22] used Landsat scenes with a resolution of 30 m between 2000 and 2010 and performed a decision tree classification (DTC) based on an SMA for the whole Brazilian Amazon. Their deforestation and degradation rates are based on a statistical approach: forest loss and degradation between the first and last mapped year are determined by calculating the percentage rate of forest loss and degradation normalized to the time period. On this basis, an annual rate of deforestation and degradation is obtained, taking the forest area of the year under consideration into account. This does not represent the actual forest loss or decline per year; rather, it represents only the rate that results from the total percentage of forest loss and degradation between the first and last year considered.
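To make this distinction explicit, a commonly used annualized-rate formulation is sketched below; this is an illustrative assumption, not necessarily the exact definition used in [22]. With \(A_{t_{1}}\) and \(A_{t_{2}}\) denoting the forest areas of the first and last mapped years \(t_{1}\) and \(t_{2}\), a mean annual loss rate \(r\) (in % per year) can be written as:

\[r=\left(1-\left(\frac{A_{t_{2}}}{A_{t_{1}}}\right)^{\frac{1}{t_{2}-t_{1}}}\right)\cdot 100\]

Such a compound rate smooths interannual variability and therefore cannot resolve the year-to-year dynamics targeted in this study.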
In the study presented in this paper, land cover classifications and change analyses for Mato Grosso are performed. The goal is not only to distinguish between forest and nonforest but also to refine existing classification approaches to improve the detection of degraded forest. We use SMA to approach the problem of mixed pixels in small-scale land cover patterns. It allows spectral details to be captured at the subpixel level, which favors the differentiation between the intended land cover classes. The challenge here is the selection of the endmembers, as they have to represent all ground materials within the study area [30]. The analyses carried out resemble those of Souza and Siqueira (2013b) [22] in terms of geographical and analytical scope, but their algorithms are refined here with an additional cloud filter and a longer analysis period. The spatiotemporal changes in land cover are assessed for the period between 1987 and 2020 based on Landsat 5 and 8 image mosaics. In order to draw conclusions about temporal shifts in forest cover and the intensity of deforestation and degradation, degraded and closed forest areas are first identified by an SMA, followed by a DTC. Based on the classification results, a second classification scheme is implemented, consisting of six newly defined classes representing tree cover loss or gain.
Both spatial and temporal changes in a landscape influence its ecological stability, its habitat capacity, and the potential for flora and fauna to establish themselves [31].
Analyses of forest fragmentation and the associated temporal changes can be used to understand the structures, dynamics, and functions of forest ecosystems and their habitat conditions [32]. Chaplin-Kramer et al. (2015) [33] found that biomass in tropical forests is reduced by an average of 25% in edge areas (about 500 m from the forest margin) compared to core areas. This shows that forest fragmentation favors forest degradation, which in turn promotes tree mortality and biodiversity loss [34]. To better protect habitats affected by fragmentation, a deeper understanding of the causes, mechanisms, effects, and overarching impacts of fragmentation on biodiversity must be gained. Habitat loss is one of the greatest concerns for global biodiversity [35]. Several common assumptions illustrate the effects of habitat fragmentation on biodiversity. Patch isolation is an important factor in determining the habitat amount in a landscape; the more isolated a patch is, the fewer habitats are present in the surrounding landscape. Correlations between patch size and species diversity can be observed: each species has different requirements regarding the patch size of the landscape in which it lives [36]. Debinski and Holt [37] compared several studies on the dependence of species diversity on patch size. They found that, in general, smaller patches provide less habitat than larger ones. In addition, it is understood that smaller isolated patches lose a higher number of species, and lose larger, less dispersive species faster [38]. However, it was also noticed that species diversity changes positively as soon as patches are connected by corridors [37].
As de Filho and Metzger [39] stated, different deforestation patterns bring advantages and disadvantages for habitats. Forest fragmentation in large-scale rectangular logging patterns is often high, leaving only a few isolated patches of forest and thus low connectivity between them. This greatly restricts the movement of some animal species but has little consequence for those that are able to migrate between isolated patches. On the other hand, these same remaining, comparatively large patches of forest can also be advantageous for species that require large interior habitats. Fishbone patterns, in turn, can be rated positively as long as forest corridors are maintained between small patches of deforested land, thus ensuring freedom of movement for species. However, not only the deforestation pattern is critical for maintaining biodiversity but also the rate and intensity of clearcutting [39]. Increasing deforestation will further intensify fragmentation in the future, leaving only small, isolated, and widely distributed patches of forest.
Therefore, we additionally address the characterization of spatial changes in forest cover based on five selected landscape metrics. Whereas other studies focused on fragmentation in the whole Amazon biome [8], the Brazilian state of Parana [40], or on selected municipalities in the microregion Alto Teles Pires in central Mato Grosso [41], this study investigates pattern changes in Colniza, a municipality in the northwestern part of Mato Grosso within the Amazon biome, whose deforestation patterns have changed over time. In summary, the study specifically addresses the following research questions:
* Is the differentiation between forest and degraded forest possible by conducting a land cover classification in Mato Grosso?
* Which inferences can be made about the intensity of deforestation and degradation by recording and characterizing changes in forest cover over time?
* How does the recording and characterization of spatial changes in forest cover help to draw conclusions about forest fragmentation?
## 2 Materials and Methods
### Study Area
The study is limited to Mato Grosso, the third largest state of Brazil (Figure 1). Mato Grosso is located in the western-central part of the country between 7.23\({}^{\circ}\)-17.87\({}^{\circ}\) South and 50.57\({}^{\circ}\)-61.52\({}^{\circ}\) West [42; 43] and is part of the Legal Brazilian Amazon, a federation of Brazilian states within the Amazon region [44]. It covers an area of approximately 906,000 km\({}^{2}\) [45], and its elevation ranges from 77 m to 1139 m [46]. A total of 109 conservation areas and 75 indigenous territories are recorded within the state, which together correspond to about 23% of the state's area [47]. Mato Grosso extends over the following three biomes with high biodiversity [48]:
* the Amazon: humid tropical rainforest in the North (53% of the state)
* the Cerrado: tropical savanna, which covers the center of the state from East to West (40% of the state)
* the Pantanal: wetland in the Southwest (7% of the state)
The state is marked by a hot, semihumid to humid climate with a dry season from July through September, followed by a rainy season until April [42; 49]. Annual precipitation varies along a south-north gradient, with annual precipitation values around 1000 mm/year in the South, while values in the North exceed 2000 mm/year [50]. The average annual temperature also follows a south-north gradient, ranging from 22 \\({}^{\\circ}\\)C in the South to 27 \\({}^{\\circ}\\)C in the North [51].
As the basis for the fragmentation analysis, the municipality of Colniza was chosen to exemplify different forest fragmentation patterns (Figure 2). Colniza is located in the northwestern part of Mato Grosso within the Amazon biome, on the border with the state of Amazonas, and covers an area of around 28,000 km\({}^{2}\) [57]. Five conservation units as well as three indigenous lands are located in this municipality [47]. From an ecological perspective, Colniza is one of the few municipalities in the state with more than 90% of its forested area still preserved in 2004 [58]. Visual inspections of deforestation patterns
Figure 1: Study area of Mato Grosso: (**a**) biomes and (**b**) protected areas, highways and urbanized areas (data from [47; 52; 53; 54; 55; 56]).
from satellite imagery show that fishbone fragmentation is predominant, with rectangular patterns also present in some cases.
### Data
For this study, atmospherically corrected surface reflectance data from Landsat 5 TM and Landsat 8 OLI/TIRS time series are used. The data are delivered together with a QA band (quality assessment band) indicating the presence of clouds [59]. Landsat 5 TM was launched on 1 March 1984 and was decommissioned on 5 June 2013. Landsat 8 OLI/TIRS was launched on 11 February 2013 and is still active [59]. Together, they cover a time span of 37 years with a revisit time of 16 days. In 2000 and 2001, a sensor error of Landsat 5 TM was identified over parts of the study area, which affects most of the period in which the annual mosaics are created. For this reason, these two years were not taken into account in any further evaluations. Likewise, the change detection for the year 2002 (considering year\({}_{2001}\) and year\({}_{2002}\)) is also affected and therefore not presented. In total, 11,144 scenes from Landsat 5 TM and 5643 scenes from Landsat 8 OLI/TIRS were processed.
The Landsat legacy is particularly suitable for tropical forest monitoring: Due to the medium spatial resolution of 30 m, even smaller-scale natural events and human interventions in forest areas can be detected. In addition, Landsat is the only freely available satellite image archive covering a period of more than three decades of earth observation at this spatial resolution, which allows a long retrospective analysis of land surface changes. In the future, the continuity of Landsat recordings will enable consistent observations [60].
For the accuracy assessment, the classifications created in this work are evaluated against the annual classifications of the Program for Monitoring Deforestation in the Brazilian Amazon (PRODES) with a minimum mapping unit of 6.25 ha, accessible via the National Institute for Space Research (INPE: [http://www.obt.inpe.br/OBT/assuntos/programs/amazonia/prodes](http://www.obt.inpe.br/OBT/assuntos/programs/amazonia/prodes), accessed on 6 May 2021). The PRODES year of deforestation refers to the period from 1 August of one year to 31 July of the following year, e.g., annual deforestation for the year 2020 is obtained for the period of 1 August 2019 through 31 July 2020 [61]. However, only classifications from 2005 to 2019 are freely available and therefore used for the accuracy assessment. The datasets contain the four classes _Forest_, _Non-Forest_, _Water_, and _Cloud_. Other forest change dynamics such as forest degradation are not covered.
### Preprocessing
The approach of this work consists of several steps, shown schematically in Figure 3. In a first preprocessing step, annual multitemporal composite mosaics are built for the period between 1986 and 2020 based on the available imagery. Second, cloud masking is performed. This is crucial because the study area is prone to high cloud coverage due to the strong influence of the South Atlantic Convergence Zone (SACZ) [62]. The delivered QA band provides bitmasks for clouds and cloud shadows. All pixels containing the respective bit value are removed.
Furthermore, only images between the months of January and September are considered, since the dry season lasts until September in nondrought years, i.e., there are fewer
Figure 2: Municipality of Colniza (data from [47; 52]. Service Layer Credits: Earthstar Geographics).
clouds [49]. In addition, we target the months with high rates of slash-and-burn activities, which are between July and September in the Brazilian Amazon [63]. The annual composites are then created on the principle that the most recent cloud-free pixel of all filtered scenes is used.
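A minimal Google Earth Engine (Python API) sketch of this compositing logic is given below. The collection ID, the QA bit positions (bit 3 = cloud, bit 4 = cloud shadow, as defined for the Landsat Collection 2 QA_PIXEL band), and the rectangular region are illustrative assumptions, not the exact production pipeline of this study.

```python
import ee

ee.Initialize()

def mask_clouds(image):
    # Landsat Collection 2 QA_PIXEL: bit 3 = cloud, bit 4 = cloud shadow (assumed here)
    qa = image.select('QA_PIXEL')
    clear = qa.bitwiseAnd(1 << 3).eq(0).And(qa.bitwiseAnd(1 << 4).eq(0))
    return image.updateMask(clear)

def annual_composite(year, collection_id, region):
    """Most-recent cloud-free pixel composite for January-September of one year."""
    col = (ee.ImageCollection(collection_id)
           .filterBounds(region)
           .filterDate(f'{year}-01-01', f'{year}-09-30')
           .map(mask_clouds)
           .sort('system:time_start'))  # ascending, so mosaic() keeps the latest valid pixel
    return col.mosaic().set('year', year)

# Example usage; the rectangle is a rough placeholder for Mato Grosso
region = ee.Geometry.Rectangle([-61.6, -18.0, -50.5, -7.2])
composite_1995 = annual_composite(1995, 'LANDSAT/LT05/C02/T1_L2', region)
```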
### Spectral Mixture Analysis
#### 2.4.1 Spectral Mixture Derivation
To detect forest degradation by selective logging or forest fires, a spectral mixture analysis (SMA) is applied in Google Earth Engine (GEE), as first described by Adams (1993) [64]. Thereby, the reflectance values per pixel are decomposed into the constituent spectra of pure materials, the so-called spectral endmembers, and their corresponding fractions, which combine linearly. The SMA yields layers containing the pixelwise fraction of each specified endmember [21; 22; 65]. The resulting unmixing of pixels thus highlights details at the subpixel level and can thereby provide better results than conventional classification methods [65; 66]. Spectral endmembers are often related to real-world objects in the study area, such as soil, water, metal, or any other natural or man-made material [65]. The selection of these endmembers is crucial for a representative SMA, since they have to reflect all ground materials present in the study area [30].
The endmembers defined in this study are Green Vegetation (GV), Non-photosynthetic Vegetation (NPV) (senescent vegetation, such as dead trees or bark), Soil, and Shade, whereby the Landsat bands Blue, Green, Red, Near Infrared (NIR), Shortwave Infrared (SWIR) 1, and SWIR 2 were used for creating the spectral endmember curves. A fifth endmember, Cloud, was added afterward to capture clouds that were not detected by the initial masking [21]. The endmember values used in this study were derived by the Institute of People and the Environment of the Amazon (Imazon) [67] within the MapBiomas project ([https://github.com/mapbiomas-brazil/amazon/blob/master/modules/SMA_NDFI.py](https://github.com/mapbiomas-brazil/amazon/blob/master/modules/SMA_NDFI.py), accessed on 6 May 2021). This was done using 35,000 samples for the entire Amazon region from the reference database of the Laboratory of Image and Geoprocessing
Figure 3: Schematic workflow.
(LAPIG) of the Federal University of Goias (UFG), trained and validated based on a random forest approach [68; 69].
Figure 4 illustrates the reflectance curves per endmember. GV shows the typical spectral profile of photosynthetically active vegetation, where reflectance rises sharply between the Red and NIR bands, the so-called red edge. The Soil endmember curve increases with increasing wavelength because of iron oxide absorptions at shorter wavelengths. The NPV curve combines characteristics of the GV curve (red edge) and of the soil reflectance curve; its reflectance in the Red band is higher because of the degradation of chlorophyll pigments [70].
The SMA is based on Equation (1), where the reflectance of a given pixel \\(R_{b}\\) in band \\(b\\) is modeled as the sum of the reflectance of each endmember within a pixel multiplied by its fractional cover [64; 71; 72; 73]:
\\[R_{b}=\\sum_{i=1}^{n}F_{i}\\cdot R_{i,b}+\\epsilon_{b} \\tag{1}\\]
where \(F_{i}\) is the fraction of endmember \(i\), \(R_{i,b}\) is the reflectance of endmember \(i\) in band \(b\), and \(n\) is the number of pure spectra (endmembers). Unmodeled portions of the spectrum are expressed as the residual error \(\epsilon_{b}\) in band \(b\).
All endmember fractions should sum to 1 and have positive values [74; 75]. However, since the Cloud endmember was additionally added in this study, the fractions sum to 1 plus the Cloud endmember fraction.
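Below is a sketch of this unmixing step using the Earth Engine unmix operator. The endmember spectra are placeholder values only (the actual values come from the Imazon/MapBiomas module cited above), the Landsat 5 surface reflectance band names are assumed, and the relaxed sum-to-one constraint reflects the additionally added Cloud endmember.

```python
# Landsat 5 Collection 2 surface reflectance band names (assumed)
BANDS = ['SR_B1', 'SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B7']

# Placeholder endmember reflectances for Blue, Green, Red, NIR, SWIR1, SWIR2;
# the real GV/NPV/Soil/Shade/Cloud values are taken from the Imazon/MapBiomas module
ENDMEMBERS = [
    [0.05, 0.09, 0.04, 0.61, 0.30, 0.10],  # GV - placeholder
    [0.14, 0.17, 0.22, 0.30, 0.55, 0.30],  # NPV - placeholder
    [0.20, 0.30, 0.34, 0.58, 0.60, 0.58],  # Soil - placeholder
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00],  # Shade (zero reflectance)
    [0.90, 0.96, 0.80, 0.78, 0.72, 0.65],  # Cloud - placeholder
]

def unmix_fractions(image):
    """Per-pixel endmember fractions in percent (Equation (1))."""
    fractions = (image.select(BANDS)
                 .unmix(ENDMEMBERS, False, True)  # sumToOne=False (Cloud added), nonNegative=True
                 .rename(['gv', 'npv', 'soil', 'shade', 'cloud']))
    return fractions.multiply(100)
```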
The SMA is applied to the Landsat mosaics throughout the observation period.
Based on the fraction images resulting from the SMA, the Normalized Difference Fraction Index (NDFI), developed by Souza, Roberts, et al. (2005) [23], is calculated. The NDFI enhances the degradation signal resulting from selective logging or burning and is computed by:
\\[NDFI=\\frac{GV_{Shade}-(NPV+Soil)}{GV_{Shade}+NPV+Soil} \\tag{2}\\]
where the GV\\({}_{Shade}\\) fraction is given by
\\[GV_{Shade}=\\frac{GV}{100-Shade} \\tag{3}\\]
Figure 4: Reflectance curves of used spectral endmembers (data from [67]).
The NDFI values range from \(-1\) to \(1\); for all further analyses, they are rescaled to 0-200. Intact forest shows high values (close to 1 before rescaling) due to the combination of high GV\({}_{Shade}\) (high GV and canopy shade) and low NPV and Soil values. In degraded forest regions, the NDFI decreases as the NPV and Soil fractions increase. The NDFI therefore has the potential to identify degraded forest areas in which the degradation is caused by burning and selective logging [21; 22].
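A sketch of the NDFI computation from the fraction bands of the previous snippet (assumed to be in percent) is given below; it mirrors Equations (2) and (3) and applies the 0-200 rescaling.

```python
def add_ndfi(fractions):
    """Compute the NDFI (Equations (2)-(3)) and rescale it from [-1, 1] to [0, 200]."""
    gv_shade = fractions.expression(
        'gv / (100 - shade)',
        {'gv': fractions.select('gv'), 'shade': fractions.select('shade')}
    ).multiply(100)  # back to percent so it is comparable with NPV and Soil
    ndfi = fractions.expression(
        '(gvs - (npv + soil)) / (gvs + npv + soil)',
        {'gvs': gv_shade,
         'npv': fractions.select('npv'),
         'soil': fractions.select('soil')}
    )
    return fractions.addBands(ndfi.multiply(100).add(100).rename('ndfi'))
```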
#### 2.4.2 Decision Tree Classification
Based on the fraction images and the resulting NDFI, a decision tree classification (DTC) is carried out. The DTC threshold values were empirically derived by Souza and Siqueira (2013) [28] and Souza et al. (2013) [22] and adopted in this study, since the thresholds had been developed for a research project in the Brazilian Amazon. Their hierarchical classification rules are based on image classification techniques in their custom-developed remote sensing data analysis software, training samples, and knowledge-based empirical algorithms [28]. For the classification in this study, five target classes are established: _Forest_, _Degradation_, _Non-Forest_, _Water_, and _Cloud_. Each of these classes is based on previously defined assumptions, shown in Figure 5: First, all cloudy pixels are masked, taking the endmember Cloud \(\geq\) 10% into account. All pixels that do not fulfill this condition are passed to the subsequent rules. For the DTC target classes _Forest_ and _Degradation_, the NDFI is utilized: pixels with NDFI \(\geq\) 185 are assigned to the class _Forest_, and pixels in the value range 175 \(\leq\) NDFI \(<\) 185 are assigned to the class _Degradation_. According to Souza and Siqueira (2013) [28], pixels belonging to the defined _Degradation_ class represent areas of canopy damage larger than 25% caused by selective logging or forest fires. This is followed by the _Water_ class, where the endmembers GV and Soil show low fractions (GV \(\leq\) 10% and Soil \(\leq\) 5%) but the Shade fraction is high (\(\geq\) 75%). If a pixel matches none of the conditions, it is assigned to the class _Non-Forest_.
Figure 5: Decision tree for mapping Non-Forest, Forest, Degradation, Water and Cloud (adopted from [22; 28]).
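The decision tree of Figure 5 can be expressed compactly as a chain of conditional overwrites; the sketch below uses assumed integer class codes and the fraction/NDFI bands from the previous snippets. Applying the Cloud rule last gives it the same priority it has at the top of the tree.

```python
# Assumed class codes: 0 Non-Forest, 1 Forest, 2 Degradation, 3 Water, 4 Cloud
def classify_dtc(img):
    gv, soil = img.select('gv'), img.select('soil')
    shade, cloud = img.select('shade'), img.select('cloud')
    ndfi = img.select('ndfi')

    classified = ee.Image(0)  # default: Non-Forest
    classified = classified.where(gv.lte(10).And(soil.lte(5)).And(shade.gte(75)), 3)  # Water
    classified = classified.where(ndfi.gte(175).And(ndfi.lt(185)), 2)                 # Degradation
    classified = classified.where(ndfi.gte(185), 1)                                   # Forest
    classified = classified.where(cloud.gte(10), 4)                                   # Cloud overrides all
    return classified.rename('class')
```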
A cloud filter is applied after the DTC. Each pixel, classified as _Cloud_ in the current Year\\({}_{s}\\), is compared with the same pixel of Year\\({}_{s-1}\\) and the following years (Year\\({}_{s+x}\\)).
According to Liebsch et al. (2008) [76] and Rozendaal et al. (2019) [77], a forest needs several hundred years to fully recover after clearing. Therefore, it can be ruled out that forest degrades and returns to its original canopy cover within a period of a few years. Similarly, degraded forest neither densifies and noticeably degrades again within a short period of time, nor disappears entirely and then grows back rapidly. It can therefore be assumed that cloud pixels over the whole analysis period can be replaced by the class that was assigned both in the year before the first cloud pixel (Year\({}_{s-1}\)) and in the year after the last successive cloud pixel (Year\({}_{s+x+1}\)). This means, for example, that if a pixel was classified as _Forest_ in Year\({}_{2006}\) and Year\({}_{2010}\), but as _Cloud_ in Year\({}_{2007}\), Year\({}_{2008}\), and Year\({}_{2009}\), these three cloud pixels can be replaced by the class _Forest_. Figure 6 shows a schematic representation of this principle.
Since water bodies in the tropics are subject to high rates of hydrological dynamics, this rule is partially incorrect for the _Water_ class. At the same time, water bodies certainly do not change from water to degraded or dense forest within a few years. Yet, the _Water_ class plays a minor role in this study, as _Forest_, _Degradation_, and _Non-Forest_ classes are of primary importance.
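A minimal NumPy sketch of this temporal filter is shown below, assuming the annual classifications are stacked into a (years, rows, cols) array with the class codes used in the DTC sketch above.

```python
import numpy as np

CLOUD = 4  # class code assumed from the DTC sketch

def temporal_cloud_filter(stack):
    """Replace cloud pixels whose bounding non-cloud classes agree (Figure 6)."""
    out = stack.copy()
    n_years = out.shape[0]
    for t in range(1, n_years - 1):
        is_cloud = out[t] == CLOUD
        if not is_cloud.any():
            continue
        before = out[t - 1]  # may itself have been filled in a previous iteration
        # first non-cloud observation after year t, per pixel
        after = np.full_like(before, CLOUD)
        for t2 in range(t + 1, n_years):
            fill = (after == CLOUD) & (out[t2] != CLOUD)
            after[fill] = out[t2][fill]
        replace = is_cloud & (before == after) & (before != CLOUD)
        out[t][replace] = before[replace]
    return out
```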
#### 2.4.3 Accuracy Assessment and Plausibility Analysis
For the accuracy assessment, the classifications created in this work are evaluated against the annual classifications of the PRODES dataset. Since the _Cloud_ class, which is included in both classifications, would negatively affect the accuracy assessment and does not represent actual land cover, all cloud pixels of either classification (PRODES or the classification of this study) were merged, masked out, and excluded from the accuracy assessment. Consequently, both classifications have the same cloud pixels, which are subsequently replaced by no-data values. For the accuracy evaluation, 500,000 stratified random points were selected within each PRODES dataset, i.e., the number of points is proportionally distributed across the classes, and these points were then compared with the corresponding pixel values of the classification performed in this study.
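A sketch of this evaluation is given below, assuming the PRODES reference and the prediction are co-registered NumPy arrays with negative no-data values; the sample size and seed are illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def stratified_sample(reference, n_points=500_000, seed=42):
    """Class-proportional random pixel indices from a reference map (no-data < 0 excluded)."""
    rng = np.random.default_rng(seed)
    flat = reference.ravel()
    valid = np.flatnonzero(flat >= 0)
    classes, counts = np.unique(flat[valid], return_counts=True)
    idx = []
    for c, n in zip(classes, (counts / counts.sum() * n_points).astype(int)):
        pool = valid[flat[valid] == c]
        idx.append(rng.choice(pool, size=min(n, pool.size), replace=False))
    return np.concatenate(idx)

def assess(prediction, reference):
    idx = stratified_sample(reference)
    y_true, y_pred = reference.ravel()[idx], prediction.ravel()[idx]
    return (accuracy_score(y_true, y_pred),
            cohen_kappa_score(y_true, y_pred),
            confusion_matrix(y_true, y_pred))
```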
Since the classification of this study also includes the class _Degradation_, which is not listed in the PRODES dataset, a plausibility analysis of the class _Degradation_ is performed as an additional step. For this purpose, a forest area in eastern Mato Grosso that changes
Figure 6: Schematic illustration of the cloud filter after the classification: Each pixel, classified as Cloud in Year\\({}_{s}\\) is compared with the same pixel of Year\\({}_{s-1}\\) and the following years (Year\\({}_{s+x}\\)). Cloud pixels of Year\\({}_{s}\\) are replaced by the class which is classified both in the year before the first cloud pixel (Year\\({}_{s-1}\\)) and the year after the last successive cloud pixel (Year\\({}_{s+x+1}\\)). In this example, the cloud pixel of Year\\({}_{s}\\) is replaced by a forest pixel.
over time was selected. 125 points were then randomly placed within this area for the years 1990, 2002, 2010, and 2020. This results in a total number of 500 pixels over the four selected years. From these pixels, the pixel values of the corresponding classifications were extracted, as were the respective pixel values of the NPV, Soil, and GV\({}_{Shade}\) endmember bands. These three selected endmembers are critical for assigning the Landsat pixels to the _Forest_, _Degradation_, or _Non-Forest_ classes in the DTC, as they are used in the NDFI (see Figure 5 and Equation (2)). Subsequently, only pixels corresponding to either the _Non-Forest_, _Degradation_, or _Forest_ class are retained, resulting in 494 validation pixels.
### Estimation of Forest Cover Change and Degradation
Annual rates of tree cover change are determined by implementing an additional new classification scheme based on the DTC results, consisting of six newly defined change classes in which tree cover is lost or gained (see Figure 3). The classes _Non-Forest_, _Degradation_, and _Forest_ from the DTC target classes are used for the reclassification. For the following analyses, it is disregarded whether the deforestation is of anthropogenic or natural origin. The aim is merely to analyze how the forest areas in Mato Grosso changed between 1987 and 2020.
Tree cover loss is represented in three change classes: Pixels that were classified as _Forest_ in \\(\\text{Year}_{s-1}\\) and as _Degradation_ in \\(\\text{Year}_{s}\\) in the DTC are assigned to the new change class _Degradation_ in \\(\\text{Year}_{s}\\). Pixels that were classified as _Degradation_ in \\(\\text{Year}_{s-1}\\) and as _Non-Forest_ in \\(\\text{Year}_{s}\\) are reclassified to the new _Degradation_ - _Non-Forest_ class in \\(\\text{Year}_{s}\\). The third class, _Deforestation_ in \\(\\text{Year}_{s}\\), contains all pixels that were detected as _Forest_ in \\(\\text{Year}_{s-1}\\) and _Non-Forest_ in \\(\\text{Year}_{s}\\).
Positive tree cover changes are reclassified as new _Non-Forest_ - _Degradation_ class in \\(\\text{Year}_{s}\\), if _Non-Forest_ pixels in \\(\\text{Year}_{s-1}\\) change to _Degradation_ in \\(\\text{Year}_{s}\\). Pixels that were classified as _Degradation_ in \\(\\text{Year}_{s-1}\\) and as _Forest_ in \\(\\text{Year}_{s}\\) are reclassified as _Reforestation_ in \\(\\text{Year}_{s}\\). _Non-Forest_ pixels in \\(\\text{Year}_{s-1}\\) that changed to _Forest_ in \\(\\text{Year}_{s}\\) are reclassified as _Afforestation_ in \\(\\text{Year}_{s}\\).
Since a comparison is always made between the observed Year\({}_{s}\) and the antecedent Year\({}_{s-1}\), tree cover changes can only be detected from 1987 onward, as the first classification, from 1986, has no preceding classification within the analysis period.
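The six change classes can be derived from pairs of consecutive DTC maps with a simple lookup, as sketched below; the integer codes are assumptions for illustration.

```python
import numpy as np

# Assumed DTC codes: 0 Non-Forest, 1 Forest, 2 Degradation
# Assumed change codes: 0 no change, 1 Degradation, 2 Degradation - Non-Forest,
# 3 Deforestation, 4 Non-Forest - Degradation, 5 Reforestation, 6 Afforestation
CHANGE_RULES = {
    (1, 2): 1,  # Forest      -> Degradation
    (2, 0): 2,  # Degradation -> Non-Forest
    (1, 0): 3,  # Forest      -> Non-Forest (Deforestation)
    (0, 2): 4,  # Non-Forest  -> Degradation
    (2, 1): 5,  # Degradation -> Forest (Reforestation)
    (0, 1): 6,  # Non-Forest  -> Forest (Afforestation)
}

def change_map(prev_year, curr_year):
    """Reclassify a pair of annual DTC maps into the six tree cover change classes."""
    change = np.zeros_like(curr_year)
    for (p, c), code in CHANGE_RULES.items():
        change[(prev_year == p) & (curr_year == c)] = code
    return change
```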
### Fragmentation Analysis
Fragmentation analyses are designed to study given landscapes and to quantify their changes over time [78]. In this context, landscape metrics are used to investigate and understand the structure, complexity, and changes of certain landscapes in more detail [79]. To quantify the fragmentation level and the resulting composition and configuration of the forested and nonforested landscape in the selected municipality of Colniza, five landscape metrics, described in more detail in Table 1, were computed using the Python package _PyLandStats_ [80]. We aim to analyze how the forest area in this municipality changed between 1986 and 2020, without focusing on the cause of forest loss, i.e., anthropogenic or natural. To compute the selected landscape metrics, a forest mask is generated from each previously created land cover classification, as described in Section 2.4.1, which is then used as the input landscape. In addition, a nonforest mask is created for each classified year in order to draw conclusions about landscape changes based on the analyzed forest fragmentation (Figure 3).
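A minimal PyLandStats sketch of this computation is given below; the metric identifiers follow the PyLandStats naming conventions, and the class encoding (1 = forest, 2 = nonforest, 0 = no data) and 30 m resolution are assumptions for illustration.

```python
import numpy as np
import pylandstats as pls

METRICS = [
    'proportion_of_landscape',  # PLAND
    'number_of_patches',        # NP
    'largest_patch_index',      # LPI
    'patch_density',            # PD
    'area_mn',                  # AREA_MN
]

def fragmentation_metrics(dtc, res=(30, 30)):
    """Class-level metrics for a forest (1) vs. nonforest (2) landscape; 0 is treated as no data."""
    landscape_arr = np.where(dtc == 1, 1, 2).astype(np.int32)  # forest vs. everything else
    ls = pls.Landscape(landscape_arr, res=res, nodata=0)
    return ls.compute_class_metrics_df(metrics=METRICS)

# Example usage with one annual DTC map (codes as in the DTC sketch above):
# df = fragmentation_metrics(dtc)   # one row per class (forest, nonforest)
```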
## 3 Results
### Overall Classification Performance
Results show that the overall classification accuracy against the PRODES dataset for the years 2005 to 2019 is 85.6%, and the average Kappa value is 71.0%. The lowest Kappa value is 67.2% in 2012; the highest is 77.4% in 2007 (Table 2). According to Landis and Koch (1977) [82], Kappa values between 61.0% and 80.0% reflect a substantial strength of agreement. For the _Forest_ class, the average producer's accuracy is 90.3% and the user's accuracy 79.9%. The class _Non-Forest_ has an average producer's accuracy of 83.6% and a user's accuracy of 96.3%.
The endmember fractions GV\\({}_{Shade}\\), NPV, and Soil of the selected 494 points out of the _Forest_, _Degradation_, and _Non-Forest_ classes for the plausibility analysis are shown in Figure 7. In Figure 7a, it can be seen that pixels assigned to the _Non-Forest_ class show much higher intraclass variability than the two forest classes. In addition, the zoomed-in section in Figure 7b allows improved visual separation of degraded and forested pixels.
The following differences can be identified from the class statistics in Table 3, calculated based on the points in Figure 7. The standard deviation of all displayed endmember fractions is highest for _Non-Forest_ pixels. Furthermore, _Forest_ pixels generally have lower scatter in NPV and Soil fractions than degraded pixels. _Non-Forest_ pixels show the highest mean NPV fractions (mean = 11.2%), followed by degraded forest pixels (mean = 6.4%), whereas _Forest_ pixels have the lowest mean value (mean = 3.6%). The opposite is true for the Soil fractions: _Forest_ pixels show the lowest mean value (mean = 0.2%), _Non-Forest_ pixels the highest (mean = 18.6%). Degraded pixel values are in between (mean = 1.6%) and much closer to _Forest_ values. In addition, _Forest_ pixels have the lowest standard deviation in the Soil fraction (sd = 0.5). A clear ascending mean value distribution is also visible
\begin{table}
\begin{tabular}{l l c l} \hline \hline
**Landscape Metric** & **Equation** & **Unit** & **Short Description** \\ \hline
Percentage of Landscape \(PLAND\) & \(PLAND=\frac{\sum_{j=1}^{n}a_{ij}}{A}\cdot 100\), with \(a_{ij}=\) area of each patch, \(A=\) total landscape area & [\%] & Percentage of the landscape belonging to class \(i\). Describes the composition of the landscape. \\
Number of Patches \(NP\) & \(NP=n_{i}\), with \(n_{i}=\) number of patches & - & Number of patches of class \(i\). Describes the fragmentation of the landscape. \\
Largest Patch Index \(LPI\) & \(LPI=\frac{\max(a_{ij})}{A}\cdot 100\), with \(\max(a_{ij})=\) area of the largest patch & [\%] & Proportion of the total landscape comprised by the largest patch of class \(i\). Simple measure of dominance. \\
Patch Density \(PD\) & \(PD=\frac{n_{i}}{A}\) & [1/ha] & Density of patches of class \(i\). Describes the fragmentation of the landscape. \\
Mean Patch Area \(AREA_{MN}\) & \(AREA_{MN}=\mathrm{mean}(AREA[patch_{ij}])\), with \(AREA[patch_{ij}]=\) area of each patch & [ha] & Mean area of the patches of class \(i\). \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Landscape metrics used in this study (adapted from [80, 81]).
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c c c c} \\hline \\hline & **2005** & **2006** & **2007** & **2008** & **2009** & **2010** & **2011** & **2012** & **2013** & **2014** & **2015** & **2016** & **2017** & **2018** & **2019** & **Mean** \\\\ \\hline
**Overall** & 0.859 & 0.859 & 0.890 & 0.846 & 0.871 & 0.859 & 0.834 & 0.854 & 0.845 & 0.852 & 0.846 & 0.853 & 0.853 & 0.858 & **0.856** \\\\
**Kappa** & 0.717 & 0.716 & 0.774 & 0.715 & 0.696 & 0.737 & 0.714 & 0.672 & 0.706 & 0.691 & 0.702 & 0.691 & 0.701 & 0.702 & 0.709 & **0.710** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Overall accuracies and Kappa values for classifications from 2005 to 2019.
in the GV_Shade_ fraction. _Non-Forest_ pixels show the lowest mean GV_Shade_ value (mean = 30.5%), followed by _Degradation_ (mean = 77.1%) and _Forest_ (mean = 88.3%).
As explained in Section 2.4.1, degraded forests consist of mixed pixels, which include both soil and nonphotosynthetic vegetation in addition to green vegetation. The differences between the three classes presented in Figure 7 confirm the assumption that degraded forest areas can be categorized as a separate class, since the NPV fraction lies between the values of _Non-Forest_ (high NPV fraction) and _Forest_ (low NPV fraction) pixels, although the transitions are smooth. This also shows that pixels classified as degraded cannot be assigned to _Non-Forest_: the value ranges of the Soil fractions differ considerably (_Degradation_: \(0\leq\text{Soil}\leq 5\); _Non-Forest_: \(0\leq\text{Soil}\leq 53\)) (Table 3). Furthermore, the mean GV\({}_{Shade}\) values of _Degradation_ pixels are closer to those of _Forest_ than of _Non-Forest_, indicating that degraded areas still contain significantly more green vegetation than nonforest areas. Nevertheless, the GV\({}_{Shade}\) fraction of _Degradation_ pixels is always below 84% (77% on average) and thus differs from that of _Forest_ pixels. At the same time, the NPV fraction of the _Degradation_ sample points is on average higher than that of _Forest_ pixels, which together also indicates a separation of the classes _Forest_ and _Degradation_.
### Land Cover Change
Using the classifications derived from the DTC, class distributions between 1986 and 2020 can be determined. Exemplary spatial visualization of land cover for the years 1986, 1997, 2008, and 2020 is given in Figure 8. In 1986, the forest cover in Mato Grosso comprises 48.7% and is reduced to 34.7% in 2020, representing a loss of more than 126,000 km\\({}^{2}\\) over the entire analysis period. With increasing time, the southern forest boundary of the Amazon
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Class** & **Fraction** & **Minimum (\%)** & **Mean (\%)** & **Maximum (\%)** & **Standard Deviation** \\ \hline
Non-Forest & NPV & 0 & 11.2 & 20 & 4.4 \\
 & GV\({}_{Shade}\) & 0 & 30.5 & 76 & 20.2 \\
 & Soil & 0 & 18.6 & 53 & 10.3 \\
Degradation & NPV & 2 & 6.4 & 10 & 1.8 \\
 & GV\({}_{Shade}\) & 70 & 77.1 & 83 & 3.0 \\
 & Soil & 0 & 1.6 & 5 & 1.9 \\
Forest & NPV & 0 & 3.6 & 7 & 1.6 \\
 & GV\({}_{Shade}\) & 75 & 88.3 & 100 & 9.3 \\
 & Soil & 0 & 0.2 & 3 & 0.5 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Class statistics used within the plausibility analysis, extracted from 494 randomly selected pixels.
Figure 7: Fraction image composition of _Forest_, _Degradation_, and _Non-Forest_ pixels, plotted in a ternary diagram. Vertices represent fractions of endmember NPV, GV_Shade_, and Soil, where in (**a**) the entire fractions of endmembers and in (**b**) a zoomed-in section of endmembers NPV (0–20%), GV_Shade_ (70–100%), and Soil (0–20%) are shown.
shifts northward until nonforest cover clearly predominates in 2020. Forest cover is also increasingly disappearing in the Pantanal. Comparing the forest cover from 1986 to 2020, it becomes clear that the remaining forest areas are mostly located in protected sites.
Areas classified as _Non-Forest_ behave contrary to the _Forest_ class. At the beginning of the time series, the _Non-Forest_ share was 38.2% (1986) and reached 60.0% in 2020. The first time that the _Non-Forest_ area is larger than the _Forest_ area is in 1997, while the year before the shares were at similar levels with 43.7% _Non-Forest_ and 44.3% _Forest_, respectively.
Despite increasing deforestation, the proportion of degraded areas remains almost constant (4.5% on average), with the lowest share of 2.9% in 2006 and the highest share of 6.0% in 1997. Degraded areas of the years shown have mostly disappeared in the subsequently visualized classification and have mainly transitioned into the class _Non-Forest_.
Derived from the classes _Non-Forest_, _Degradation_, and _Forest_ of the DTC, the tree cover changes per year and the resulting annual loss or gain of tree coverage, calculated from the previous year's values, are visualized in Figure 9. Of the 31 years analyzed, 19 experienced a negative trend in which more tree cover is lost than gained (black line). Tree cover loss and gain fluctuate periodically in a two- to three-year cycle. Especially the years 1992, 2009, 2012, and 2019 experienced a strong loss of tree cover, between 23,000 km\({}^{2}\) and 32,000 km\({}^{2}\) compared to the previous year. In contrast, in 1988, 2011, and 2013, the gain in tree cover was highest compared to the other years, between 23,000 km\({}^{2}\) and 34,000 km\({}^{2}\).
The individual change classes provide deeper information on changes in tree cover. In general, the gain in tree cover shows a negative trend in recent years, but the loss shows a positive tendency and thus more tree cover is disappearing.
Gain in tree cover due to densification of degraded forest areas (_Reforestation_) has the smallest positive proportion in all years. Further positive tree cover changes, visualized as _Afforestation_ and _Non-Forest_--_Degradation_ classes, often show similar tree cover gains. The largest increase in trees is experienced in the year 2013 with a total of around 48,000 km\\({}^{2}\\).
Per year, tree cover reductions are relatively evenly distributed among the three loss classes. The years 2012 and 2019 show a particularly high loss of tree cover in all three loss classes, with about 59,000 km\({}^{2}\) and 48,000 km\({}^{2}\), respectively. The total amount of deforested area, which was smallest in 2007 with about 1800 km\({}^{2}\) (about 0.2% of the state area
Figure 8: Land cover classifications of the years 1986, 1997, 2008, and 2020 (data basis: [47; 54; 55]).
of Mato Grosso), increased to its maximum in 2012 with almost 21,000 km\({}^{2}\) (2.3% of the total area of Mato Grosso). The 70% increase in deforestation in 2019 compared to 2018 is particularly noticeable. Annual rates of areas affected by forest degradation are higher than the deforestation rates for the entire period, except for 1995 and 2012, when the deforestation rates were 2% and 23% greater, respectively. Forest changes by degradation were highest with about 18,000 km\({}^{2}\) in 2009 and lowest in 2007 with about 2000 km\({}^{2}\).
On the basis of annually deforested areas, temporal intensities of deforestation can also be mapped. Figure 10a shows this as an example for an area in the Amazon. The four sections show deforested areas, grouped into decades, with the last classification of each analysis period shown underneath. It is visible that forest cover was largely intact at the beginning of the analysis period. Over time, predominantly small areas adjacent to already cleared land are deforested, often with additional deforestation along road axes. In 2020, most of the forest has been cleared and nonforest areas dominate.
Figure 10b illustrates the year of deforestation for a selected part of the area in Figure 10a. Cleared areas visualized in grey cannot be assigned to a specific year but rather to the interval between 2000 and 2002, due to the sensor error in 2000 and 2001. The areas in the Southeast show that forest there was cleared in the 1990s, now resulting in one large unwooded patch. Areas for the creation of roads were also cleared in different years. Following the new road course of 2004 in the eastern part, newly cleared areas of different years in the 2010s adjoin both sides of the road.
The same site from Figure 10b is also shown in Figure 10c. This map refers to the degradation frequency over the period from 1986 to 2020. Degraded areas of the same frequency often occur in patterns, for example, along logged roads as well as on later or previously cleared areas. It is noticeable that they reflect the spatial boundaries and patterns of the deforested areas. The area to the East (turquoise), which was degraded at least twice, is as densely degraded as, and has similar dimensions to, the area deforested in 2013 in Figure 10b. The same can be observed for the large northeastern area: it is degraded with the same patchiness as the corresponding deforested site in Figure 10b.
### Forest Fragmentation
The results of the five processed landscape metrics for the municipality of Colniza are shown in Figure 11. This plot includes a combination of the distribution of the calculated landscape metric values and the change of these over time.
At the beginning of the time series, the forest area comprises nearly 100% (PLAND index). The forest area decreases over time, although it remains above 75% in 2020. Conversely, the PLAND index of the nonforested areas increases over time.
Figure 9: Tree cover changes per year (colored bars) and resulting yearly loss or gain of tree coverage (black line).
The value distribution of the number of forest patches (NP) varies to a greater extent. More and more forest fragments are created over time. Until 1999, this number remains below 10,000; afterward, it increases and doubles to about 20,000 in 2004. With approximately 34,300 patches, the peak is reached in 2016. Similarly, the number of nonforested patches increases over the years.
The largest forest patch (LPI) in Colniza remains stable, containing over 94% of the total forest area, although a slight downward trend is visible through the years. The LPI of the nonforested areas differs strongly from that of the forested areas: until 2006, values remain below 25%, mostly even below 15%, before increasing up to 50%.
Patch density (PD) in forested areas increases over the years but remains between 0.1 and 1.8 patches/ha throughout the analysis period, with values mostly below 1 patch/ha. In contrast, patch density in nonforested areas decreases steadily from an initial rate of almost 28 patches/ha to about 5 patches/ha in 2020. From 2006, patch density already averages about 6 patches/ha.
The results of AREA\\({}_{MN}\\) in forested areas change significantly. Forest fragments are largest at the beginning of the analysis period (1049 ha in 1988) and decrease continuously until 1999 (468 ha). A subsequent abrupt loss of area per patch to below 250 ha can be observed, reaching its smallest area of 61 ha in 2016.
Hence, it can be summarized that the forest area and largest forest patch are not only decreasing over time, but due to the increasing NP and PD and decreasing AREA\\({}_{MN}\\), the forested landscape becomes more and more fragmented.
Figure 10: Annual deforestation and forest degradation from 1987 to 2020: (**a**) mapping of deforestation within four decades; (**b**) annual deforestation visualized by year; (**c**) forest degradation frequencies (data basis: [54]. Service Layer Credits: Earthstar Geographics).
Subsequent to these statistical observations, further insights into fragmentation patterns can be obtained. In the first couple of years, the percentage of the largest forested patch in the municipality of Colniza changes only slightly, suggesting that contiguous forest areas are not completely separated by deforestation but only incised. This assumption is also supported by the increasing LPI of unforested patches. Consequently, the largest unwooded patch has probably grown over the years as smaller patches were connected by logging of the intervening forested areas. Similarly, it is apparent that a break point occurs in 2002. Not only does the average forested patch area decrease by more than 50% from this point on, but the number of forest patches almost triples compared to the previously computed year. As the NP of forested land increases, so does the number of patches of nonforested land. A reason could be that the more forest is cleared, the more nonforested patches are created, thereby increasing the number of unconnected forest patches.
Changing logging patterns as of 2002 may be one reason for this progression, as shown in Figure 12. Until 1999, the predominant logging pattern is marked by fishbone fragmentation. This logging technique is used especially in the eastern part of the municipality (see Section 2.1). At this time, forest patches were logged only sporadically on a small scale, and new cutting is predominantly adjacent to existing roads (Figure 12b). As a result, few new patches are created, most of the forest area remains contiguous, and the total forest area decreases only slightly. Due to this method, the PD of deforested areas per hectare is high. However, starting in 2002, many large rectangular, widely distributed, and noncontiguous patches were increasingly cleared in the West of the municipality as well, which may explain the high forest loss and the decrease in patch density of nonforested areas in Colniza starting that year. This new logging pattern leads to the construction of more and more new roads in the West (see Figure 12a, new road since 2002 in the southwestern part of the map) through previously contiguous forest or protected areas, creating new forested and unforested patches. In addition, the rate of logging in the East of the region increases tremendously in the original fishbone fragmentation area (Figure 12b), which in turn causes many small patches, increases the number of forested/unforested patches, and decreases the average forest patch area. Furthermore, the initially comparatively small unwooded patches are enlarged by clearing adjacent forest.
Figure 11: Line plot of the computed landscape metrics of the forest and nonforest mask for the municipality of Colniza between 1986 and 2020.
## 4 Discussion
### Spectral Mixture Analysis and Fraction Image Classification
The main objective of this study was the application of a classification methodology for identifying forest and degraded forest areas in order to draw conclusions on temporal and spatial forest cover changes between 1987 and 2020 in Mato Grosso using a subpixel analysis. In general, the demarcation of degraded forest from forest and nonforest is difficult due to the combination of different land cover types, such as vegetation, soil, or dead wood, which causes a mixed pixel problem [21]. This challenge could be addressed through the SMA, where pixels are divided into individual fractions of previously specified endmembers, which allows them to be assigned to a certain class using a DTC. The success of an SMA highly depends on the accuracy of the spectral endmember selection, as the endmembers must reflect all ground materials [30]. Endmembers must be chosen in such a way that all ground entities are covered but spectrally differ from each other [83]. The accuracy of an SMA therefore depends on both the within-class variability and the between-class variability of the selected endmembers [84]. The higher the within-class variability, i.e., the variability within an endmember class, the lower the accuracy [84; 85]. Likewise, spectrally similar endmembers lead to low between-class variability and to false results [83; 84]. Since Imazon (2019) [67] had already defined meaningful endmembers through their expert knowledge within the MapBiomas project for the Amazon, these could be used for this work, and meaningful results could be derived.
In order to classify the individual pixels with the help of fraction images, thresholds were defined per class. Previous studies showed that thresholds used in DTCs can be applied locally to land cover classifications but cannot be adopted in other areas without taking the different vegetation types present there into account [86]. The thresholds of Souza and Siqueira (2013) [28] and Souza et al. (2013) [22], who carried out an SMA with a subsequent DTC in the same study area, could therefore be adopted, but they cannot be regarded as general rules. The results of the fraction image classification show a mean overall accuracy of 85.6% and a mean producer's accuracy of 90.3% in mapping forest areas between 2005 and 2019 in Mato Grosso, which corresponds well to studies attempting to classify land cover structures for the same study region [27, 28, 87]. Given that the minimum mapping unit of PRODES, the reference dataset used for the accuracy assessment, is 6.25 ha, smaller land cover features are not considered in this dataset. This contributes to partially inaccurate results in the accuracy assessment, as small-scale land cover changes mapped in our study could not be matched and were therefore recorded as misclassified. A threshold-based decision rule of this kind is sketched below.
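The structure of such a fraction image classification can be expressed as a few nested threshold rules. The sketch below is purely illustrative: the threshold values are hypothetical placeholders and not the ones adopted from Souza and Siqueira (2013) [28].

```python
def classify_pixel(frac: dict) -> str:
    """Toy decision tree on per-pixel endmember fractions.
    All threshold values are illustrative placeholders."""
    if frac["vegetation"] >= 0.85 and frac["soil"] < 0.10:
        return "forest"
    if frac["vegetation"] >= 0.60:
        return "degraded forest"
    return "nonforest"

print(classify_pixel({"vegetation": 0.7, "soil": 0.2, "shade": 0.1}))
```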
Further errors within the accuracy assessment may result from the different time periods of the prediction and reference datasets. In addition, secondary forest regrowth and degraded forest areas are not recorded in PRODES [88]. Consequently, all areas classified as degraded in this study are represented with an accuracy of 0%. Likewise, woodlands in the Cerrado biome are mostly classified as nonforest [88]. It can thus be presumed that the accuracy of the classifications may be even higher than determined by the accuracy assessment. As
Figure 12: Deforestation patterns in Colniza: (**a**) newly developed roads and rectangular cleared patterns in the West as of 2002; (**b**) predominant fishbone patterns in the East (data basis: [47, 52]).
Tyukavina et al. [88] also describe, the PRODES classifications are not spatially accurate and required manual improvements. Visual inspection also revealed misclassifications of water in this dataset. For example, the Colíder Hydropower Plant is not recorded until 2016 and only the original course of the river is visible (Figure 13a). Therefore, PRODES classifications should be examined more closely in analyses involving deforestation rates in the Brazilian rainforest.
The calculated landscape metrics can also be used to show the impacts of different deforestation forms. The fishbone logging pattern in Colniza is especially prominent in the decreasing AREA\({}_{MN}\) of forest patches and the PD of cleared areas. To support the assumption that different deforestation patterns lead to different values of certain landscape metrics, more areas with dominant deforestation patterns would need to be analyzed to obtain representative results. A sketch of such a metrics computation is given below.
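As an illustration of how such metrics can be derived, the following sketch uses the open-source PyLandStats library [80] on a binary forest/nonforest raster. The input file name is hypothetical, and the metric names are assumed to follow the PyLandStats naming scheme.

```python
import numpy as np
import pylandstats as pls

# binary raster: 1 = forest, 2 = nonforest; 30 m Landsat resolution
mask = np.load("forest_mask_colniza_2002.npy")  # hypothetical input file
ls = pls.Landscape(mask, res=(30, 30), nodata=0)

# NP, PD, LPI, and mean patch area per class, as used in Figure 11
metrics = ls.compute_class_metrics_df(
    metrics=["number_of_patches", "patch_density",
             "largest_patch_index", "area_mn"])
print(metrics)
```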
The data basis is also essential. Due to the Landsat sensor resolution of 30 m, some small-scale changes, such as forest aisles or patches smaller than 90 m\({}^{2}\), cannot be recorded, which can certainly distort the result. For example, patches that are perceived as separate in the classification might in reality be connected by a narrow forest strip, so the actual number of patches would be smaller. Therefore, a dataset with a higher spatial resolution could lead to different results. It should also be mentioned that, due to the only partially eliminated cloud cover, cloudy forest or nonforest areas could not be considered in this analysis. If a forest area is partially covered with clouds in one year, the extracted forest mask delivers distorted results; for example, more patches or a smaller LPI are detected, although the areas are actually connected. If in the following year this forest patch is free of clouds, the area is included in the analysis and reflects the actual situation. While an in situ field survey would provide the most accurate data for determining individual patches and their potential connectivity, it is nearly impossible to realize for such a large study area.
Apart from the methodical identification of forest fragmentation, it would certainly be interesting to determine its consequences for flora and fauna and to establish possible actions. Since forest edges represent habitats with different characteristics than forest interiors [90], the identification of edge and core forest could help determine consequences for population dynamics or community structures, such as the increased predation on bird nests at forest edges [91].
In fact, there is disagreement among scientists about whether habitat fragmentation is good or bad for biodiversity (e.g., [35, 92]), making the results of the fragmentation
Figure 13: Comparison of PRODES and SMA classifications: (**a**) undetected Colíder Hydropower Plant by PRODES in 2016; (**b**) unmapped road courses in the PRODES dataset of 2006; (**c**) differences in the classification of degraded forest areas between the SMA and PRODES classification in 2019 (data from [54, 55, 89], Landsat 5 TM and Landsat 8 OLI/TIRS images from 18 July 2006, 15 June 2017 and 26 July 2019).
analysis from this work difficult to interpret. More studies would certainly have to be carried out, considering various factors. It can only be stated that large contiguous habitat patches should be preserved if fragmentation has predominantly negative effects on species diversity. If fragmentation has predominantly positive effects on species, then the focus should be on the preservation of a large number of small patches. In case fragmentation has neither negative nor positive effects on species diversity, the primary objective should be to protect all habitats, regardless of their size or distribution [92]. However, as shown in Section 3.3, it is not necessarily a matter of determining the best logging practice, but of the rapidity and intensity with which forests are disappearing and how this will affect any species in the future [39].
## 5 Conclusions and Outlook
This study analyzes the temporal and spatial effects of forest loss in Mato Grosso over 34 years. The availability of time series data from the Landsat legacy with high temporal and spatial resolution offers a cost-effective and solid basis for the analysis of interannual forest change. With this dataset, the complete analysis period in Mato Grosso could be covered. Additional cloud filters also help to reduce most of the cloud cover and thus improve the data basis. Nevertheless, the given resolution of 30 m can also be a limiting factor, since small-scale changes cannot be detected.
The fraction image classification provides a good technique to distinguish forest from nonforest and also degraded forest in the Amazon region. For the transferability of the used endmembers to other regions of interest, the ground materials must show the same spectral characteristics of the study area described here. Based on the classification, different processes of temporal forest cover change can be identified. In the majority of the analyzed years, forest loss is greater than gain, with irregular variation between years. To mitigate the resulting impacts in the future, it is important to continue the close monitoring of changes in the rainforest and to take further mandatory policy decisions, not least to achieve the goal of zero illegal deforestation in the Amazon by 2030 [93].
In order to measure altered forest structures in the community of Colniza, the conducted fragmentation analysis shows that different logging patterns result in different forms of fragmentation which are measured by landscape metrics. In general, the fragmentation of forest areas increases, resulting in changing habitats for flora and fauna. These results can help to better observe and analyze forested areas and to take appropriate decisions to protect this fragile ecosystem.
**Author Contributions:** Conceptualization, M.H.; formal analysis, M.H.; methodology, M.H.; supervision, M.W. and E.d.P.; validation, M.H.; visualization, M.H.; writing--original draft, M.H.; writing--review and editing, M.H., M.W. and E.d.P. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research received no external funding.
Not applicable.
**Conflicts of Interest:** The authors declare no conflict of interest.
## References
* (1) Simoes, R.; Picoli, M.C.A.; Camara, G.; Maciel, A.; Santos, L.; Andrade, P.R.; Sanchez, A.; Ferreira, K.; Carvalho, A. Land use and cover maps for Mato Grosso State in Brazil from 2001 to 2017. _Sci. Data_ **2020**, _7_, 34. [CrossRef] [PubMed]
* (2) FAO. _The State of Agricultural Commodity Markets 2018: Agricultural Trade, Climate Change and Food Security_; FAO: Roma, Italy, 2018.
* (3) Picoli, M.C.A.; Rorato, A.; Leitao, P.; Camara, G.; Maciel, A.; Hostert, P.; Sanches, I.D. Impacts of Public and Private Sector Policies on Soybean and Pasture Expansion in Mato Grosso--Brazil from 2001 to 2017. _Land_ **2020**, _9_, 20. [CrossRef]
* (4) Lapola, D.M.; Martinelli, L.A.; Peres, C.A.; Ometto, J.P.H.B.; Ferreira, M.E.; Nobre, C.A.; Aguiar, A.P.D.; Bustamante, M.M.C.; Cardoso, M.F.; Costa, M.H.; et al. Pervasive transition of the Brazilian land-use system. _Nat. Clim. Chang._ **2014**, _4_, 27-35. [CrossRef]
* (5) Beuchle, R.; Grecchi, R.C.; Shimabukuro, Y.E.; Seliger, R.; Eva, H.D.; Sano, E.; Achard, F. Land cover changes in the Brazilian Cerrado and Caatinga biomes from 1990 to 2010 based on a systematic remote sensing sampling approach. _Appl. Geogr._ **2015**, _58_, 116-127. [CrossRef]
* (6) Da Cruz, D.C.; Benayas, J.M.R.; Ferreira, G.C.; Santos, S.R.; Schwartz, G. An overview of forest loss and restoration in the Brazilian Amazon. _New For._ **2021**, _52_, 1-16. [CrossRef]
* (7) IBGE. Producao Agricola Municipal--PAM: Series historicas. Available online: [https://www.ibge.gov.br/estatisticas/economics/agricultura-e-pecuaring/9117-producao-agricola-municiipal-culturas-temporarias-e-permanentes.html?=&t=series-histories](https://www.ibge.gov.br/estatisticas/economics/agricultura-e-pecuaring/9117-producao-agricola-municiipal-culturas-temporarias-e-permanentes.html?=&t=series-histories) (accessed on 10 March 2021).
* (8) Broadbent, E.N.; Asner, G.P.; Keller, M.; Knapp, D.E.; Oliveira, P.J.; Silva, J.N. Forest fragmentation and edge effects from deforestation and selective logging in the Brazilian Amazon. _Biol. Conserv._ **2008**, _141_, 1745-1757. [CrossRef]
* (9) De La Vega-Leinert, A.C.; Huber, C. The Down Side of Cross-Border Integration: The Case of Deforestation in the Brazilian Mato Grosso and Bolivian Santa Cruz Lowlands. _Environ. Sci. Policy Sustain. Dev._ **2019**, _61_, 31-44. [CrossRef]
* (10) Bullock, E.L.; Woodcock, C.E.; Souza, C.; Olofsson, P. Satellite-based estimates reveal widespread forest degradation in the Amazon. _Global Chang. Biol._ **2020**, _26_, 2956-2969. [CrossRef]
* (11) Nogueira, E.M.; Yanai, A.M.; de Vasconcelos, S.S.; de Alencastro Graca, P.M.L.; Fearnside, P.M. Brazil's Amazonian protected areas as a bulwark against regional climate change. _Reg. Environ. Chang._ **2018**, _18_, 573-579. [CrossRef]
* (12) Cochrane, M.A. Fire science for rainforests. _Nature_ **2003**, _421_, 913-919. [CrossRef]
* (13) Silverio, D.V.; Brando, P.M.; Bustamante, M.M.C.; Putz, F.E.; Marra, D.M.; Levick, S.R.; Trumbore, S.E. Fire, fragmentation, and windstorms: A recipe for tropical forest degradation. _J. Ecol._ **2019**, _107_, 656-667. [CrossRef]
* (14) Bustamante, M.M.C.; Roitman, I.; Aide, T.M.; Alencar, A.; Anderson, L.O.; Aragao, L.; Asner, G.P.; Barlow, J.; Berenguer, E.; Chambers, J.; et al. Toward an integrated monitoring framework to assess the effects of tropical forest degradation and recovery on carbon stocks and biodiversity. _Global Chang. Biol._ **2016**, _22_, 92-109. [CrossRef] [PubMed]
* (15) Dupont, S.; Brunet, Y. Impact of forest edge shape on tree stability: A large-eddy simulation study. _Forestry_ **2008**, _81_, 299-315. [CrossRef]
* (16) Lovejoy, T.E.; Nobre, C. Amazon Tipping Point. _Sci. Adv._ **2018**, _4_, eaat2340. [CrossRef] [PubMed]
* (17) DeWalt, S.J.; Maliakal, S.K.; Denslow, J.S. Changes in vegetation structure and composition along a tropical forest chronosequence: Implications for wildlife. _For. Ecol. Manag._ **2003**, _182_, 139-151. [CrossRef]
* (18) Laurance, W.F.; Lovejoy, T.E.; Vasconcelos, H.L.; Bruna, E.M.; Didham, R.K.; Stouffer, P.C.; Gascon, C.; Bierregaard, R.O.; Laurance, S.G.; Sampaio, E. Ecosystem Decay of Amazonian Forest Fragments: A 22-Year Investigation. _Conserv. Biol._ **2002**, _16_, 605-618. [CrossRef]
* (19) Baccini, A.; Goetz, S.J.; Walker, W.S.; Laporte, N.T.; Sun, M.; Sulla-Menashe, D.; Hackler, J.; Beck, P.S.A.; Dubayah, R.; Friedl, M.A.; et al. Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps. _Nat. Clim. Chang._ **2012**, _2_, 182-185. [CrossRef]
* (20) Esquivel-Muelbert, A.; Baker, T.R.; Dexter, K.G.; Lewis, S.L.; Brienen, R.J.W.; Feldpausch, T.R.; Lloyd, J.; Monteagudo-Mendoza, A.; Arroyo, L.; Alvarez-Davila, E.; et al. Compositional response of Amazon forests to climate change. _Global Chang. Biol._ **2019**, _25_, 39-56. [CrossRef]
* (21) Achard, F.; Boschetti, L.; Brown, S.; Brady, M.; DeFries, R.; Grassi, G.; Herold, M.; Mollicone, D.; Mora, B.; Pandey, D.; et al. _A Sourcebook of Methods and Procedures for Monitoring and Reporting Anthropogenic Greenhouse Gas Emissions and Removals Associated with Deforestation, Gains and Losses of Carbon Stocks in Forests Remaining Forests, and Forestation_; Wageningen University: Wageningen, The Netherlands, 2014.
* (22) Souza, C.M.; Siqueira, J.; Sales, M.; Fonseca, A.; Ribeiro, J.; Numata, I.; Cochrane, M.; Barber, C.; Roberts, D.; Barlow, J. Ten-Year Landsat Classification of Deforestation and Forest Degradation in the Brazilian Amazon. _Remote Sens._ **2013**, _5_, 5493-5513. [CrossRef]
* (23) Souza, C.M.; Roberts, D.A.; Cochrane, M.A. Combining spectral and spatial information to map canopy damage from selective logging and forest fires. _Remote Sens. Environ._ **2005**, _98_, 329-343. [CrossRef]
* (24) Daldegan, G.A.; Roberts, D.A.; Ribeiro, F.d.F. Spectral mixture analysis in Google Earth Engine to model and delineate fire scars over a large extent and a long time-series in a rainforest-savanna transition zone. _Remote Sens. Environ._ **2019**, _232_, 111340. [CrossRef]
* (25) Torres, D.L.; Turnes, J.N.; Soto Vega, P.J.; Feitosa, R.Q.; Silva, D.E.; Marcato Junior, J.; Almeida, C. Deforestation Detection with Fully Convolutional Networks in the Amazon Forest from Landsat-8 and Sentinel-2 Images. _Remote Sens._ **2021**, _13_. [CrossRef]
* (26) Matosak, B.M.; Fonseca, L.M.G.; Taquary, E.C.; Maretto, R.V.; Bendini, H.d.N.; Adami, M. Mapping Deforestation in Cerrado Based on Hybrid Deep Learning Architecture and Medium Spatial Resolution Satellite Time Series. _Remote Sens._ **2022**, _14_. [CrossRef]
* (27) Souza, C.M. Mapping forest degradation in the Eastern Amazon from SPOT 4 through spectral mixture models. _Remote Sens. Environ._ **2003**, _87_, 494-506. [CrossRef]
* (28) Souza, C.M.; Siqueira, J. _ImgTools: A Software for Optical Remotely Sensed Data Analysis_; Anais XVI Simposio Brasileiro de Sensoriamento Remoto (SBSR): Foz do Iguacu, Brasil, 2013; pp. 1571-1578.
* (29) Betbeder, J.; Arvor, D.; Blanc, L.; Cornu, G.; Bourgoin, C.; Le Roux, R.; Mercier, A.; Sist, P.; Lucas, M.; Brenez, C.; et al. Assessing the Causes of Tropical Forest Degradation Using Landsat Time Series: A Case Study in the Brazilian Amazon. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11-16 July 2021; pp. 1015-1018. [CrossRef]
* (30) Powell, R.; Roberts, D.; Dennison, P.; Hess, L. Sub-pixel mapping of urban land cover using multiple endmember spectral mixture analysis: Manaus, Brazil. _Remote Sens. Environ._**2007**, _106_, 253-267. [CrossRef]
* (31) Tapakova, L.; Stejskalova, D.; Karasek, P.; Podhrazska, J. Landscape Metrics as a Tool for Evaluation Landscape Structure--Case Study Hustopeče. _Eur. Countrys._ **2013**, _5_, 52-70. [CrossRef]
* (32) Hermosilla, T.; Wulder, M.A.; White, J.C.; Coops, N.C.; Pickell, P.D.; Bolton, D.K. Impact of time on interpretations of forest fragmentation: Three-decades of fragmentation dynamics over Canada. _Remote Sens. Environ._**2019**, _222_, 65-77. [CrossRef]
* (33) Chaplin-Kramer, R.; Ramler, I.; Sharp, R.; Haddad, N.M.; Gerber, J.S.; West, P.C.; Mandle, L.; Engstrom, P.; Baccini, A.; Sim, S.; et al. Degradation in carbon stocks near tropical forest edges. _Nat. Commun._**2015**, \\(6\\), 10158. [CrossRef]
* (34) Laurance, W.F.; Nascimento, H.E.M.; Laurance, S.G.; Andrade, A.; Ribeiro, J.E.L.S.; Giraldo, J.P.; Lovejoy, T.E.; Condit, R.; Chave, J.; Harms, K.E.; et al. Rapid decay of tree-community composition in Amazonian forest fragments. _Proc. Natl. Acad. Sci. USA_**2006**, _103_, 19010-19014. [CrossRef]
* (35) Fletcher, R.J.; Didham, R.K.; Banks-Leite, C.; Barlow, J.; Ewers, R.M.; Rosindell, J.; Holt, R.D.; Gonzalez, A.; Pardini, R.; Damschen, E.I. Is habitat fragmentation good for biodiversity? _Biol. Conserv._**2018**, _226_, 9-15. [CrossRef]
* (36) Diaz, J.A.; Carbonell, R.; Virgos, E.; Santos, T.; Telleria, J.L. Effects of forest fragmentation on the distribution of the lizard Psammodromus algirus. _Anim. Conserv._ **2000**, _3_, 235-240. [CrossRef]
* (37) Debinski, D.M.; Holt, R.D. A Survey and Overview of Habitat Fragmentation Experiments. _Conserv. Biol._**2000**, _14_, 342-355. [CrossRef]
* (38) Peh, K.S.H.; Lin, Y.; Luke, S.H.; Foster, W.A.; Turner, E.C. Forest Fragmentation and Ecosystem Function. In _Global Forest Fragmentation_; Kettle, C.J., Koh, L.P., Eds.; CABI: Wallingford, UK, 2014; pp. 96-114. [CrossRef]
* (39) De Filho, F.J.B.O.; Metzger, J.P. Thresholds in landscape structure for three common deforestation patterns in the Brazilian Amazon. _Landsc. Ecol._**2006**, _21_, 1061-1073. [CrossRef]
* (40) Ferreira, I.J.M.; Ferreira, J.H.D.; Bueno, P.A.A.; Vieira, L.M.; Bueno, R.d.O.; do Couto, E.V. Spatial dimension landscape metrics of Atlantic Forest remnants in Parana State, Brazil. _Acta Sci. Technol._**2018**, \\(1\\), e36503. [CrossRef]
* (41) Slattery, Z.; Fenner, R. Spatial Analysis of the Drivers, Characteristics, and Effects of Forest Fragmentation. _Sustainability_**2021**, _13_, 3246. [CrossRef]
* (42) Arvor, D.; Meirelles, M.; Dubreuil, V.; Begue, A.; Shimabukuro, YE. Analyzing the agricultural transition in Mato Grosso, Brazil, using satellite-derived indices. _Appl. Geogr._**2012**, _32_, 702-713. [CrossRef]
* (43) Spera, S.A.; Cohn, A.S.; VanWey, L.K.; Mustard, J.F.; Rudorff, B.F.; Risso, J.; Adami, M. Recent cropping frequency, expansion, and abandonment in Mato Grosso, Brazil had selective land characteristics. _Environ. Res. Lett._ **2014**, _9_, 064010. [CrossRef]
* (44) EMBRAPA. Codigo Florestial: Glossario. Available online: [https://www.embrapa.br/codigo-florestal/entenda-o-codigo-florestal/glossario](https://www.embrapa.br/codigo-florestal/entenda-o-codigo-florestal/glossario) (accessed on 6 May 2021).
* (45) Arvor, D.; Jonathan, M.; Meirelles, M.S.P.; Dubreuil, V.; Durieux, L. Classification of MODIS EVI time series for crop mapping in the state of Mato Grosso, Brazil. _Int. J. Remote Sens._**2011**, _32_, 7847-7871. [CrossRef]
* (46) Danielson, J.J.; Gesch, D.B. _Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010)_; US Department of the Interior, US Geological Survey: Washington, DC, USA, 2011. [CrossRef]
* (47) INTERMAT. Bases Cartograficas. Available online: [http://www.intermat.mt.gov.br/~/11303036-bases-cartograficas](http://www.intermat.mt.gov.br/~/11303036-bases-cartograficas) (accessed on 12 January 2021).
* (48) Kastens, J.H.; Brown, J.C.; Coutinho, A.C.; Bishop, C.R.; Esquedo, J.C.D.M. Soy moratorium impacts on soybean and deforestation dynamics in Mato Grosso, Brazil. _PLoS ONE_**2017**, _12_, e0176168. [CrossRef]
* (49) Aragao, L.E.O.C.; Malhi, Y.; Barbier, N.; Lima, A.; Shimabukuro, Y.; Anderson, L.; Saatchi, S. Interactions between rainfall, deforestation and fires during recent years in the Brazilian Amazonia. _Phil. Trans. R. Soc. B_ **2008**, _363_, 1779-1785. [CrossRef]
* (50) Arvor, D.; Dubreuil, V.; Ronchail, J.; Simoes, M.; Funatsu, B.M. Spatial patterns of rainfall regimes related to levels of double cropping agriculture systems in Mato Grosso (Brazil). _Int. J. Climatol._**2014**, _34_, 2622-2633. [CrossRef]
* (51) Fick, S.E.; Hijmans, R.J. WorldClim 2: New 1-km spatial resolution climate surfaces for global land areas. _Int. J. Climatol._**2017**, \\(3\\), 4302-4315. [CrossRef]
* (52) OpenStreetMap Contributors. Planet Dump Retrieved from [https://planet.osm.org](https://planet.osm.org). Available online: [https://www.openstreetmap.org](https://www.openstreetmap.org) (accessed on 6 May 2021).
* (53) World Bank. Brazil--Road Network (Federal and State Highways). Available online: [https://datacatalog.worldbank.org/dataset/brazil-road-network-federal-and-state-highways](https://datacatalog.worldbank.org/dataset/brazil-road-network-federal-and-state-highways) (accessed on 6 May 2021).
* (54) IBGE. State Boundary: Mato Grosso, Brasil. Available online: [https://maps.princeton.edu/catalog/stanford-mt656s47052](https://maps.princeton.edu/catalog/stanford-mt656s47052) (accessed on 6 May 2021).
* (55) Global Forest Watch. Brazil Biomes. Available online: [https://data.globalforestwatch.org/datasets/54ec099791644be4b273d9d8a853452_4/explore?showTable=true](https://data.globalforestwatch.org/datasets/54ec099791644be4b273d9d8a853452_4/explore?showTable=true) (accessed on 6 May 2021).
* (56) IBGE. Urbanized Areas. Available online: [https://www.ibge.gov.br/en/geosciences/full-list-geosciences/18097-urbanized-areas.html?=&t=downloads](https://www.ibge.gov.br/en/geosciences/full-list-geosciences/18097-urbanized-areas.html?=&t=downloads) (accessed on 06 May 2021).
* (57) IBGE. Conheca Cidades e Estados do Brasil. Available online: [https://cidades.ibge.gov.br/](https://cidades.ibge.gov.br/) (accessed on 6 May 2021).
* (58) Ferreira, D.A.C.; Filho, A.C. Modelagem do desmatamento no municipio de Colniza--MT. Anais XIII Simposio Brasileiro de Sensoriamento Remoto; INPE, Ed.; Instituto Nacional de Pesquisas Espaciais (INPE): Sao Jose dos Campos, Brazil, 2007; pp. 2565-2572.
* (59) USGS. _Landsat 8 (L8): Data Users Handbook_; USGS: Reston, VA, USA, 2019.
* (60) Hadi; Krasovskii, A.; Maus, V.; Yowargana, P.; Pietsch, S.; Rautiainen, M. Monitoring Deforestation in Rainforests Using Satellite Data: A Pilot Study from Kalimantan, Indonesia. _Forests_ **2018**, _9_, 389. [CrossRef]
* (61) INPE. _Metodologia Utilizada nos Projetos PRODES e DETER_; Instituto Nacional de Pesquisas Espaciais (INPE): Sao Jose dos Campos, Brazil, 2019.
* (62) Carvalho, L.M.V.; Jones, C.; Liebmann, B. The South Atlantic Convergence Zone: Intensity, Form, Persistence, and Relationships with Intraseasonal to Interannual Activity and Extreme Rainfall. _J. Clim._**2004**, _17_, 88-108. [CrossRef]
* (63) Fonseca, M.G.; Alves, L.M.; Aguiar, A.P.D.; Arai, E.; Anderson, L.O.; Rosa, T.M.; Shimabukuro, Y.E.; de Aragao, L.E.O.E.C. Effects of climate and land-use change scenarios on fire probability during the 21st century in the Brazilian Amazon. _Global Chang. Biol._**2019**, _25_, 2931-2946. [CrossRef] [PubMed]
* (64) Adams, J.B.; Smith, M.O.; Gillespie, A.R. Imaging spectroscopy: Interpretation based on spectral mixture analysis. In _Remote Geochemical Analysis_; Pieters, C., Englert, P., Eds.; Cambridge University Press: Cambridge, UK, 1993; pp. 145-166.
* (65) Keshava, N. A Survey of Spectral Unmixing Algorithms. _Linc. Lab. J._**2003**, _14_, 55-78.
* (66) Pereira, J.; Chuvicco, E.; Beaudoin, A.; Desbois, N. Remote sensing of burned areas: A review. In _A Review of Remote Sensing Methods for the Study of Large Wildland Fires_; Chuvicco, E., Ed.; Universidad de Alcala, Departamento de Geografia: Alcala de Henares, Spain, 1997; pp. 127-183.
* (67) Imazon. _Project MapBiomas--Brazilian Land Cover & Use Map Series_; Instituto do Homem e Meio Ambiente da Amazonia (Imazon): Belem, Brazil, 2019.
* (68) Souza, C.; Oliveira, L.; Fonseca, A.V. Multi-Decadal Annual Land Cover Dynamics and Forest Disturbance in the Brazilian Amazon Biome. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11-16 July 2021, pp. 665-665. [CrossRef]
* (69) Souza, C.; Oliveira, L.; Fonseca, A.; Siqueira, J.V.; Pinheiro, S.; Ribeiro, J.; Ferreira, B.; Ferreira, R.; Sales, M. Project MapBiomas--Amazon Appendix, Collection 6.0, Version 1. Available online: [https://mapbiomas-br-site.s3.amazonaws.com/Metodologia/Amazon_Appendix_ATBD_Collection_6.doc.pdf](https://mapbiomas-br-site.s3.amazonaws.com/Metodologia/Amazon_Appendix_ATBD_Collection_6.doc.pdf) (accessed on 24 March 2022).
* (70) Huete, A.R. Remote Sensing for Environmental Monitoring. In _Environmental Monitoring and Characterization_; Elsevier: Amsterdam, The Netherlands, 2004; pp. 183-206. [CrossRef]
* (71) Dennison, P.E.; Halligan, K.Q.; Roberts, D.A. A comparison of error metrics and constraints for multiple endmember spectral mixture analysis and spectral angle mapper. _Remote Sens. Environ._**2004**, _93_, 359-367. [CrossRef]
* (72) Roberts, D.A.; Gardner, M.; Church, R.; Ustin, S.; Scheer, G.; Green, R.O. Mapping Chaparral in the Santa Monica Mountains Using Multiple Endmember Spectral Mixture Models. _Remote Sens. Environ._**1998**, _65_, 267-279. [CrossRef]
* (73) Souza, C.M.; Roberts, D.; Monteiro, A. Multitemporal Analysis of Degraded Forests in the Southern Brazilian Amazon. _Earth Interact_**2005**, \\(9\\), 1-25. [CrossRef]
* (74) Shimabukuro, Y.E.; Smith, J.A. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data. _IEEE Trans. Geosci. Remote Sens._**1991**, _29_, 16-20. [CrossRef]
* (75) Adams, J.B.; Sabol, D.E.; Kapos, V.; Filho, R.A.; Roberts, D.A.; Smith, M.O.; Gillespie, A.R. Classification of multispectral images based on fractions of endmembers: Application to land-cover change in the Brazilian Amazon. _Remote Sens. Environ._**1995**, _52_, 137-154. [CrossRef]
* (76) Liebsch, D.; Marques, M.C.; Goldenberg, R. How long does the Atlantic Rain Forest take to recover after a disturbance? Changes in species composition and ecological features during secondary succession. _Biol. Conserv._**2008**, _141_, 1717-1725. [CrossRef]
* (77) Rozendaal, D.M.A.; Bongers, F.; Aide, T.M.; Alvarez-Davila, E.; Ascarrunz, N.; Balvanera, P.; Becknell, J.M.; Bentos, T.V.; Brancalion, P.H.S.; Cabral, G.A.L.; et al. Biodiversity recovery of Neotropical secondary forests. _Sci. Adv._ **2019**, _5_, eaau3114. [CrossRef] [PubMed]
* (78) Singh, S.K.; Srivastava, P.K.; Szabo, S.; Petropoulos, G.P.; Gupta, M.; Islam, T. Landscape transform and spatial metrics for mapping spatiotemporal land cover dynamics using Earth Observation data-sets. _Geocarto Int._**2016**, _32_, 113-127. [CrossRef]
* (79) Gokyer, E. Understanding Landscape Structure Using Landscape Metrics. In _Advances in Landscape Architecture_; Ozyavuz, M., Ed.; InTech: London, UK, 2013. [CrossRef]
* (80) Bosch, M. PyLandStats: An open-source Pythonic library to compute landscape metrics. _PLoS ONE_**2019**, _14_, e0225734. [CrossRef]
* (81) McGarigal, K.; Cushman, S.; Ene, E. _FRAGSTATS: Spatial Pattern Analysis Program for Categorical Maps_; USDA: Washington, DC, USA, 2015.
* (82) Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. _Biometrics_**1977**, _33_, 159-174. [CrossRef]
* (83) Me
* (85) Settle, J. On the effect of variable endmember spectra in the linear mixture model. _IEEE Trans. Geosci. Remote Sens._ **2006**, _44_, 389-396. [CrossRef]
* (86) Roberts, D.A.; Batista, G.; Pereira, J.; Waller, E.; Nelson, B. Change Identification Using Multitemporal Spectral Mixture Analysis: Applications in Eastern Amazonia. In _Remote Sensing Change Detection_; Elvidge, C.D., Lunetta, R.S., Eds.; Ann Arbor Press: Chelsea, MI, USA, 1999; pp. 137-158.
* (87) Shimabukuro, Y.E.; Beuchle, R.; Grecchi, R.C.; Achard, F. Assessment of forest degradation in Brazilian Amazon due to selective logging and fires using time series of fraction images derived from Landsat ETM+ images. _Remote Sens. Lett._ **2014**, _5_, 773-782. [CrossRef]
* (88) Tyukavina, A.; Hansen, M.C.; Potapov, P.V.; Stehman, S.V.; Smith-Rodriguez, K.; Okpa, C.; Aguilar, R. Types and rates of forest disturbance in Brazilian Legal Amazon, 2000-2013. _Sci. Adv._ **2017**, _3_, e1601047. [CrossRef] [PubMed]
* (89) INPE. _Monitoramento da Floresta Amazonica Brasileira por Satelite_; Instituto Nacional de Pesquisas Espaciais: Sao Jose dos Campos, Brasil, 2020.
* (90) Lopez Barrera, F.; Armesto, J.J.; Williams-Linera, G.; Smith-Ramirez, C.; Manson, R. Fragmentation and Edge Effects on Plant-Animal Interactions, Ecological Processes and Biodiversity. In _Biodiversity Loss and Conservation in Fragmented Forest Landscapes_; Newton, A.C., Ed.; CABI: Wallingford, UK; Cambridge, MA, USA, 2007; pp. 69-101. [CrossRef]
* (91) Ries, L.; Fletcher, R.J.; Battin, J.; Sisk, T.D. Ecological Responses to Habitat Edges: Mechanisms, Models, and Variability Explained. _Annu. Rev. Ecol. Evol. Syst._ **2004**, _35_, 491-522. [CrossRef]
* (92) Fahrig, L.; Arroyo-Rodriguez, V.; Bennett, J.R.; Boucher-Lalonde, V.; Cazetta, E.; Currie, D.J.; Eigenbrod, F.; Ford, A.T.; Harrison, S.P.; Jaeger, J.A. Is habitat fragmentation bad for biodiversity? _Biol. Conserv._ **2019**, _230_, 179-186. [CrossRef]
* (93) Ministerio das Relacoes Exteriores. _Federative Republic of Brazil: Intended Nationally Determined Contribution Towards Achieving the Objective of the United Nations Framework Convention on Climate Change_; Ministerio das Relacoes Exteriores: Brasilia, Brazil, 2015.
# SugarViT--Multi-objective Regression of UAV Images with Vision Transformers and Deep Label Distribution Learning Demonstrated on Disease Severity Prediction in Sugar Beet
Maurice Gunder, Facundo R. Ispizua Yamati, Abel A. Barreto Alcantara, Anne-Katrin Mahlein, Rafet Sifa, Christian Bauckhage

Institute for Computer Science III, University of Bonn, Friedrich-Hirzebruch-Allee 5, 53115 Bonn, Germany

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Schloss Birlinghoven, 53757 Sankt Augustin, Germany
## 1 Introduction
In precision agriculture, the use of Unmanned Aerial Vehicles (UAVs) equipped with multispectral cameras for monitoring agricultural fields is well-established for various tasks regarding plant phenotyping and health status [5, 9, 34]. Especially in phenotyping for breeding, the main advantages of UAV imagery are the mapping flexibility compared to satellite imagery and the automation and homogenization of laborious, time-consuming visual scoring activities, which usually require many hours of specialized human labor to score large fields. In agriculture, the term "visual scoring" commonly describes a field assessment, such as phenotyping canopy structure or quantifying disease intensity, specifically disease severity (DS) [8]. In data science, a related procedure called "annotation" is used: annotation consists of labeling data elements in order to add semantic information or metadata. Although the terms differ in their apparent applications, they represent an equivalent concept in this paper, and we will use "annotation" as a synonym for "scoring". Theoretically, when large field experiments are conducted, this data is immediately available. However, a lot of human effort is needed to gain information out of large imagery. This is where machine learning comes into play. Rehashed, i.e., repurposed, UAV image data has the potential to serve as training data even for large image-processing deep learning models such as the currently broadly applied transformer-based models [24]. The origin of the transformer architecture lies in the field of language processing, where it has led to large success in recent large language models such as the Generative Pretrained Transformer (GPT) [22]. The basic principle of transformers is the so-called attention mechanism [29]. It enables the model to connect and associate features over large semantic or sequential distances. This is beneficial not only for one-dimensional tasks such as language processing, since the concept can be transferred to higher-dimensional use cases like image processing. In this case, we are dealing with a Vision Transformer (ViT) [11] model.
With the power of transformers, a major drawback appears, namely their low data efficiency. Transformers need lots of data to train. This is why their success currently lies mainly in application fields where large datasets are available, such as text data. However, we will show that such models can also be used for annotation tasks on large-scale agricultural datasets enabled by UAVs. To demonstrate this potential, we focus in this work on a classification task based on single plant images extracted from recorded sugar beet fields according to Gunder et al. [16]. The single sugar beet plant images are annotated with DS estimations of Cercospora Leaf Spot (CLS), a fungal leaf disease causing relevant yield losses in sugar beet production [31]. We aim at a DS prediction modeling task and will motivate a multi-objective approach as well as the use of a deep learning architecture based on a ViT. By identifying the DS prediction as an ordinal classification, we reinterpret the classification as a regression task by using the concept of Deep Label Distribution Learning (DLDL) introduced in [12]. We further optimize the vanilla DLDL approach by an improved loss function that does not need extensive hyperparameter tuning [17]. After training, our model, which we further call SugarViT, is able to predict the disease severity of individual plant images as a probability distribution, which improves training robustness and output interpretability. In the following, we go into the details of SugarViT and the conducted experiments.
## 2 Materials and Methods
### 2.1 Data and Preprocessing
A major challenge that comes with the application and, particularly, the training of Vision Transformer based architectures is the need for large amounts of image data. In principle, the utilization of UAV imaging of crop fields has the potential to yield such large datasets. However, the conditions under which the images are taken can be very diverse, e.g., due to variable weather, lighting, and resolution. Additionally, device-specific properties can come into play when dealing with different camera models or calibration methods. In the context of plant phenotyping, it is particularly desirable to accumulate image data from multiple growing seasons, which implies that all the above-mentioned difficulties can play a role in the accumulation of large-scale datasets. Thus, in order to exploit as much of the data's potential as possible, a preprocessing is needed that is robust against as many confounding factors as possible.
#### 2.1.1 Available Field Data
The dataset we use in this work consists of multispectral reflectance images of single sugar beet plants recorded by UAV systems at 6 different locations near Gottingen, Germany (\(51^{\circ}33^{\prime}\)N \(9^{\circ}53^{\prime}\)E) [16]. The UAV systems are equipped with a multispectral camera recording 5 spectral channels. Those are, sorted by wavelength, _blue_, _green_, _red_, _red edge_, and _near infrared_. Due to the large number of different sensors, sugar beet varieties, resolutions (or ground sampling distances), locations, and time points, the dataset is very diverse. In total, it covers 4 harvesting periods from 2019 to 2022 and comprises 17 different experiments or flight missions. Table 1 gives an overview of all important information of the dataset. Additionally, Table 2 shows the spectral bandwidths of the two camera sensor systems used in this work.
All experiment fields are equipped with weather sensors, allowing for hourly temperature and humidity measurements in the fields. We can use this data to infer some more environmental quantities. We particularly focus on two of them. Firstly, a basic yet widely used quantity in phytology that connects the local weather with the development stage of the crop is the cumulative Growing Degree Days (GDDs). For each day, the plant accumulates a so-called _thermal sum_ calculated by
\\[\\text{GDD}=\\frac{1}{24}\\sum_{t=0}^{23}\\left(\\frac{T_{\\text{max}}(t)+T_{\\text{ min}}(t)}{2}-T_{\\text{base}}\\right)\\,. \\tag{1}\\]
An upper and lower bound additionally applies to the hourly maximum and minimum temperatures \(T_{\text{max}}(t)\) and \(T_{\text{min}}(t)\):
\\[T(t)=\\begin{cases}T_{\\text{max}}&,\\,\\,T(t)>T_{\\text{max}}\\\\ T_{\\text{base}}&,\\,\\,T(t)<T_{\\text{base}}\\\\ T(t)&\\text{else}\\end{cases}\\,, \\tag{2}\\]
where the base and maximum temperatures \(T_{\text{base}}\) and \(T_{\text{max}}\) are plant-specific parameters. For sugar beet, it is empirically shown that \(T_{\text{base}}=1.1\,^{\circ}\mathrm{C}\) and \(T_{\text{max}}=30\,^{\circ}\mathrm{C}\) [18]. The cumulative quantity of GDDs beginning at the sowing date is, after all, a proxy of the plant's development. Secondly, we can calculate a disease-specific quantity. Simply put, the time between the infection of a plant with Cercospora and its ability to infect other plants is called a _generation_ or _incubation period_. The thermal sum of one incubation period for Cercospora in sugar beet is found to be \(4963\,^{\circ}\mathrm{C}\times\mathrm{h}\) with \(T_{\text{base}}=6.3\,^{\circ}\mathrm{C}\) and \(T_{\text{max}}=32\,^{\circ}\mathrm{C}\) [6]. For each hourly summand, there is an additional empirical correction coefficient based on the relative humidity: the hourly summand is multiplied by \(\frac{7}{8}\) if the hourly relative humidity is less than \(80\,\%\), and by \(\frac{9}{8}\) if it is at least \(80\,\%\) [1, 7]. Summation of the thermal sum and division by the incubation period yields a quotient that describes the potential number of incubation periods a hypothetically infected plant could have undergone. We call this the number of possible generations (NPG). Thus, given environmental information, we can calculate field- and recording-date-specific parameters that can serve as additional data to support the individual plant image data. A sketch of both computations is given below.
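The following Python sketch summarizes both computations: one day of GDD and one season of NPG from hourly weather records. The exact form of the hourly NPG summand, \(T(t)-T_{\text{base}}\), is our reading of [6, 7] and should be treated as an assumption.

```python
import numpy as np

INCUBATION_SUM = 4963.0  # °C·h per Cercospora incubation period [6]

def daily_gdd(t_max_hourly, t_min_hourly, t_base=1.1, t_cap=30.0):
    """Eq. (1) with the bounds of Eq. (2); inputs are 24 hourly values."""
    t_max = np.clip(np.asarray(t_max_hourly), t_base, t_cap)
    t_min = np.clip(np.asarray(t_min_hourly), t_base, t_cap)
    return np.mean((t_max + t_min) / 2.0 - t_base)

def npg(t_hourly, rh_hourly, t_base=6.3, t_cap=32.0):
    """Number of possible generations from hourly temperature and relative
    humidity since sowing (assumed hourly summand: T(t) - T_base)."""
    t = np.clip(np.asarray(t_hourly), t_base, t_cap)
    corr = np.where(np.asarray(rh_hourly) >= 80.0, 9.0 / 8.0, 7.0 / 8.0)
    return np.sum((t - t_base) * corr) / INCUBATION_SUM
```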
#### 2.1.2 Image Normalization
In the vast majority of machine learning tasks dealing with image processing, images are normalized to ensure interoperability and robustness against varying image recording conditions. Additionally, numerical issues in the forward and backward pass through deep learning architectures lead to the usage of data values around zero. A naive yet common approach is a simple standardization of the image data by subtracting the channel-specific mean \(\mu_{c}\) or a global mean \(\mu\) and dividing by the standard deviation (std) \(\sigma_{c}\) or \(\sigma\), respectively, for each image channel \(\mathbf{C}\) in the image \(\mathbf{I}\). The standardization can be done with precalculated (channel-wise) means and stds using the information of the whole given dataset, with image-specific means and stds, or even with fixed, suggested values. In this work, we assume that our reflectance image dataset could possibly be biased. Therefore, we standardize each image using only its own information. Further, we distinguish between channel-wise and total standardization by using means and stds for each channel separately and across channels, respectively. Thus, we get channel-wise standardized images \(\mathbf{I}_{s}^{\text{ch}}\) and total standardized images \(\mathbf{I}_{s}^{\text{tot}}\) by
\\[\\mathbf{I}_{s}^{\\text{th}} =\\left\\{\\frac{\\mathbf{C}-\\mu_{c}}{\\sigma_{c}}\\right\\}_{\\mathbf{C} \\in\\mathbf{I}}\\,, \\tag{3}\\] \\[\\mathbf{I}_{s}^{\\text{tot}} =\\frac{\\mathbf{I}-\\mu}{\\sigma}\\,. \\tag{4}\\]
Figure 2 shows some example images separated into their channel components and normalized with the two standardization methods.
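A minimal NumPy sketch of the two standardization variants of Eqs. (3) and (4), operating on a channel-first image array:

```python
import numpy as np

def standardize(img: np.ndarray, channelwise: bool = True) -> np.ndarray:
    """Standardize a (C, H, W) multispectral image using only its own
    statistics, cf. Eqs. (3) and (4)."""
    if channelwise:
        mu = img.mean(axis=(1, 2), keepdims=True)    # per-channel mean
        sigma = img.std(axis=(1, 2), keepdims=True)  # per-channel std
    else:
        mu, sigma = img.mean(), img.std()            # cross-channel statistics
    return (img - mu) / sigma
```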
A more sophisticated normalization method that, however, comes with more computational effort is to make use of the image histogram, i.e., the abundance of pixel values in
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
**ID** & **sowing date** & **location** & **sensor** & **ground sampling dist.** (mm px\({}^{-1}\)) & **\# varieties** & **\# rec. dates** & **\# images** & **used for** \\ \hline
Tr01 & 2019-04-09 & Holtensen I & RedEdge & 3.5 & 2 & 23 & 12 952 & train/val \\
Tr02 & 2019-04-09 & Holtensen I & RedEdge & 15 & 2 & 21 & 19 593 & train/val \\
Tr03 & 2020-04-06 & Weende & ALTUM & 3 & 2 & 25 & 34 143 & train/val \\
Tr04 & 2020-04-06 & Weende & ALTUM & 4 & 1 & 25 & 250 242 & train/val \\
Tr05 & 2021-04-01 & Sieboldshausen & ALTUM & 5.1 & 51 & 7 & 39 475 & train/val \\
Tr06 & 2021-04-23 & Holtensen I & ALTUM & 3 & 5 & 27 & 89 082 & train/val \\
Tr07 & 2021-04-23 & Holtensen I & ALTUM & 4 & 1 & 23 & 216 098 & train/val \\
Tr08 & 2021-04-23 & Holtensen I & ALTUM & 3 & 1 & 22 & 39 118 & train/val \\
Tr09 & 2021-04-23 & Holtensen I & ALTUM & 5 & 1 & 22 & 48 915 & train/val \\
Tr10 & 2021-04-30 & Dransfeld & ALTUM & 4 & 1 & 22 & 32 604 & train/val \\
Tr11 & 2021-04-30 & Dransfeld & ALTUM & 5 & 1 & 22 & 30 530 & train/val \\
Te01 & 2022-04-19 & Weende & ALTUM & 4 & 1 & 12 & 128 550 & test \\
Tr12 & 2022-04-20 & Reinshof & ALTUM & 5 & 4 & 11 & 210 & train/val \\
Tr13 & 2022-04-20 & Weende & ALTUM & 5.1 & 4 & 12 & 16 693 & train/val \\
Tr14 & 2022-03-31 & Reinshof & ALTUM & 9 & 1 & 11 & 9882 & train/val \\
Tr15 & 2022-03-31 & Reinshof & ALTUM & 18.5 & 1 & 8 & 6104 & train/val \\
Tr16 & 2022-03-31 & Holtensen II & ALTUM & 3.4 & 1 & 21 & 35 672 & train/val \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of the datasets. All images are given as 5-channel multispectral data of size \(144\,\mathrm{px}\times 144\,\mathrm{px}\).
Figure 1: Histograms of available labels for DS, NPG, and GDD separated by train/validation and test data.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
\multicolumn{2}{c}{**band name**} & \multicolumn{2}{c}{**sensor**} \\
long & short & ALTUM & RedEdge \\ \hline
blue & B & 459–491 nm & 465–485 nm \\
green & G & 548–572 nm & 550–570 nm \\
red & R & 661–675 nm & 663–673 nm \\
red edge & REDGE & 711–723 nm & 712–722 nm \\
near infrared & NIR & 814–870 nm & 820–860 nm \\
thermal & TH & 8–14 µm & – \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Spectral ranges of the two camera sensor systems used in this work. The thermal band is not used in this work.
the images. Histogram Equalization (HE) is a contrast enhancement method that is broadly used in computer vision and image processing tasks in many application fields like medical imaging, as well as in signal processing, e.g., in speech recognition [26]. Briefly spoken, with respect to images, the idea is to normalize the elementary pixel values by their abundance. Thus, the number of pixels in each bin, or range of contrast, is equalized. As a result, each image is forced to use the full range of possible contrast.
Generally, there are two basic methods--local and global. Unlike global methods, local methods additionally use the environment of the corresponding pixel for equalization. They are usually grouped under the term _Adaptive Histogram Equalization_ (AHE), where a prominent method is called _Contrast Limited AHE_ (CLAHE) [27]. The adaptive methods are used for image processing tasks where the pure contrast between neighboring objects is important, as in medical applications for tomography images [3]. In this work, we apply the (global) HE method and introduce a channel-wise and a cross-channel variant analogous to the standardization. For the histogram-based methods, a lower and upper limit has to be predefined to which the values are scaled. In this work, we chose the values to be in the range \([-1,1]\). A minimal sketch of this procedure is given below.
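The following is a minimal sketch of global HE on a channel-first array, scaling the equalized values to \([-1,1]\) via the empirical cumulative distribution; the number of histogram bins is an assumption.

```python
import numpy as np

def hist_equalize(img: np.ndarray, channelwise: bool = True,
                  n_bins: int = 1024) -> np.ndarray:
    """Global histogram equalization of a (C, H, W) image to [-1, 1]."""
    def _eq(x):
        hist, edges = np.histogram(x.ravel(), bins=n_bins)
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # CDF in [0, 1]
        eq = np.interp(x.ravel(), edges[:-1], cdf).reshape(x.shape)
        return 2.0 * eq - 1.0                              # rescale to [-1, 1]
    if channelwise:
        return np.stack([_eq(c) for c in img])
    return _eq(img)
```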
#### 2.1.3 Data Augmentation
A common approach to artificially increase the dataset size is data augmentation. For image data, the principle is to sample similar images around "real" data instances by various techniques like flips, rotations, color and brightness jitter, etc. Obviously, different use cases allow for different augmentation methods. For instance, in medical imaging tasks, one is mostly bound to the image orientation. In case of street scene images for autonomous driving applications, a vertical flip, i.e., turning the image upside down, does not make any sense. However, both cases eventually allow for changes in brightness and/or contrast. In our case of plant images, we fortunately have all degrees of freedom regarding flips and rotations. Thus, we flip each image randomly with a probability of \(25\,\%\) and rotate each image by a random angle. Brightness and contrast jitters are not necessary, since our normalization methods neutralize them. Additionally, we can exploit this principle in model inference mode by evaluating the images in different rotations and averaging the predictions. In order to be robust against different ground sampling distances and the accompanying resolution changes, we introduce a Gaussian blur augmentation. With a probability of \(10\,\%\), an image is blurred at a strength of \(3\) - \(8\,\mathrm{px}\). Another, optional augmentation is a random channel dropout. With a probability of \(25\,\%\), we drop the information of up to \(3\) channels. Although it is quite unlikely that single channels will be dropped in the application, it is interesting to train models that are robust against missing information in order to see how important each image channel is for the prediction of our target quantity. With the channel dropout, we lower the ability of the model to focus on single channels and rather encourage it to connect information among all channels. A sketch of the augmentation pipeline is given below.
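A minimal sketch of the augmentation pipeline using PyTorch and torchvision; the discrete odd kernel sizes only approximate the 3 - 8 px blur strength and are an assumption.

```python
import torch
import torchvision.transforms.functional as TF

def augment(img: torch.Tensor) -> torch.Tensor:
    """Random flips/rotation, Gaussian blur, and channel dropout
    for a (C, H, W) image tensor."""
    img = img.clone()
    if torch.rand(1) < 0.25:                            # horizontal flip, 25 %
        img = torch.flip(img, dims=[-1])
    if torch.rand(1) < 0.25:                            # vertical flip, 25 %
        img = torch.flip(img, dims=[-2])
    img = TF.rotate(img, float(torch.rand(1) * 360.0))  # random angle
    if torch.rand(1) < 0.10:                            # Gaussian blur, 10 %
        k = 2 * int(torch.randint(1, 4, (1,))) + 1      # kernel 3, 5, or 7
        img = TF.gaussian_blur(img, kernel_size=k)
    if torch.rand(1) < 0.25:                            # drop up to 3 channels
        drop = torch.randperm(img.shape[0])[: int(torch.randint(1, 4, (1,)))]
        img[drop] = 0.0
    return img
```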
Next, we shed light on one purpose the data has been recorded for and how we will use it for the use case described in this work.
#### 2.1.4 Use Case: Disease Severity Estimation
The goal behind our use case in this work is to determine the DS in CLS-infected sugar beet fields; CLS is one of the most damaging foliar diseases in sugar beet cultivation. CLS is caused by the fungus _Cercospora beticola_. Symptoms appear as numerous small, round, gray spots with a red or brown border on leaves. As the infestation increases, the leaves become necrotic and dry up. When a large part of the leaf area is lost, the plant often tries to recover by sprouting new leaves at the cost of its stored sugar. However, if the conditions are favorable for the fungus and the attack is very severe, the plants die. [32, 28]
Figure 2: Example images shown by their separate channel components and processed with total and channel-wise standardization, respectively.
The observation of plant diseases is usually done by visual scoring. Since visual field scoring of DS is an activity that requires a lot of time and well-trained personnel [25], it is the main bottleneck in the control of CLS. Therefore, it is desirable to have an automatic DS estimation model.
Considering the heterogeneous disease distribution within sugar beet fields, a detailed and geo-referenced assessment of DS might lead to precise protection measures within the canopy. Geo-referenced and plant-based determination of DS is therefore essential. A naive approach for the prediction of DS from single plant images would be to model a classification problem like in [30]. Despite being a valid approach at first sight, certain phenotypical knowledge enables us to model this problem more intelligently. In the following sections, we explain our basic paradigm to solve the DS estimation problem and the deep learning model we propose based on it.
First, we have to define a DS annotation scale that serves as a guideline for all human expert annotations and, finally, as the "unit" of the model input. In this work, the rating scale developed in [19] is used with an extension for non-infested and newly sprouted plants. Figure 3 shows the numerical scale with exemplary plant images. The values from 1 to 9 follow the definitions of the KWS scale, a severity diagram that ranges from 1 to 9. A rating of 1 indicates the complete absence of symptoms, while a rating of 3 indicates the presence of leaf spots on older leaves. A rating of 5 signifies the merging of leaf spots, resulting in the formation of necrotic areas. A rating of 7 is assigned when the disease advances from the oldest leaves to the inner leaves, leading to their death. Finally, a rating of 9 is given when the foliage experiences complete death [23].
In field experiments, we often face data or annotations that require a high effort to acquire. Nevertheless, several kinds of data can be acquired rather automatically or with low human effort. In this work, we call these "cheap" and "expensive" labels. The DS acquisition is rather expensive, while, for instance, weather data acquired with automatic sensors or public weather stations is typically relatively cheap. Additionally, the development and epidemiology of the pathogen and the disease CLS is highly influenced by specific environmental conditions [33]. In this work, we make use of the cheap data in order to increase efficiency on the expensive data. As shown above in Section 2.1.1, we have the weather-based parameters GDD and NPG. They are not plant-specific but, at least, specific for the recording date. Thus, we can annotate many plants with those labels in one stroke. Those labels are, surely, not as meaningful as manually annotated labels, but they can serve for pretraining models. This is particularly interesting for our application of transformers, since they usually need lots of data: we can pretrain the model with the cheap labels and finetune on the expensive labels. Thus, a possibly lower availability of the expensive labels can be compensated, and training speed is enhanced if the model backbone at the start of the fine-tuning stage already "knows" low-level filters and the basic concept of our input data. The two different stages of pretraining and finetuning are represented as different learning paths in our model sketch in Figure 4. Additional details of the model are discussed in the later sections of this work. First, we introduce the concept behind our model architecture and, secondly, we shed light on the different model parts in detail.
### 2.2 Deep Label Distribution Learning
If classification problems can be formulated on an ordinal scale, the transfer into a regression task might be a good choice. However, if the classification is very granular, the collection of data with precise labels can be challenging. Rather than learning distinct, unique labels, the paradigm of Label Distribution Learning (LDL) [13] was proposed. It stabilizes model training by modeling the ambiguity of labels. It is used for tasks like facial age estimation [15] or head pose estimation [14]. In combination with deep neural networks, the paradigm is referred to as DLDL [12]. In DLDL, the output of a deep neural network mimics the label distribution by a series of neurons that learn a discrete representation of the probability density function. This representation is commonly known as the probability mass function (pmf). Thus, the labels have the form of a probability distribution. The obvious difference to a pure regression is that the network output is not based on a single neuron only, whereas the difference to a pure classification is that, in contrast to one- or multi-hot labels, neighboring neurons are also triggered, which stabilizes regions where less data is available. Two additional advantages, especially for the use case in this work, are, firstly, that we can easily model the uncertainty of labels. The DS annotation is based on individual human experts' judgement. Often, different plants are annotated by different experts, which causes uncertain classifications. Secondly, the model output becomes more transparent, since one can observe how confident the model is in its prediction by comparing the shapes of the true and predicted label distributions. Thus, DLDL is an ideal way to model these annotations. A sketch of how such label distributions can be constructed is given below.
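As an example of such a label pmf, a discrete Gaussian can be placed around each annotated value; the width \(\sigma\) encoding the annotation uncertainty is a free choice here.

```python
import numpy as np

def label_pmf(y_true: float, support: np.ndarray, sigma: float = 0.5) -> np.ndarray:
    """Discrete Gaussian label distribution over the label support."""
    p = np.exp(-0.5 * ((support - y_true) / sigma) ** 2)
    return p / p.sum()  # normalize to a valid pmf

# DS scale from 1 to 9; an expert rating of 4 becomes a soft target
print(label_pmf(4.0, np.arange(1, 10)))
```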
#### 2.2.1 Full Kullback-Leibler Divergence Loss
The DLDL approach proposed in [12] utilizes an L1 loss for the expectation value and a Kullback-Leibler Divergence (KLD) [21] loss for the label distribution. However, the L1 and KLD losses originate from different statistical concepts and, therefore, have scales that are, per se, not comparable. In most cases, a weighting parameter has to be introduced, resulting in an artificial hyperparameter of the model. Our approach circumvents the problem by reformulating the L1 loss as a KLD loss. Additionally, we further accelerate the training by introducing a "smoothness" regularization on the label distribution. The regularization is also formulated as a KLD loss, not needing any hyperparameter either. Furthermore, the gained scale invariance not only makes the components comparable, but also enables cross-comparability between different labels. This is especially interesting for the use case of this work, since we aim at a joint regression of diverse phenological parameters, possibly having different domains. This novel loss function was already introduced in [17]. However, since the approach is very well suited for the use case in this work, we introduce the 3 loss components again in the following.
Label Distribution Loss
Let \\(\\mathbb{P}(y\\mid\\mathbf{x})\\) be the true label distribution for a given data point, i.e. an image, \\(\\mathbf{x}\\). Then, the label distribution loss \\(L_{ld}\\) is the discrete Kullback-Leibler divergence between the true and predicted label distribution,
\\[L_{ld}=\\text{KLD}(\\mathbb{P}||\\hat{\\mathbb{P}})=\\sum_{y}\\mathbb{P}(y\\mid\\mathbf{x} )\\log\\frac{\\mathbb{P}(y\\mid\\mathbf{x})}{\\hat{\\mathbb{P}}(y\\mid\\mathbf{x})}\\,, \\tag{5}\\]
where the hat denotes the prediction. This definition follows the label distribution loss given in [12].
Expectation Value Loss
Unlike in [12], we formulate the expectation value loss as a KLD of truth and prediction as if both of them were normal distributions \\(\\mathcal{N}(\\cdot\\mid\\mu,\\sigma^{2})\\) with expectation value \\(\\mu\\) and variance \\(\\sigma^{2}\\). For the model predictions, \\(\\hat{\\mu}\\) and \\(\\hat{\\sigma}^{2}\\) can be calculated via
\\[\\hat{\\mu}=\\sum_{y}y\\hat{\\mathbb{P}}(y\\mid\\mathbf{x})\\,,\\quad\\hat{\\sigma}^{2}=\\sum _{y}(y-\\hat{\\mu})^{2}\\hat{\\mathbb{P}}(y\\mid\\mathbf{x})\\,. \\tag{6}\\]
Thus, our expectation value loss is
\\[L_{exp} =\\text{KLD}(\\mathcal{N}(\\cdot\\mid\\mu,\\sigma^{2})||\\mathcal{N}( \\cdot\\mid\\hat{\\mu},\\hat{\\sigma}^{2}))\\] \\[=\\log\\frac{\\hat{\\sigma}}{\\sigma}-\\frac{1}{2}+\\frac{\\sigma^{2}+( \\hat{\\mu}-\\mu)^{2}}{2\\hat{\\sigma}^{2}}\\,. \\tag{7}\\]
Detailed calculation steps can be found in the Appendix of [17].
Smoothness Regularization Loss In order to accelerate the training process, especially in early stages, we force the predicted label distribution to be smooth by a KLD regularization term. The idea is to shift the predicted distribution by one position which we call \\(\\hat{\\mathbb{P}}^{s}\\) and calculate a symmetric discrete KLD, i.e. we average both shift directions. Thus,
\\[L_{smooth} =\\frac{1}{2}\\left[\\text{KLD}(\\hat{\\mathbb{P}}||\\hat{\\mathbb{P}}^{ s})+\\text{KLD}(\\hat{\\mathbb{P}}^{s}||\\hat{\\mathbb{P}})\\right]\\] \\[=\\frac{1}{2}\\sum_{y}(\\hat{\\mathbb{P}}(y\\mid\\mathbf{x})-\\hat{\\mathbb{P }}^{s}(y\\mid\\mathbf{x}))\\log\\frac{\\hat{\\mathbb{P}}(y\\mid\\mathbf{x})}{\\hat{\\mathbb{P}} ^{s}(y\\mid\\mathbf{x})}\\,. \\tag{8}\\]
Finally, a sum combines the loss components. Thus, our final loss is
\\[L=L_{ld}+L_{exp}+L_{smooth}\\,. \\tag{9}\\]
#### 2.2.2 Multi-Head Regression
If multiple sources of labels are available, it may be considerable to perform the regression with multiple labels jointly. Each regression problem is then realized by an own so called \"regression head\", i.e., a sub-model, that is trained to transform the feature representation from the backbone into the respective label space of interest. Especially for large backbone models, this has the advantage that only one backbone is needed for multiple purposes, which reduces the total model size. We further call this concept \"Multi-Head Regression\".
### Model Architecture
In this chapter, we shed light on the architecture of our proposed model. Figure 4 shows all the building blocks of our proposed model, further called \"SugarViT\". We now describe the 3 main building blocks of SugarViT and its design motivations.
#### 2.3.1 Vision Transformer Backbone
In recent years, transformer architectures are successfully utilized for diverse deep learning tasks. Especially in the field of natural language processing (NLP), transformer-based models show great success. [22]. In NLP, transformers learn structures in sequential data like text or sentences by processing its basic building blocks, commonly knows as \"tokens\". To use this principle also for classification tasks on image data, the Vision Transformer (ViT) model has been proposed. [11] The main principle is to divide an input image into flattened tiles that are processed by several multi-head attention layers [29]. An additional
Figure 3: Used disease severity scale for our prediction model with example images. The scale is based on the usual CLS rating scale. We added the 0 for non-infested sugar beets before canopy closure, and the 10 for newly sprouted plants as in [19].
learnable \"class token\" is added, which processed output is passed through a classification head. Figure 4 also shows the mentioned building blocks. By comprising many attention blocks and hidden layers, (vision) transformer architectures are complex and need large amounts of data. Thus, they are usually pretrained on large-scale datasets like ImageNet [10] for most of the image processing tasks. For the use case this work is about, we process multi-spectral 5-channel images rather than red-, green-, and blue-channel (RGB) images. Therefore, we cannot use ImageNet-pretrained architectures per se. However, the plant image dataset used in this work is large enough to train an architecture with a vision transformer backbone from scratch, as we will see in further sections. The goal of the learning process is that the ViT backbone is trained to be an expert in understanding the image as a whole and extract remarkable traits to \"encode\" the image information into a rich feature space.
#### 2.3.2 MLP Neck
In the original ViT model, an MLP is used as a classification head. Since we want to build a multi-head regression model (cf. Section 2.2.2), we use an MLP as an intermediate layer between the ViT backbone and the regression heads. If certain labels have something in common or share some information, i.e., latent label correlations, this \"neck\" sub-model between backbone and heads is trained to learn those latent correlations. We could exploit this principle in our use case by introducing a simple \"cheap\" feature, i.e., that is easy to measure and has a more or less obvious correlation to the DS. For instance, we could choose the interval between the image recording date and the date where canopy closure can be observed in the corresponding field experiment. Canopy closure means that neighboring plants touch each other, resulting in a closed field vegetation. In the following, we call this feature days after canopy closure (DAC). Obviously, this feature has negative values before canopy closure is reached. Alternatively, one could also think about including the days after sowing. The DAC are expected to guide the model to the correct DS by having some correlation with it, e.g., young plants (low DAC) are probably less severely infected, whereas older plants (high DAC) are probably rather severely infected. In addition, the infection probability raises when the plats are in contact. All in all, the MLP neck part is a part of the model, where expert knowledge and known correlations can be integrated. Please note here that in the experiments, we follow another approach to integrate associated knowledge in the model to predict the DS. Nevertheless, the above approach can be a valid, as well.
Figure 4: Sketch of our proposed Multi Deep Label Distribution Learning (Multi-DDDL) network with a Vision Transformer (ViT) backbone. The Label Distribution Learning (LDL) heads are trained with separate optimizers and loss functions. The ViT and Multi-Layer Perceptron (MLP) part are the joint basis and are trained in each backward pass of the LDL heads. As output of the ViT, the last hidden state of the learnable class token is used. Furthermore, our use case is shown by having multispectral plant image data and two training stages. The pretraining is done on the environmental, field-related quantities Growing Degree Day (GDD) and number of possible generation (NPG). The target label disease severity (DS) is trained in the subsequent finetuning stage. In principle, the model can be generalized to more labels in each training stage by adding more LDL heads.
#### 2.3.3 LDL Heads
For each label, the output of the MLP neck is processed by a separate, so called, LDL head. It consists of individual fully-connected layers (FCLs) after the MLP neck for each label. The idea behind the individual networks is to enable the model to learn label-individual transformations from the cross-label output of the MLP neck. Thus, the LDL heads are trained to be experts in their label domain and be able to transform the feature space to their regarding label space. When passed through these layers, the features are mixed with a component we call _Feature Mixing_.
#### 2.3.4 Feature Mixing
In the Feature Mixing component, the output layers of all individual LDL Heads are combined linearly. This enables the model to scale and mix information of other labels into specific labels. The mixing coefficients can be learned during training and are initialized to the unit matrix, i.e., at training start, only the respective label is used. A final FCL for each label maps the mixed features into the corresponding label distribution space. The size of the FCL is determined by the number of discretization or quantization steps that can be different for each label. A softmax activation ensures the outputs of the FCLs sum to 1 each. Thus, the FCL approximates the label distribution in the form of a pmf. On this pmf, we can then evaluate our KLD loss functions and, finally, train with the ground truth label distributions.
## 3 Results
In this section, we will introduce our performed experiments. At first, we test if the histogram equalization preprocessing step for the images is really beneficial for the model performance.
In the next experiment, we make use of \"cheap\" data, i.e., weather data as mentioned in Section 2.1.4. In our case, we have weather stations in the field measuring basic weather parameters. This data is available for a whole field, thus, many single plant images. With the single images and those cheap labels, we can perform a pretraining of the backbone. However, this is another approach to combine cheap and expensive data than the one mentioned in section 2.3.2, the major advantage is that in the final model, only the label of interest is used, which results in a slightly lighter model and decreases inference time. After pretraining, we perform a model training with DS labels, resulting in our SugarViT model. Example predictions of a trained SugarViT model are shown in Figure 5.
In a last experiment, we finally investigate if the pretraining also improves the performance of SugarViT by comparing the finetuned SugarViT with a one only trained by the DS labels. We further compare a non-pretrained SugarViT model to a one that is only trained on with RGB bands information in order to see, whether the beyond-optical spectral information is important.
Before performing the actual experiments, the variances for the DS label distributions must be set, since there is no individual information for each data point, or image. In this work, we model the DS label distributions by normal distributions \\(\\mathcal{N}(\\cdot\\mid\\mu,\\sigma^{2})\\) with the experts' labels as expectation values \\(\\mu\\) and a variance \\(\\sigma^{2}\\) that is based on an assumed standard error. We set \\(\\sigma_{DS}=0.6\\) as a \"human estimation\" error. Please note, that this is no empirically found error but rather an educated guess.
In all experiments, we randomly split the training and validation data (cf. Table 1) with initialization seeds assuring reproducibility. For our dataset, the plants have mostly low DS scores in images of the early plant growth, leading to label imbalance, as seen in the histograms in Figure 1. Looking at Table 1, the sizes of the datasets are quite different. In order to minimize the bias and to prevent the model from focusing on few labels and datasets, we use a weighted sampling of the data. The weight of each image is the inverse of the total abundance of its DS label times the size of the respective dataset. Thus, in each training batch, the distribution of datasets and labels is uniformly distributed in average.
Finally, we define a validation metric. Since our model outputs distributions, metrics like root mean squared error or mean absolute error (MAE) are not appropriate since they do not give information about the overall distribution. Alternatively, we use the mean overlap between the predicted and true label distribution. Since the distributions are pmfs,
Figure 5: Output of SugarViT. The disease severity (DS) labels are learned as label distributions (green curves). SugarViT outputs again probability distributions (blue curve). The prediction in the end is the expectation value of the output distributions (dashed lines).
the calculation of the mean distribution overlap (MDO) for a batch of \\(N\\) instances is
\\[\\text{MDO}=\\frac{1}{N}\\sum_{i=1}^{N}\\sum_{y}\\left\\{\\min\\left(\\mathbb{P}(y\\mid \\boldsymbol{x}),\\hat{\\mathbb{P}}(y\\mid\\boldsymbol{x})\\right)\\right\\}_{y}\\,. \\tag{10}\\]
The MDO takes values between \\(0\\) and \\(1\\) where \\(1\\) indicates perfect overlap. For the validation, we use the same weighted sampling as in the training stage, to validate on pseudo-uniform distributed labels. Thus, we respect the prediction quality for each label equally and independent of the total label abundance in the dataset.
### Standardization vs. Histogram Equalization
Before we perform the training of our SugarViT model, we evaluate how the histogram equalization improves the model performance in favor of a \"simpler\" standardization preprocessing method. To have a potentially more universal model in the event, we do not assume that we know the dataset as a whole. Thus, we use normalization only based on the information of a single image, as already mentioned in Section 2.1.2. In the following experiment, we compare the standardization and the histogram equalization method, respectively, with channel-wise and cross-channel calculation. For each method, we perform \\(10\\) runs with different initialization seeds with the model configuration given in Table 3 and trained for \\(80\\) epochs. To speed up the training, we only use the datasets Tr01, Tr02, and Tr03 (cf. Table 1). Those are randomly split into a training and validation subset by ratio \\(80\\,\\%\\!\\!:\\!\\!20\\,\\%\\).
The results are presented in Figure 6. As expected, the total normalization methods perform better than channel-wise normalization. This makes sense, because for DS prediction, an important feature is the difference in radiance of spectral bands, thus, the difference in values across channels. When normalizing the image totally by calculation of cross-channel histograms or mean and std, respectively, this information is preserved, while in the channel-wise normalization it is lost. Nevertheless, channel-wise normalization is more robust against calibration errors of the sensor. Another remarkable observation is, that the standardization method is not only computationally more efficient than histogram equalization, but also performs better. Thus, we find the total (cross-channel) standardization method to be the best performing normalization method, and we will use this method for our SugarViT model.
### SugarViT Pretraining
We perform a pretraining of the SugarViT model on the environmental data labels. This should prepare the model for the plant images by learning low-level features of the plant images. The configuration of our SugarViT model for both pretraining and finetuning stage is listed in Table 4.
The results for training and validation loss, as well as the validation MDO are shown in Figure 7. As seen in the training loss component plot, our loss function is indeed invariant under the label scale, as described in Section 2.2.1. Without any scaling parameter, both GDD and NPG labels have a comparable loss, although the scales are very different. We see a convergence of the validation MDO at ca. \\(90\\,\\%\\) after roughly \\(40\\) training epochs for both GDD and NPG labels. As the best model, we take the one with the best validation MDO and use it for the further steps. The results for the training variants with channel dropout and RGB-only information are very similar. Plots can be found in the supplementary material A. If the pretraining was beneficial for the subsequent finetuning regarding convergence speed and prediction quality, is shown in the following section.
### Comparison Experiments
In this section, we want to discuss the training and validation metrics of the pretrained SugarViT compared to a non-pretrained model that is trained \"from scratch\". Also, we show results for the two training variants mentioned in Section 3 by using channel dropout or only RGB information during training. For channel dropout, the validation is done without dropout. Some performance plots are shown in Figure 8. During the training process (cf. Figure 7(a)),
\\begin{table}
\\begin{tabular}{l r} \\hline \\hline
**ViT backbone** & \\\\ \\hline input size & \\(5\\times 144\\,\\text{px}\\times 144\\,\\text{px}\\) \\\\ patch size & \\(12\\,\\text{px}\\) \\\\ hidden size & \\(512\\) \\\\ \\# hidden layers & \\(4\\) \\\\ \\# attention heads & \\(4\\) \\\\ intermediate size & \\(512\\) \\\\ activation hidden layers & GELU \\\\ dropout hidden layers & \\(0.02\\) \\\\ dropout attention & \\(0.02\\) \\\\ \\hline
**MLP neck** & \\\\ \\hline layer size & \\(512\\) \\\\ layers & \\(3\\) \\\\ activation & ReLU \\\\ dropout & \\(0.2\\) \\\\ \\hline
**LDL heads** & \\\\ \\hline individual MLP layers & \\(2\\) \\\\ individual MLP layer size & \\(256\\) \\\\ activation & ReLU \\\\ dropout & \\(0.8\\) \\\\ DS quantization steps & \\(111\\) \\\\ DS regression limits & \\([-0.5,10.5]\\) \\\\ DS label distribution std & \\(0.6\\) \\\\ \\hline
**Optimizer** & \\\\ \\hline algorithm & AdamW \\\\ initial learning rate & \\(10^{-3}\\) \\\\ weight decay & \\(0.1\\) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Model configuration for comparison between standardization and histogram equalization.
all training loss components converge to similar values, whereby the pretrained models converge, as expected, substantially faster. The plot only shows the full model. The loss curves for the other training variants are similar and can be found in the supplementary material A. The validation loss (cf. Figure 8b) shows similar behavior. In addition, a slight overfitting can be observed for all trainings as the loss is increasing after a minimum reach at about epochs \\(10\\) - \\(15\\) for the pretrained and about epochs \\(50\\) - \\(80\\) for the non-pretrained models. The overfitting can not be observed in terms of validation MDO (cf. Figure 8c). For both validation loss and validation MDO the pretrained models reach, besides the faster convergence, slightly better values compared to the non-pretrained models. Overall, the convergence for the channel dropout training is tendentially slower than for the RGB-only and even slower compared to the full model.
So far, we just evaluated the model on the validation dataset which is, being a random subset of the training data, quite similar to the training data. In a next step, we want to evaluate our SugarViT models on test data, that is completely unknown by the model in order to see the generalization capabilities of our approach.
\\begin{table}
\\begin{tabular}{l r} \\hline \\hline
**ViT backbone** & \\\\ \\hline input size & \\(5\\times 144\\,\\mathrm{px}\\times 144\\,\\mathrm{px}\\) \\\\ patch size & \\(12\\,\\mathrm{px}\\) \\\\ hidden size & \\(1024\\) \\\\ \\# hidden layers & \\(8\\) \\\\ \\# attention heads & \\(4\\) \\\\ intermediate size & \\(1024\\) \\\\ activation hidden layers & GELU \\\\ dropout hidden layers & \\(0.02\\) \\\\ dropout attention & \\(0.02\\) \\\\
**MLP neck** & \\\\ \\hline layer size & \\(512\\) \\\\ layers & \\(3\\) \\\\ activation & ReLU \\\\ dropout & \\(0.2\\) \\\\ \\hline
**LDL heads** & \\\\ \\hline individual MLP layers & \\(2\\) \\\\ individual MLP layer size & \\(256\\) \\\\ activation & ReLU \\\\ dropout & \\(0.8\\) \\\\ NPG quantization steps & \\(100\\) \\\\ NPG regression limits & \\([-0.5,\\,11.5]\\) \\\\ NPG label distribution std & \\(0.3\\) \\\\ GDD quantization steps & \\(100\\) \\\\ GDD regression limits & \\([-5\\,^{\\circ}\\mathrm{C}\\times\\mathrm{d},3500\\,^{\\circ}\\mathrm{C}\\times\\mathrm{d}]\\) \\\\ GDD label distribution std & \\(100\\,^{\\circ}\\mathrm{C}\\times\\mathrm{d}\\) \\\\ DS quantization steps & \\(89\\) \\\\ DS regression limits & \\([-0.5,\\,10.5]\\) \\\\ DS label distribution std & \\(0.6\\) \\\\ \\hline
**Optimizer** & \\\\ \\hline algorithm & AdamW \\\\ initial learning rate & \\(5\\times 10^{-4}\\) \\\\ weight decay & \\(0.1\\) \\\\ \\hline
**Learning rate scheduler** & \\\\ \\hline strategy & cyclic learning rate (linear) \\\\ interval & step \\\\ maximum lr & \\(1\\times 10^{-3}\\) \\\\ step size (up and down) & \\(500\\) \\\\ mode & exponential range \\\\ \\(\\gamma\\) & \\(0.9999\\) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: SugarViT model configuration for pretraining and finetuning. For pretraining, the labels number of possible generation (NPG) and Growing Degree Day (GDD) are used. In finetuning stage, the final label of interest, disease severity (DS), is trained.
Figure 6: Results of the standardization vs. histogram equalization experiment. For both methods, total and channel-wise variants are shown. For each experiment, \\(10\\) runs with different seeds are performed. The solid lines describe the means, whereas the bands are the standard errors of the mean. The three plots show training loss, validation loss, and validation MDOs.
### Evaluation on Test Dataset
Conclusively, we want to evaluate our model on unseen data. Therefore, we use our dataset Te01 (cf. 1). Although we stated that the MAE is not an appropriate metric for the DLDL approach in SugarViT, it can give some insights on the prediction quality in the field usage, since the expectation value of the predicted label distribution is used as the overall model prediction. Recap, that the MAE of \\(N\\) DS predictions is given by
\\[\\text{MAE}(\\text{DS}) =\\frac{1}{N}\\sum_{i=1}^{N}|\\text{DS}_{\\text{true}}(\\mathbf{x}_{i})- \\text{DS}_{\\text{pred}}(\\mathbf{x}_{i})| \\tag{11}\\] \\[=\\frac{1}{N}\\sum_{i=1}^{N}\\left|E\\left[\\mathbb{P}(y\\mid\\mathbf{x}_{i} )\\right]-E\\left[\\hat{\\mathbb{P}}(y\\mid\\mathbf{x}_{i})\\right]\\right|\\,. \\tag{12}\\]
Figure 8: Comparison of the pretrained and non-pretrained SugarViT model.
Figure 7: Results of the SugarViT pretraining.
We evaluated three of our model adaptions, i.e., with using all information, with using channel dropout during training, and with only using RGB channels, each of them with and without using pretraining. Certainly, recognitions can still be incorrect. However, we can apply some techniques to reduce the errors. On the one hand, we can augment the input and use the average of all augmented inputs as the final prediction for the augmented image. One could use any augmentation that we also used during training. However, we just use \"simple\" augmentations here like mirroring and rotation in order not to reduce the performance, i.e., inference time too much. Thus, we can augment one single image to \\(8\\) instances in total. Since the model outputs are pmfs, we can just add them and renormalize them by dividing by \\(8\\). As stated in Section 3.3, we could observe overfitting in the validation loss, while the MDO is still increasing. We therefore evaluate our training variants by their best models concerning both metrics. It can be observed that the models with the best validation loss, respectively, perform slightly better in test data evaluation. Table 5 shows the MAE and MDO values distributed in each true DS label for those models. The values for the best validation MDO models can be found in the supplementary material A. In all evaluations, we use the augmentations mentioned above.
The most remarkable observation is that all models show performance shortcomings for plants with DS of about \\(5\\) - \\(9\\). A reason might be the limited amount of training data in comparison to DSs that occur far more frequent than low and high DSs. We already try to cope with this imbalance by a weighted sampling as mentioned above. However, the augmented data surely is no actual \"new\" data in the actual sense. Unfortunately, those intermediate DS scores are also the most ambiguous ones due to possible bias by margin of interpretation and evaluation. The data lack could be seen when comparing the results for the two 5-channel-trained models with the RGB-only model. For DS with many training data instances, the 5-channel models typically outperform the RGB model, whereas for low-data regimes, the RGB model outperforms the 5-channel models. This might be due to the lower complexity for \\(3\\) rather than \\(5\\) channels from that the features have to be extracted. Another remarkable result is that, apparently, the model channel-dropout does not show significant improvement in terms of generalization to unseen data. It outperforms the fully trained model in only a few DSs. In difference to the evaluation on validation data, the non-pretrained models outperform the pretrained ones in most cases for the test dataset. This is quite remarkable and opens the discussion if the pretraining possibly leads to a slight overfitting to the data and gives another hint about the importance of generalization, especially in agricultural-related use cases. Nevertheless, the idea of pretraining is still important, since training times can be reduced by adaption of one trained backbone to multiple labels of interest. Additionally, in use cases with low data coverage, finetuning existing, pre-trained models might be the only way to get performant prediction models.
For the usage of SugarViT in the field, there are still some points to mention regarding further prediction improvements that we want to discuss in the next section.
### Application in the Field
Frameworks like [16] help to extract the plant positions and extraction of the single images for large-scale UAV image data. Thereafter, our trained SugarViT model can be applied for new field experiments, enabling fast large-scale DS annotations.
Figure 9 shows an exemplary application of our model to a test dataset image using the described augmented evaluation. On the other hand, the model does not consider temporal and spatial dependency between the plant images so far. We could further reduce the error rate by correcting single \"obvious\" outliers that do not fit into the temporal and spatial vicinity of the other plants. Additionally, SugarViT has the advantage to actually output a label distribution. Since we know, with which (fixed) training label standard deviation \\(\\sigma_{\\text{train}}\\) the model is trained, we can compare the standard deviation of the output (assuming a normal distribution) \\(\\sigma_{\\text{pred}}\\) with it in order to see, how \"confident\" the model is in its prediction. Thus, for each output, we can calculate a \"confidence\"
\\[\\tilde{c}=\\frac{\\sigma_{\\text{pred}}}{\\sigma_{\\text{train}}}\\,, \\tag{13}\\]
that is \\(1\\) for an exact conformity of standard deviations. For values \\(\\tilde{c}>1\\), the model is more and more unconfident, whereas for values \\(0<\\tilde{c}<1\\), the model is over-confident in its decision. However, please note that this is no proper confidence in a statistical sense, since the expectation value still could be completely wrong. Since we cannot know the expectation value in an unlabeled dataset, this purely standard deviation based value is a rather imprecise yet helpful measure of the model's confidence.
In our model training and evaluation framework, we include some convenience functions to load orthoimages and
Figure 9: Exemplary application of SugarViT for disease severity prediction on unseen UAV data. Each prediction is completely independent of its surrounding predictions. The model shows a highly consistent prediction behavior.
plant positions as _geopackage_ or similar files and evaluate models on. Afterward, the prediction can be exported again to _geopackage_ format or, for instance, as _Pandas DataFrame_ objects. Thus, we added interfaces to widely used _GIS_ software that are frequently used for georeferenced image data.
### Attention Maps
The fact, that the backbone of our SugarViT model is based on the attention mechanism [29], we can analyze and, ideally, interpret which image feature are more or less important to the model's decisions. One helpful visualization method for that are so-called _attention maps_[2]. Roughly spoken, attention maps can visualize, \"where the model looks at\". The ViT backbone in SugarViT consists of \\(8\\) attention layers, with \\(4\\) attention heads each, that can in principle be trained to focus on completely different features. In order to accumulate the attention maps of each single layer to one overall map, the technique of _attention rollout_[2] is used. Figure 10 shows the result for one randomly chosen image from the validation dataset per DS class. A main observation is that SugarViT indeed focuses on the plant itself and not, e.g., on the amount of visible soil around it. This is particularly visible for the DS \\(0\\) example. Additionally, one observes that the model focuses on multiple image regions that seem to be complementary for the decision-making process, which is exactly the power of the attention mechanism compared to Convolutional Neural Networks (CNNs). CNNs learn filters that are applied on the whole image. Thus, relations between pixel values can only be covered locally. The attention mechanism allows connecting those local features with other distant images regions and, thus, is considered to have more power of \"understanding\" the image as a whole.
## 4 Discussion
Our work shows, that simple normalizing methods like standardization can already outperform more sophisticated and expensive normalization methods like histogram equalization (cf. Section 3.1). For our use-case where spectral differences carry much information, a total standardization leads to better results than a channel-wise standardization. However, it has to be stated, that the data used in this work has been calibrated by a reflectance panel, so the spectral information is directly comparable. Thus, if the data can not be calibrated by any reason, also the channel-wise standardization may be a good choice, since it also leads to acceptable prediction qualities.
The pretraining on environmental metadata turned out to be beneficial for the final prediction quality as well as the training speed (cf. Section 3.2). Additionally, our pretraining on general, environmental annotation and the subsequent finetuning on the annotation of interest can be an approach for further generalization even on smaller datasets. The pretrained ViT backbone can be seen as a fixed, plant-image-aware feature extractor that learned plant specific traits. On top of that, a smaller model can be trained for different annotation purposes, which accelerates and improves the training procedure substantially. This concept is also widely used in use cases of large language models where the model sizes often exceed computational resources for local training. Pretraining also enables the usage of large-scale image data. Even if only few data is labeled with expensive human-expert annotations, unlabeled data can still be used in a pretraining stage unfolding the full potential of the collected data. If data labeling originated from multiple human experts that may have ambiguous estimations or assessment guidelines, our usage of LDL is able to incorporate detailed uncertainty information in training process and, finally, in the model.
The use of attention mechanism instead of convolutional networks turns out to make sense in this use case because of
\\begin{table}
\\end{table}
Table 5: Evaluation metrics for test dataset separated by true DS value. For each training variant, the model with the best validation loss is chosen. Below the single DSs, the metrics for the full dataset are listed, both with and without correcting for the label abundance.
the long-distance relations of leaf spots. The interpretability of resulting attention maps on single instances is often questionable. However, they can reveal, if the model focuses on the right regions and features itself rather than on spurious correlations.
## 5 Conclusions
Generalization to data from unseen fields with unseen weather conditions and climate is certainly one of the most challenging questions in data-driven machine learning approaches in agriculture. Our findings in Section 3.4 emphasize that. The data given in the scope of this work comprises already \\(4\\) growing seasons but only from some locations in Central Germany. If the model performs comparably well in other regions is at least questionable. Nevertheless, a model that is at least locally accurate already has a high value for increasing the efficiency of DS assessment. Extrapolation to other environmental conditions is difficult, but interpolation on the same field has the poten
Figure 10: Attention maps for an example image per disease severity class. The first column shows the original input image in its RGB representation. The second column is the joint attention map after performing attention rollout [2]. The following columns are the attention maps of each of the \\(8\\) attention layers in our SugarViT model.
tial to save valuable expert working time in the field, where the model can complement a few spot-wise expert annotations on the whole field. As seen in Section 3.4, the high data imbalance and label ambiguity still remains challenging, even with our contributions of weighed sampling and LDL, respectively. The disease assessment in individual objects as plants is hard to standardize and schematize by exemplary images, as we, for instance, have in the case of CLS. Therefore, it is important to have a model that incorporates label uncertainties and is transparent in its prediction uncertainties, like in SugarViT.
With this large-scale DS assessment available, further challenges regarding disease assessment can be tackled. The retrieval of DS complemented by further environmental sensors enables, for instance, detailed investigations on disease spread and its modeling. Consequently, our approach could also find application in terms of disease control by, for instance, punctual application of pesticides, lowering costs and environmental impact. Thus, future work regarding this topic will be to use SugarViT for disease spread modeling. Perspectives, the concept behind SugarViT could also be applied in a wide variety of other use cases in the field of UAV-supported phenotyping. [20, 9, 4]
## 6 Abbreviations
* C. C.
on phytopathometry -- visual assessment, remote sensing, and artificial intelligence in the twenty-first century. _Tropical Plant Pathology_, 47(1):1-4, February 2022.
* [9] Walter Chivasa, Onisimo Mutanga, and Chandrashekhar Biradar. Uav-based multispectral phenotyping for disease resistance to accelerate crop improvement under changing climate conditions. _Remote Sensing_, 12(15):2445, Jul 2020.
* [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020.
* [12] Bin Bin Gao, Chao Xing, Chen Wei Xie, Jianxin Wu, and Xin Geng. Deep Label Distribution Learning with Label Ambiguity. _IEEE Transactions on Image Processing_, 26(6):2825-2838, 2017.
* [13] Xin Geng. Label distribution learning, 2016.
* [14] Xin Geng, Xin Qian, Zengwei Huo, and Yu Zhang. Head pose estimation based on multivariate label distribution. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(4):1974-1991, 2022.
* [15] Xin Geng, Chao Yin, and Zhi-Hua Zhou. Facial age estimation by learning from label distributions. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 35(10):2401-2412, oct 2013.
* [16] Maurice Gunder, Facundo R Ispizua Yamati, Jana Kierdorf, Ribana Roscher, Anne-Katrin Mahlein, and Christian Bauckhage. Agricultural plant cataloging and establishment of a data framework from UAV-based crop images by computer vision. _GigaScience_, 11:giac054, 06 2022.
* [17] Maurice Gunder, Nico Piatkowski, and Christian Bauckhage. Full kullback-leibler-divergence loss for hyperparameter-free label distribution learning, 2022.
* [18] Carlyle D. Holen and Alan G. Dexter. A growing degree day equation for early sugarbeet leaf stages. _Sugarbeet Research and Extension Reports_, 27:152-157, 1997.
* [19] Facundo Ramon Ispizua Yamati, Abel Barreto, Maurice Gunder, Christian Bauckhage, and Anne-Katrin Mahlein. Sensing the occurrence and dynamics of cercospora leaf spot disease using UAV-supported image data and deep learning. _Zuckerindustrie_, pages 79-86, February 2022.
* [20] Facundo Ramon Ispizua Yamati, Maurice Gunder, Abel Andree Barreto Alcantara, Jonas Bomer, Daniel Laufer, Christian Bauckhage, and Anne-Katrin Mahlein. Automatic scoring of rhizectonia crown and root not affected sugar beet fields from orthorectified uav images using machine learning. _Plant Disease_, 0(ja):null, 0. PMID: 37755420.
* [21] James M. Joyce. _Kullback-Leibler Divergence_, pages 720-722. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
* [22] Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sangeetha. Ammus : A survey of transformer-based pretrained models in natural language processing, 2021.
* [23] Kleinwanzlebener Saatzucht AG, Rabbethge and Giesecke, Einbeck, Germany. _Cercospora Tafel_, 1970.
* [24] Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. A survey of transformers, 2021.
* [25] Forrest Nutter, Jr, Paul Teng, and F.M. Shokes. Disease assessment terms and concepts. _Plant Disease_, 75:1187-1188, 01 1991.
* [26] Omprakash Patel, Yogendra P. S. Maravi, and Sanjeev Sharma. A comparative study of histogram equalization based image enhancement techniques for brightness preservation and contrast enhancement. _CoRR_, abs/1311.4033, 2013.
* [27] Stephen M. Pizer, E. Philip Amburn, John D. Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bartter Haar Romeny, John B. Zimmerman, and Karel Zuiderveld. Adaptive histogram equalization and its variations. _Computer Vision, Graphics, and Image Processing_, 39(3):355-368, 1987.
* [28] Lorena I. Rangel, Rebecca E. Spanner, Malaika K. Ebert, Sarah J. Pethybridge, Eva H. Stukenbrock, Ronnie de Jonge, Gary A. Secor, and Melvin D. Bolton. Cercospora beticola: The intoxicating lifestyle of the leaf spot pathogen of sugar beet. _Molecular Plant Pathology_, 21(8):1020-1041, 2020.
* [29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, L ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc., 2017.
* [30] Guan Wang, Yu Sun, and Jianxin Wang. Automatic image-based plant disease severity estimation using deep learning. _Computational Intelligence and Neuroscience_, 2017:1-8, 07 2017.
* [31] John Weiland and Georg Koch. Sugarbeet leaf spot disease (cercospora beticola sacc.). _Molecular plant pathology_, 5(3):157-166, 2004.
* [33] P. F. J. Wolf and J. A. Verreet. An integrated pest management system in germany for the control of fungal leaf diseases in sugar beet: The IPM sugar beet model. _Plant Disease_, 86(4):336-344, April 2002.
* [34] Rui Xu, Changying Li, and Andrew H. Paterson. Multispectral imaging and unmanned aerial systems for cotton plant phenotyping. _PLOS ONE_, 14(2):1-20, 02 2019.
\\begin{table}
\\end{table}
Table 6: Evaluation metrics for test dataset separated by true DS value. For each training variant, the model with the best validation mean distribution overlap (MDO) is chosen. Below the single DSs, the metrics for the full dataset are listed, both with and without correcting for the label abundance. | Remote sensing and artificial intelligence are pivotal technologies of precision agriculture nowadays. The efficient retrieval of large-scale field imagery combined with machine learning techniques shows success in various tasks like phenotyping, weeding, cropping, and disease control. This work will introduce a machine learning framework for automatized large-scale plant-specific trait annotation for the use case disease severity scoring for Cereospora Leaf Spot (CLS) in sugar beet. With concepts of Deep Label Distribution Learning (DLDL), special loss functions, and a tailored model architecture, we develop an efficient Vision Transformer based model for disease severity scoring called SugarViT. One novelty in this work is the combination of remote sensing data with environmental parameters of the experimental sites for disease severity prediction. Although the model is evaluated on this special use case, it is held as generic as possible to also be applicable to various image-based classification and regression tasks. With our framework, it is even possible to learn models on multi-objective problems as we show by a pretraining on environmental metadata.
Vision Transformer Multi-objective Deep Label Distribution Learning Remote Sensing Disease Assessment Precision Agriculture | Give a concise overview of the text below. | 220 |
# Physical Activities in Public Squares: The Impact of Companionship on Chinese Residents' Health
Xiuhai Xiong 1, Lingbo Liu 1, Zhenghong Peng 2 and Hao Wu 2

1 Department of Urban Planning, School of Urban Design, Wuhan University, Wuhan 430072, China; [email protected] (X.X.); [email protected] (L.L.)

2 Department of Graphics and Digital Technology, School of Urban Design, Wuhan University, Wuhan 430072, China; [email protected] (Z.P.); [email protected] (H.W.)
## 1 Introduction
Evidence has shown that public open space (POS) provides residents with places for physical exercise and contributes to their physical and mental health, and social support is believed to have a positive effect on increasing physical activity in such spaces [1]. Social support is defined as providing support or assistance to accomplish a specific behavior [2], and includes instrumental support, information support, companionship, and emotional support [3, 4]; among these, companionship is considered one of the most important social support factors in physical activity [5, 6]. Companionship can be further divided into the companionship of minors, the companionship of adults, and the companionship of pets [7, 8, 9].
With growing health awareness, the physical activity of Chinese residents has increased in recent years [10]. Many residents choose to exercise in city squares near their residential areas, taking part in square dancing, running, and walking. Urban squares are often located near high-density commercial and residential areas, and some are dominated by hard pavement; their green coverage is lower than that of parks, and they also differ from the parks commonly studied abroad. Although companionship, physical activity, and healthy behavior have been studied extensively abroad [11, 12], the influence of companionship on the physical activity and health of Chinese residents in urban squares remains unclear and requires further exploration.
Berkman et al. [13] propose a model of the relationship between social connection and health, suggesting that, among health-related behaviors, companionship and physical activity are possible pathways to better health. However, theoretical and empirical evidence on the mechanism behind this hypothesis in China is scarce; this article therefore aims to study the impact of companionship on the health of Chinese residents in the context of physical activities in city squares.
In previous studies, the effects of companionship on physical activity and health have been examined from three aspects: its influence on physical activity, on mental health, and on physical health.
### 1.1 The Impact of Companionship on Physical Activity
Previous studies have shown that companionship has an impact on physical activity: facilitating factors at work during companionship may provide exercisers with social and emotional support, exercise methods, and exercise fun, and may regulate exercisers' self-efficacy [14; 15]. For specific groups, compared with exercising alone, companionship has been shown to markedly improve the exercise levels of women [14; 15; 16], minors [17; 18], and the elderly [19]. However, different sources of companionship have different effects on exercise; for example, the companionship of friends has a more positive effect on college students' exercise than the companionship of family [20]. Companionship is therefore considered a strategy to increase physical activity and reduce sedentary behavior [16].
### 1.2 The Impact of Companionship on Mental Health
The effect of companionship on mental health can also be divided into direct and indirect influences. Companionship can help improve people's life satisfaction and memory and prevent depression [21; 22; 23]; this direct positive impact has been verified for the companionship of family, friends, and pets, especially among the elderly and children. Previous studies have shown that elderly people living alone not only experience more anxiety and loneliness but also suffer declines in cognitive ability [24; 25], indicating that losing companionship leads to psychological distress. Conversely, the companionship of family and friends can alleviate the psychological distress of the elderly [26; 27]. As far as children are concerned, only children are more likely to avoid difficulties and lack confidence, and they show a high rate of depression [28]. Left-behind children in rural China also tend to be lonely and depressed and report lower life satisfaction [9; 29], while the companionship of family members can ease the work pressure of Chinese migrant workers in remote places [30]. In addition, the companionship of pets can not only reduce depression, anxiety, and loneliness but also increase the empathy and socialization of pet owners [31; 32; 33].
The indirect impact of companionship in physical activities on mental health operates mainly through mediation: companionship improves the level of physical activity, which in turn enhances mental health. It has been shown that companionship, including that of friends, family members, and pets, has a positive impact on physical activities, and that active participation in physical activities further reduces loneliness, improves quality of life, enhances mental health, and lowers the risk of depression [34; 35; 36; 37; 38; 39; 40]. People with a high degree of social isolation and little outdoor physical activity have a very high probability of depression, while groups with friends and neighbors who maintain high levels of outdoor physical activity have an extremely low probability of depression [41].
### 1.3 The Impact of Companionship on Physical Health
The impact of companionship on physical health can likewise be divided into direct and indirect effects. In terms of direct impact, people accompanied by family and friends show better physical health indicators, such as blood pressure, height, and weight, than those living alone [30]. Conversely, elderly people who are socially isolated are at risk of physical decline and even death [42; 43]. The indirect influence on physical health again operates mainly through physical activity as a mediating variable. Although studies on this topic are relatively few, related work has shown that pet companionship can stimulate more frequent leisure-time physical activity, thereby increasing overall physical activity and significantly improving conditions such as obesity, hypertension, and hyperlipidemia [32; 33; 44; 45].
### 1.4 Study Aim and Hypotheses
It must be admitted that companionship is a complicated process with different patterns, such as family [46], friends [9], pets [47], robots [48], and social media [49], and it may involve a series of behaviors, such as hugging [50], talking [50], dancing [50], exercising [50], and eating [51]. Therefore, to explain clearly the role of companionship in health, a comprehensive investigation of both the patterns and the behaviors of companionship is necessary. Time geography holds that behavior is constrained by time and space, and that the analysis of spatiotemporal behavior helps reveal the matching relationship between individual and social systems, and thus the general social laws behind behavior [52; 53; 54]. As specific activity spaces, squares for physical activities contain different types of companionship and behavior within a given spatiotemporal context, which provides a natural laboratory for studying the impact of companionship on the physical and mental health of Chinese residents.
The purpose of the current research is to evaluate the relationship between companionship and physical and mental health, to clarify whether this relationship is positive or negative, and to explain the underlying mechanism. Based on questionnaire surveys and interviews with people active in the Hongshan, Shouyi, and Xibeihu squares in Wuhan, China, the study divided the survey population into a single (alone) pattern and a companionship pattern, the latter further divided into the companionship of minors, adults, and pets. As shown in Figure 1, the hypotheses are as follows:
1. The pattern of companionship directly and positively affects physical and mental health.
2. Companionship indirectly affects physical and mental health through the intensity of physical exercise.
3. There is a relationship of mutual influence and mutual promotion between mental and physical health.
Figure 1: Conceptual model with hypothesized paths.
## 2 Materials and Methods
### 2.1 Procedure
The questionnaire-based study was conducted anonymously. As shown in Figure 2, the survey locations are three squares in the center of Wuhan. Wuhan has many squares, but most are small and serve only residents of the local community; we excluded these small squares with limited service areas and instead chose large public squares whose service areas cover the whole city, finally selecting three. All three squares are within the Second Ring Road, surrounded by commercial and residential areas, densely populated, and easily accessible. The survey ran from September 2020 to November 2020, i.e., in autumn. According to the Wuhan Statistical Yearbook, from 2010 to 2019 the average temperature in Wuhan was 17.11 °C in spring with average precipitation of 118.87 mm, 27.85 °C in summer with 191.41 mm, 17.72 °C in autumn with 72.31 mm, and 4.97 °C in winter with 38.94 mm. Wuhan is rainy in spring, hot in summer, and cold in winter; only autumn is sunny and warm, making it especially suitable for residents' outdoor physical activities. We obtained weather data for the survey period (1 September 2020 to 1 December 2020) from https://rp5.ru/, which provides daily historical weather data for Wuhan. Since we surveyed only on sunny days with no recorded precipitation, the average temperature on survey days was 18 °C and the average wind speed was 0.48 m/s. The daily survey period lasted from 6 p.m. to 9 p.m., when workers are off work, students are home from school, and dinner is over, so most families can gather to participate in physical activities; as noted in previous studies, Chinese people are accustomed to going out for exercise after meals and consider this a health need [50]. The questionnaire survey was conducted face-to-face by members of the research team; 200 questionnaires were collected, of which 196 were valid. Team members explained the whole process clearly, and subjects filled out the anonymous questionnaire voluntarily with informed consent. Subjects could suspend or withdraw their consent to participate at any time without giving a reason. All procedures comply with the principles embodied in the Declaration of Helsinki and were approved by the Ethics Committee of Wuhan University.
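For transparency, the survey-day weather averages quoted above can be reproduced from a daily export of the rp5.ru records. The sketch below assumes a CSV with hypothetical column names (`date`, `temperature_c`, `wind_speed_ms`, `precipitation_mm`); these are illustrative placeholders, not the site's actual export schema.

```python
import pandas as pd

# Hypothetical daily Wuhan weather export for 1 Sep-1 Dec 2020 (rp5.ru);
# all column names below are assumed for illustration.
weather = pd.read_csv("wuhan_weather_2020-09_to_2020-12.csv", parse_dates=["date"])

# Restrict to sunny survey days, i.e., days with no recorded precipitation.
sunny = weather[weather["precipitation_mm"].fillna(0) == 0]

# Averages over survey days (cf. 18 °C and 0.48 m/s reported in the text).
print(f"mean temperature: {sunny['temperature_c'].mean():.1f} °C")
print(f"mean wind speed:  {sunny['wind_speed_ms'].mean():.2f} m/s")
```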
### 2.2 Data Collection
#### 2.2.1 Socio-Demographic Data
Socio-demographic data include gender, age, occupation, education status, fitness frequency, sedentary time, travel mode, and travel time from home to the square. As shown in Table 1, there were 79 (40.31%) males and 117 (59.69%) females. By age group, 9 (4.59%) people were under 18, 84 (42.86%) were aged 19–44, 38 (19.39%) were aged 45–59, and 65 (33.16%) were aged 60 or above. For the other characteristics, see the descriptive statistics in Table 1.
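All percentages in Tables 1–3 are simple category shares of the 196 valid questionnaires. A minimal sketch of this computation follows; the `age_group` series and its coding are hypothetical illustrations of how the responses might be stored.

```python
import pandas as pd

# Hypothetical coding of the age groups of the 196 valid questionnaires.
age_group = pd.Series(["<=18"] * 9 + ["19-44"] * 84 + ["45-59"] * 38 + [">=60"] * 65)

# Category shares as reported in Table 1, e.g., 19-44 -> 42.86%.
print((age_group.value_counts(normalize=True) * 100).round(2))
```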
#### 2.2.2 Companionship Patterns Data
In previous studies, companionship patterns have mainly been divided into the companionship of friends [55], the companionship of family members [55], and the companionship of pets [56]. In this study, the companionship patterns were divided into minor companionship, adult companionship, and pet companionship.
\\begin{table}
\begin{tabular}{l l r r} \hline \hline
**Characteristic** & **Group** & **Number** & **Percentage** \\ \hline
Gender & Male & 79 & 40.31\% \\
 & Female & 117 & 59.69\% \\ \hline
Age & \(\leq\)18 & 9 & 4.59\% \\
 & 19–44 & 84 & 42.86\% \\
 & 45–59 & 38 & 19.39\% \\
 & \(\geq\)60 & 65 & 33.16\% \\ \hline
Occupation & Student & 16 & 8.16\% \\
 & Employee & 68 & 34.69\% \\
 & Freelancer & 37 & 18.88\% \\
 & Retired & 72 & 36.73\% \\
 & Others & 3 & 1.53\% \\ \hline
Education & High school and below & 95 & 48.47\% \\
 & Undergraduate & 86 & 43.88\% \\
 & Master’s degree and above & 15 & 7.65\% \\ \hline
Fitness frequency & Never go & 137 & 69.90\% \\
 & Go often & 9 & 4.59\% \\
 & Go occasionally & 50 & 25.51\% \\ \hline
Sedentary time (h/day) & \(\leq\)3 & 86 & 43.88\% \\
 & 3–6 & 64 & 32.65\% \\
 & 6–9 & 38 & 19.39\% \\
 & 9–12 & 7 & 3.57\% \\
 & \(\geq\)12 & 1 & 0.51\% \\ \hline \hline \end{tabular}
\\end{table}
Table 1: Socio-economic characteristics of visitors to the square.
Figure 2: The location of the survey site in Wuhan. (**A**) Remote sensing image of Wuhan City; (**B**) remote sensing image map of the area within the third ring road in Wuhan; (**C**) Hongshan Square; (**D**) Shouyi Square; (**E**) Xibeihu Square.
This division is more in line with the actual situation in the squares. It is worth noting that subjects were asked to report their usual daily companionship pattern. As shown in Table 2, 96 (48.98%) people were accompanied by adults, 4 (2.04%) by pets, and 56 (28.57%) by minors, while 40 (20.41%) people were alone.
#### 2.2.3 Physical Activity in the Public Squares
As shown in Table 3, the physical activity data for the public squares include travel mode, travel time, visit frequency, activity type, and physical activity level. The physical activity level in the square is represented by the stay time and is divided into four levels: the longer the stay, the higher the level. Two (1.02%) people were at the first level (very high), with activity time between 3 and 4 h; 16 (8.16%) were at the second level (high), between 2 and 3 h; 104 (53.06%) were at the third level (moderate), between 1 and 2 h; and 74 (37.76%) were at the fourth level (low), between 0.5 and 1 h. Regarding activity types (multiple responses allowed), 7 (3.57%) people took part in running, 6 (3.06%) in skateboarding, 155 (79.08%) in walking, 26 (13.27%) in dancing, and 22 (11.22%) in other activities. For related content, see the descriptive statistics in Table 3.
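A minimal sketch of the stay-time coding described above follows. The function name is ours, and treating each boundary value as belonging to the higher level is our assumption, since the text does not specify how boundary cases were assigned.

```python
def physical_activity_level(stay_hours: float) -> int:
    """Map stay time in the square (hours) to the four ordinal levels of Table 3.

    1 = very high (3-4 h), 2 = high (2-3 h), 3 = moderate (1-2 h), 4 = low (0.5-1 h).
    """
    if stay_hours >= 3:
        return 1
    if stay_hours >= 2:
        return 2
    if stay_hours >= 1:
        return 3
    return 4

# A respondent staying 1.5 h is coded as level 3 (moderate).
print(physical_activity_level(1.5))
```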
#### 2.2.4 Data on Physical and Mental Health
Self-reported physical health and self-reported mental health are shown in Table 4. Mental and physical health was obtained through self-reports of visitors to the square. The physical health self-assessment is divided into 5 levels, "excellent," "good," "fair," "poor," and "bad," corresponding to scores of 1 to 5.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Characteristic** & **Group** & **Number** & **Percentage** \\\\ \\hline \\multirow{4}{*}{Companionship} & Adult & 96 & 48.98\\% \\\\ & Pet & 4 & 2.04\\% \\\\ & Minor & 56 & 28.57\\% \\\\ & Alone & 40 & 20.41\\% \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Descriptive statistics of the companionship patterns.
\\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Characteristic** & **Group** & **Number** & **Percentage** \\ \hline
\multirow{4}{*}{Travel modes} & Walk & 136 & 69.39\% \\ & Public transportation & 46 & 23.47\% \\ & Cycling & 13 & 6.63\% \\ & Others & 1 & 0.51\% \\
\multirow{5}{*}{Travel time} & 1–5 min & 33 & 16.84\% \\ & 6–10 min & 68 & 34.69\% \\ & 11–20 min & 65 & 33.16\% \\ & 21–30 min & 20 & 10.20\% \\ & \(\geq\)30 min & 10 & 5.10\% \\
\multirow{4}{*}{Visit frequency} & Times per day & 81 & 41.33\% \\ & Several times weekly & 71 & 36.22\% \\ & Several times monthly & 27 & 13.78\% \\ & Others & 17 & 8.67\% \\
\multirow{5}{*}{Activity type} & Running & 7 & 3.57\% \\ & Skateboarding & 6 & 3.06\% \\ & Walking & 155 & 79.08\% \\ & Dancing & 26 & 13.27\% \\ & Others & 22 & 11.22\% \\
\multirow{4}{*}{Physical activity level} & Very high & 2 & 1.02\% \\ & High & 16 & 8.16\% \\ & Moderate & 104 & 53.06\% \\ & Low & 74 & 37.76\% \\ \hline \hline \end{tabular}
\\end{table}
Table 3: Physical activity in the public squares.
The first level represents excellent health, with a total of 24 (12.24%) people. The second level represents good health, with a total of 83 (42.35%) people. The third level represents fair physical health, with 80 (40.82%) people at this level. The fourth level indicates poor physical health, with 7 (3.57%) people at this level. The fifth level represents bad physical health, with 2 (1.02%) people at this level. The mental health self-assessment is also divided into 5 levels, "excellent," "good," "fair," "poor," and "bad," corresponding to scores of 1 to 5. The first level indicates excellent mental health, with 43 (21.94%) people at this level. The second level indicates good mental health, with 99 (50.51%) people at this level. The third level indicates fair mental health, with 50 (25.51%) people at this level. The fourth level indicates poor mental health, with 4 (2.04%) people at this level. The fifth level is bad mental health, and no respondents fell at this level.
## 3 Statistical Analysis
In the statistical analysis, we applied ordered logistic regression three times. First, we used the demographic variables, the companionship patterns, and the types of physical activity as covariates, with the physical activity level as the dependent variable. Second, we used the demographic variables, the companionship patterns, the types of physical activity, the physical activity level, and the mental health self-assessment score as covariates, with self-assessed physical health as the dependent variable. Third, we used the demographic variables, the companionship patterns, the types of physical activity, the physical activity level, and the physical health score as covariates, with the self-evaluated mental health score as the dependent variable. All analyses were performed with SPSS Statistics 25.0 (IBM SPSS Statistics, New York, NY, USA).
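As an illustration of this specification, the sketch below shows an equivalent ordinal (proportional-odds) logistic regression in Python with statsmodels. It is not the original analysis script (the study used SPSS), and column names such as `activity_level` and `companionship` are hypothetical placeholders for the survey variables described above.

```python
# Minimal sketch of the ordered logistic regression used in this study.
# The actual analysis was run in SPSS Statistics 25.0; this is an analogue
# with hypothetical column names, not the authors' code.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("square_survey.csv")  # hypothetical file with 196 responses

# Ordered outcome: physical activity level (Low < Moderate < High < Very high)
df["activity_level"] = pd.Categorical(
    df["activity_level"],
    categories=["Low", "Moderate", "High", "Very high"],
    ordered=True,
)

# One-hot encode categorical covariates, dropping one reference group each
covariates = ["gender", "age_group", "occupation", "education",
              "fitness_frequency", "sedentary_time", "travel_time",
              "companionship", "visit_frequency", "activity_type"]
X = pd.get_dummies(df[covariates], drop_first=True).astype(float)

model = OrderedModel(df["activity_level"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())                           # log-odds estimates
print(np.exp(result.params[X.columns]).round(2))  # odds ratios, i.e., Exp(B)
```

The second and third models described above would reuse this template with self-assessed physical or mental health as the ordered outcome and the corresponding covariate sets.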
## 4 Result
### Influencing Factors of Physical Activity
To find the influencing factors of the physical activity level, this study selected 33 variables from social-demographic indicators, companionship pattern indicators, and square physical activity indicators as covariates for an ordered logistic regression analysis (Table 5). The results show that the model had a good fit and significance (Nagelkerke's pseudo R\({}^{2}\) = 0.413, \(\chi^{2}\) = 85.092, df = 30, \(p\) < 0.001), and the parallel line test was not significant (\(\chi^{2}\) = 72.060, df = 60, \(p\) = 0.136). First, in the companionship pattern, the companionship of pets (OR = 797.12), companionship of minors (OR = 7.43), and companionship of adults (OR = 2.44) showed a positive effect on increasing the level of physical activity. Second, sedentary times of \(\leq\)3 h (OR = 0.01), 3–6 h (OR = 0.01), 6–9 h (OR = 0.01), and 9–12 h (OR = 0.01) had a negative impact on increasing physical activity. Third, the frequency of daily visits (OR = 2.94) had a positive effect on the increase in physical activity levels. Fourth, the activity type dancing (OR = 2.78) had a positive effect on the improvement of the physical activity level.
\\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Characteristic** & **Group** & **Number** & **Percentage** \\ \hline
\multirow{5}{*}{Physical health} & Excellent & 24 & 12.24\% \\ & Good & 83 & 42.35\% \\ & Fair & 80 & 40.82\% \\ & Poor & 7 & 3.57\% \\ & Bad & 2 & 1.02\% \\
\multirow{5}{*}{Mental health} & Excellent & 43 & 21.94\% \\ & Good & 99 & 50.51\% \\ & Fair & 50 & 25.51\% \\ & Poor & 4 & 2.04\% \\ & Bad & 0 & 0.00\% \\ \hline \hline \end{tabular}
\\end{table}
Table 4: Descriptive characteristics of respondents’ mental health and physical health.
Fifth, among the square types, Xibeihu (Northwest Lake) Square (OR = 2.27) had a positive effect on the improvement of the physical activity level.
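For readers checking these fit statistics, Nagelkerke's pseudo R\({}^{2}\) follows from the log-likelihoods of the fitted and intercept-only models via the standard Cox–Snell normalization; the sketch below implements the textbook definition (the numeric inputs are placeholders, not values from this study).

```python
import numpy as np

def nagelkerke_r2(ll_full: float, ll_null: float, n: int) -> float:
    """Nagelkerke's pseudo R^2 from the full-model log-likelihood (ll_full),
    the intercept-only log-likelihood (ll_null), and sample size n."""
    cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_full) / n)
    max_attainable = 1.0 - np.exp(2.0 * ll_null / n)
    return cox_snell / max_attainable

# Placeholder log-likelihoods for the n = 196 respondents; in the sketch
# above, ll_full would be result.llf and ll_null would come from an
# intercept-only refit.
print(round(nagelkerke_r2(ll_full=-160.0, ll_null=-202.5, n=196), 3))
```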
### Influencing Factors of Mental Health
In order to find the influencing factors of mental health, this study selected 36 variables from socio-demographic indicators, companionship pattern indicators, physical health indicators, and square physical activity indicators as covariates for the ordered logistic regression analysis (Table 6). The results show that the model had a good fit and significance (Nagelkerke's pseudo R\({}^{2}\) = 0.413, \(\chi^{2}\) = 85.092, df = 30, \(p\) < 0.001), and the parallel line test was not significant (\(\chi^{2}\) = 72.060, df = 60, \(p\) = 0.136). First, the companionship of pets (OR = 69.20), companionship of minors (OR = 3.49), and companionship of adults (OR = 3.01) showed positive effects on the improvement of mental health. Second, excellent (OR = 41,982.16), good (OR = 2268.79), fair (OR = 280.90), and poor (OR = 206.23) physical health had a positive effect on the improvement of mental health.
\\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Characteristic** & **Variable** & **Estimate** & **Exp(B)** & \multicolumn{2}{c}{**95\%CI**} & **\(p\)-Value** \\ \hline
\multirow{2}{*}{Gender} & Male (ref) & & & & & \\ & Female & 0.39 & 1.47 & 0.72 & 3.00 & ns \\
\multirow{4}{*}{Age} & \(\geq\)60 (ref) & & & & & \\ & \(\leq\)18 & \(-\)1.82 & 0.16 & 0.01 & 3.33 & ns \\ & 19–44 & 0.37 & 1.44 & 0.30 & 7.02 & ns \\ & 45–59 & 0.46 & 1.59 & 0.39 & 6.50 & ns \\
\multirow{4}{*}{Occupation} & Retired (ref) & & & & & \\ & Fixed occupation & \(-\)0.21 & 0.81 & 0.17 & 3.87 & ns \\ & Student & 1.06 & 2.87 & 0.29 & 27.97 & ns \\ & Freelancer & \(-\)0.13 & 0.88 & 0.19 & 4.02 & ns \\
\multirow{3}{*}{Education} & High school and below (ref) & & & & & \\ & Undergraduate & \(-\)0.39 & 0.68 & 0.29 & 1.58 & ns \\ & Master’s degree and above & 0.33 & 1.39 & 0.28 & 6.83 & ns \\
\multirow{3}{*}{Fitness frequency} & Never go (ref) & & & & & \\ & Go occasionally & 0.19 & 1.21 & 0.56 & 2.62 & ns \\ & Go often & \(-\)0.76 & 0.47 & 0.09 & 2.53 & ns \\
\multirow{5}{*}{Sedentary time} & \(\geq\)12 (ref) & & & & & \\ & \(\leq\)3 h & \(-\)4.57 & 0.01 & 0.00 & 1.44 & * \\ & 3–6 h & \(-\)4.32 & 0.01 & 0.00 & 1.90 & * \\ & 6–9 h & \(-\)4.99 & 0.01 & 0.00 & 1.00 & ** \\ & 9–12 h & \(-\)5.20 & 0.01 & 0.00 & 0.94 & ** \\
\multirow{5}{*}{Travel time} & \(\geq\)30 min (ref) & & & & & \\ & 1–5 min & 0.52 & 1.69 & 0.27 & 10.40 & ns \\ & 6–10 min & 1.27 & 3.56 & 0.69 & 18.41 & ns \\ & 11–20 min & 1.11 & 3.02 & 0.57 & 15.86 & ns \\ & 21–30 min & 0.63 & 1.87 & 0.30 & 11.81 & ns \\
\multirow{4}{*}{Companionship} & Alone (ref) & & & & & \\ & Pet & 6.68 & 797.12 & 47.99 & 13,240.03 & *** \\ & Minor & 2.01 & 7.43 & 2.75 & 20.09 & *** \\ & Adult & 0.89 & 2.44 & 0.98 & 6.10 & * \\
\multirow{3}{*}{Visit frequency} & Several times monthly (ref) & & & & & \\ & Times per day & 1.08 & 2.94 & 1.08 & 7.99 & ** \\ & Several times weekly & 0.08 & 1.08 & 0.44 & 2.65 & ns \\
\multirow{4}{*}{Activity type} & Skateboarding (ref) & & & & & \\ & Walking & \(-\)0.40 & 0.67 & 0.27 & 1.68 & ns \\ & Running & \(-\)0.75 & 0.47 & 0.08 & 2.97 & ns \\ & Dancing & 1.02 & 2.78 & 0.87 & 8.89 & * \\
\multirow{3}{*}{Square name} & Shouyi Square (ref) & & & & & \\ & Hongshan Square & 0.32 & 1.38 & 0.65 & 2.96 & ns \\ & Xibeihu Square & 0.82 & 2.27 & 0.92 & 5.63 & * \\ \hline \hline \end{tabular} Note: Significance is expressed as (\({}^{*}p<0.1\)), (\({}^{**}p<0.05\)), (\({}^{***}p<0.001\)), and ns (no significance).
\\end{table}
Table 5: Ordinal logistic regression analysis on the influencing factors of physical activity level.
Third, among the age groups, being 19–44 years old (OR = 0.21) showed a negative effect on the improvement of mental health. Fourth, visiting the square daily (OR = 3.00) showed a positive effect on the improvement of mental health. Fifth, very high (OR = 0.02) and moderate (OR = 0.42) levels of physical activity showed a negative effect on the improvement of mental health.
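The Exp(B) values and 95% confidence intervals reported in Tables 5–7 follow mechanically from each coefficient estimate and its standard error; a minimal sketch of that conversion is below (the numbers are made-up placeholders, not study data).

```python
import math

def odds_ratio_ci(estimate: float, std_err: float, z: float = 1.96):
    """Convert a log-odds estimate and standard error into an odds ratio
    (Exp(B)) with a Wald 95% confidence interval."""
    return (math.exp(estimate),
            math.exp(estimate - z * std_err),
            math.exp(estimate + z * std_err))

# Placeholder: a coefficient of 1.10 with standard error 0.45
or_, lo, hi = odds_ratio_ci(1.10, 0.45)
print(f"Exp(B) = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```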
### Influencing Factors of Physical Health
To find the influencing factors of physical health, this study selected 40 variables from social-demographic indicators, companionship pattern indicators, mental health indicators, and square physical activity indicators as covariates for the ordered logistic regression analysis (Table 7). The results show that the model had a good fit and significance (Nagelkerke's pseudo R\({}^{2}\) = 0.536, \(\chi^{2}\) = 129.112, df = 30, \(p\) < 0.001), and the parallel line test was not significant (\(\chi^{2}\) = 29.828, df = 90, \(p\) = 1). First, the companionship of minors (OR = 0.27) showed a negative effect on the improvement of physical health. Second, high (OR = 9.23) and moderate (OR = 3.73) levels of physical activity showed a positive effect on the improvement of physical health. Third, in mental health, fair (OR = 6.83), good (OR = 52.98), and excellent (OR = 716.23) ratings showed a positive effect on the improvement of physical health. Fourth, relative to \(\geq\)12 h, sedentary times of \(\leq\)3 h (OR = 6843.13), 3–6 h (OR = 3540.42), 6–9 h (OR = 5146.13), and 9–12 h (OR = 2494.89) showed a positive effect on
\\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Characteristic** & **Variable** & **Estimate** & **Exp(B)** & \multicolumn{2}{c}{**95\%CI**} & **\(p\)-Value** \\ \hline
\multirow{2}{*}{Gender} & Male (ref) & & & & & \\ & Female & \(-\)0.04 & 0.96 & 0.48 & 1.93 & ns \\
\multirow{4}{*}{Age} & \(\geq\)60 (ref) & & & & & \\ & \(\leq\)18 & \(-\)1.67 & 0.19 & 0.01 & 3.59 & ns \\ & 19–44 & \(-\)1.54 & 0.21 & 0.05 & 1.00 & ** \\ & 45–59 & \(-\)0.96 & 0.38 & 0.10 & 1.50 & ns \\
\multirow{4}{*}{Occupation} & Retired (ref) & & & & & \\ & Fixed occupation & 0.67 & 1.95 & 0.44 & 8.59 & ns \\ & Student & 1.31 & 3.71 & 0.35 & 39.06 & ns \\ & Freelancer & 0.14 & 1.15 & 0.27 & 4.91 & ns \\
\multirow{3}{*}{Education} & High school and below (ref) & & & & & \\ & Undergraduate & 0.03 & 1.03 & 0.46 & 2.32 & ns \\ & Master’s degree and above & 0.69 & 1.99 & 0.42 & 9.39 & ns \\
\multirow{3}{*}{Fitness frequency} & Never go (ref) & & & & & \\ & Go occasionally & 1.22 & 3.39 & 0.52 & 2.13 & ns \\ & Go often & 1.22 & 3.39 & 0.52 & 22.13 & ns \\
\multirow{4}{*}{Companionship} & Alone (ref) & & & & & \\ & Pet & 4.24 & 69.20 & 1.71 & 2801.75 & ** \\ & Minor & 1.25 & 3.49 & 1.27 & 9.57 & ** \\ & Adult & 1.10 & 3.01 & 1.24 & 7.28 & ** \\
\multirow{3}{*}{Visit frequency} & Several times monthly (ref) & & & & & \\ & Times per day & 1.10 & 3.00 & 1.12 & 8.03 & ** \\ & Several times weekly & 0.23 & 1.26 & 0.54 & 2.97 & ns \\
\multirow{4}{*}{Activity type} & Skateboarding (ref) & & & & & \\ & Walking & \(-\)0.90 & 0.41 & 0.16 & 1.04 & * \\ & Running & 0.95 & 2.58 & 0.45 & 14.61 & ns \\ & Dancing & 0.12 & 1.13 & 0.35 & 3.65 & ns \\
\multirow{5}{*}{Physical health} & Extremely poor (ref) & & & & & \\ & Excellent & 10.65 & 41,982.16 & 1215.61 & 1,451,343.16 & ** \\ & Good & 7.73 & 2268.79 & 85.97 & 59,874.14 & ** \\ & Fair & 5.64 & 280.90 & 11.78 & 6707.62 & ** \\ & Poor & 5.33 & 206.23 & 5.48 & 7769.80 & ** \\
\multirow{3}{*}{Physical activity level} & Low (ref) & & & & & \\ & Very high & \(-\)3.74 & 0.02 & 0.00 & 0.82 & ** \\ & High & \(-\)0.69 & 0.50 & 0.12 & 2.13 & ns \\ \hline \hline \end{tabular} Note: Significance is expressed as (\({}^{*}p<0.1\)), (\({}^{**}p<0.05\)), (\({}^{***}p<0.001\)), and ns (no significance).
\end{table}
Table 6: Ordinal logistic regression analysis on the influencing factors of mental health.
physical health. Fifth, among the activity types, dancing (OR = 0.29) showed a negative effect on the improvement of physical health.
## 5 Discussion
### The Direct and Indirect Effects of Companionship on Mental Health
On the one hand, this study shows that increasing daily companionship has a direct and positive impact on improving mental health. Previous studies have shown that the companionship of friends and pets directly affects the mental health of the elderly [32; 57], which is similar to our research results. On the other hand, companionship improves physical health through physical activity, which indirectly contributes to the improvement of mental health. Previous studies have already shown evidence that the companionship of pet dogs, the companionship of friends, and the companionship of family members improve physical health through physical activities so that mental health is improved [58; 59; 60].
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline
**Characteristic** & **Variable** & **Estimate** & **Exp(B)** & \multicolumn{2}{c}{**95\%CI**} & **\(p\)-Value** \\ \hline
\multirow{2}{*}{Gender} & Male (ref) & & & & & \\ & Female & 0.11 & 1.12 & 0.56 & 2.24 & ns \\
\multirow{4}{*}{Age} & \(\geq\)60 (ref) & & & & & \\ & \(\leq\)18 & 1.80 & 6.06 & 0.31 & 117.10 & ns \\ & 19–44 & 0.61 & 1.84 & 0.39 & 8.59 & ns \\ & 45–59 & 0.06 & 1.06 & 0.27 & 4.17 & ns \\
\multirow{4}{*}{Occupation} & Retired (ref) & & & & & \\ & Fixed occupation & 0.46 & 1.59 & 0.36 & 7.07 & ns \\ & Student & 1.03 & 2.81 & 0.30 & 26.44 & ns \\ & Freelancer & 0.91 & 2.49 & 0.58 & 10.83 & ns \\
\multirow{3}{*}{Education} & High school and below (ref) & & & & & \\ & Undergraduate & & & & & ns \\ & Master’s degree and above & 0.09 & 1.09 & 0.22 & 5.30 & ns \\
\multirow{3}{*}{Fitness frequency} & Never go (ref) & & & & & \\ & Go occasionally & \(-\)0.29 & 0.75 & 0.35 & 1.60 & ns \\ & Go often & 0.77 & 2.16 & 0.43 & 10.97 & ns \\
\multirow{4}{*}{Companionship} & Alone (ref) & & & & & \\ & Pet & \(-\)2.09 & 0.12 & 0.01 & 2.31 & ns \\ & Minor & \(-\)1.32 & 0.27 & 0.10 & 0.72 & ** \\ & Adult & \(-\)0.68 & 0.51 & 0.21 & 1.23 & ns \\
\multirow{3}{*}{Visit frequency} & Several times monthly (ref) & & & & & \\ & Times per day & \(-\)0.62 & 0.54 & 0.20 & 1.43 & ns \\ & Several times weekly & \(-\)0.09 & 0.92 & 0.39 & 2.18 & ns \\
\multirow{4}{*}{Activity type} & Cycling (ref) & & & & & \\ & Walking & \(-\)0.32 & 0.73 & 0.29 & 1.86 & ns \\ & Running & \(-\)0.67 & 0.51 & 0.09 & 2.86 & ns \\ & Dancing & \(-\)1.24 & 0.29 & 0.09 & 0.91 & ** \\
\multirow{4}{*}{Physical activity level} & Low (ref) & & & & & \\ & Very high & \(-\)0.10 & 0.90 & 0.02 & 38.86 & ns \\ & High & 2.22 & 9.23 & 1.98 & 43.12 & ** \\ & Moderate & 1.32 & 3.73 & 1.74 & 7.97 & ** \\
\multirow{4}{*}{Mental health} & Poor (ref) & & & & & \\ & Excellent & 6.57 & 716.23 & 61.44 & 8341.51 & *** \\ & Good & 3.97 & 52.98 & 5.34 & 525.32 & ** \\ & Fair & 1.92 & 6.83 & 0.71 & 65.50 & * \\
\multirow{5}{*}{Sedentary time} & \(\geq\)12 (ref) & & & & & \\ & \(\leq\)3 h & 8.83 & 6843.13 & 64.52 & 725,778.39 & *** \\ & 3–6 h & 8.17 & 3540.42 & 33.75 & 371,758.60 & ** \\ & 6–9 h & 8.55 & 5146.13 & 48.86 & 542,530.73 & *** \\ & 9–12 h & 7.82 & 2494.89 & 20.99 & 296,262.15 & ** \\ \hline \hline \end{tabular} Note: Significance is expressed as (\({}^{*}p<0.1\)), (\({}^{**}p<0.05\)), (\({}^{***}p<0.001\)), and ns (no significance).
\\end{table}
Table 7: Ordinal logistic regression analysis on the influencing factors of physical health.
### The Indirect Impact of Companionship on Physical Health
The indirect impact of companionship on physical health is reflected in two aspects. First, this study shows that increasing daily companionship indirectly affects physical health by increasing the level of physical activity. In addition, this study shows that mental health acts as an indirect factor between companionship and physical health: companionship improves mental health, which in turn improves physical health. Previous research suggested that the higher the activity level, the better the body function [61], which is similar to our research results. There are many reasons for the influence of companionship on the level of physical activity. In the case of females, companionship gives them a sense of security during night activities because they worry about crime, violence, and harassment at night [50]. In addition, emotional support and information feedback in daily companionship are positive factors for visitors to improve their physical exercise level [1]. Moreover, our results also show that sedentary behavior is extremely harmful to the body; previous studies have shown that sedentary behavior is strongly correlated with obesity and sexual and reproductive diseases [62; 63]. Therefore, this study provides strategies to improve physical activity, reduce sedentary behavior, and improve health through companionship. Another factor affecting physical health is mental health. In this regard, previous studies have shown that the relationship between physical and mental health is interdependent and mutually reinforcing [64; 65]. This study enriches the understanding of the relationship between physical and mental health from the perspective of companionship.
### Some Practical Implications for POS Design
We believe that two points should be given priority in the design of POS. First, the accessibility and convenience of traveling to POS: our survey respondents were mostly middle-aged and elderly people (53%), who mainly travel on foot (69%). In the context of healthy aging, we must pay attention to the accessibility of POS for the elderly and the comfort of the walking environment. Second, we must pay attention to social elements in the design of POS and consider the needs of companionship. For example, children like a flat and open POS where they can run and play [50]. The square dance activities of the elderly take up more space and need bright lights at night [50]. Meanwhile, some elderly people's rest areas require many seats, and couples like quiet places with dense vegetation [50]. Finally, previous studies have shown that, in addition to China, there are also different patterns of companionship in POS research in other countries [15; 62], such as the companionship of family and friends and the companionship of pets. Therefore, the method of understanding and designing POS from the perspective of companionship should be extended to POS of other cultures (countries).
## 6 Conclusions
In this study, three squares in Wuhan, China were used to test the role of companionship in public squares in improving physical activity and promoting physical and mental health. First, the influence of companionship on mental health was divided into direct and indirect influences. On the one hand, companionship has a direct and positive impact on good mental health. On the other hand, companionship improves physical health through physical activity, which indirectly improves mental health. Furthermore, there are two paths for the indirect impact of companionship on physical health: one indicates that companionship can improve physical health by increasing physical activity levels, and the other indicates that companionship can achieve better physical health by improving mental health.
The advantage of this study is that it is the first to propose understanding the use of POS in China from the perspective of companionship. The squares were used as a natural laboratory to evaluate the daily companionship and the physical and mental health of visitors. Meanwhile, this research further explored the method of designing POS from the perspective of companionship, which can be extended to POS of other cultures (countries).
This study has some limitations. In future research, more attention should be paid to the accuracy and diversity of health measurements; for example, subjects' self-evaluations should be combined with monitoring data from health devices. In addition, this study adopted a cross-sectional design, and future studies should include long-term longitudinal follow-up.
Author Contributions: Conceptualization, X.X. and L.L.; methodology, X.X. and L.L.; investigation, X.X.; software, X.X. and H.W.; validation, H.W. and L.L.; data curation, X.X.; writing--original draft preparation, X.X.; writing--review and editing, X.X., L.L. and Z.P.; supervision, Z.P., L.L. and H.W. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (No. 51978535) and the Humanities and Social Science Project of the Ministry of Education (No. 19YJCZH187).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data sharing not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
## References
* (1) Duncan, S.C.; Duncan, T.E.; Strycker, L.A. Sources and types of social support in youth physical activity. _Health Psychol._**2005**, _24_, 3-10. [CrossRef]
* (2) Molloy, G.J.; Dixon, D.; Hamer, M.; Sniehotta, F.F. Social support and regular physical activity: Does planning mediate this link? _Br. J. Health Psychol._**2010**, _15_, 859-870. [CrossRef] [PubMed]
* (3) Teramoto, C.; Matsunaga, A.; Nagata, S. Cross-sectional study of social support and psychological distress among displaced earthquake survivors in Japan. _Jpn. J. Nurs. Sci._**2015**, _12_, 320-329. [CrossRef]
* (4) Chen, Y.; Liu, T.; Liu, W. Increasing the use of large-scale public open spaces: A case study of the North Central Axis Square in Shenzhen, China. _Habitat Int._**2016**, _53_, 66-77. [CrossRef]
* (5) Wendel-Vos, W.; Droomers, M.; Kremers, S.; Brug, J.; van Lenthe, F. Potential environmental determinants of physical activity in adults: A systematic review. _Obes. Rev._**2007**, \(8\), 425-440. [CrossRef]
* (6) Florez, K.R.; Richardson, A.S.; Ghosh-Dastidar, M.; Troxel, W.; DeSantis, A.; Colabianchi, N.; Dubowitz, T. The power of social networks and social support in promotion of physical activity and body mass index among African American adults. _SSM Popul. Health_**2018**, \\(4\\), 327-333. [CrossRef] [PubMed]
* (7) Charmaraman, L.; Mueller, M.K.; Richer, A.M. The Role of Pet Companionship in Online and Offline Social Interactions in Adolescence. _Child Adolesc. Soc. Work J._**2020**, _37_, 589-599. [CrossRef]
* (8) Tower, R.B.; Nokota, M. Pet companionship and depression: Results from a United States Internet sample. _Anthrozoös_**2015**, _19_, 50-64. [CrossRef]
* (9) Zhao, J.; Liu, X.; Wang, M. Parent-child cohesion, friend companionship and left-behind children's emotional adaptation in rural China. _Child. Abus. Negl._**2015**, _48_, 190-199. [CrossRef]
* (10) Cheng, J.; Yu, H. The relationship between neighborhood environment and physical activity in Chinese youth: A retrospective cross-sectional study. _J. Public Health Heidelb._**2021**. [CrossRef]
* (11) Uchino, B.N. Social support and health: A review of physiological processes potentially underlying links to disease outcomes. _J. Behav. Med._**2006**, _29_, 377-387. [CrossRef]
* (12) Fitzgerald, A.; Fitzgerald, N.; Aherne, C. Do peers matter? A review of peer and/or friends' influence on physical activity among American adolescents. _J. Adolesc._**2012**, _35_, 941-958. [CrossRef] [PubMed]
* (13) Berkman, L.F.; Glass, T.; Brissette, I.; Seeman, T.E. From social integration to health: Durkheim in the new millennium. _Soc. Sci. Med._**2000**, _51_, 843-857. [CrossRef]
* (14) Middelweerd, A.; te Velde, S.J.; Abbott, G.; Timperio, A.; Brug, J.; Ball, K. Do intrapersonal factors mediate the association of social support with physical activity in young women living in socioeconomically disadvantaged neighbourhoods? A longitudinal mediation analysis. _PLoS ONE_**2017**, _12_. [CrossRef]
* (15) Harley, A.E.; Katz, M.L.; Heaney, C.A.; Duncan, D.T.; Buckworth, J.; Odoms-Young, A.; Willis, S.K. Social Support and Companionship Among Active African American Women. _Am. J. Health Behav._**2009**, _33_, 673-685. [CrossRef]
* (16) Cavallo, D.N.; Brown, J.D.; Tate, D.F.; DeVellis, R.F.; Zimmer, C.; Ammerman, A.S. The role of companionship, esteem, and informational support in explaining physical activity among young women in an online social network intervention. _J. Behav. Med._**2014**, _37_, 955-966. [CrossRef] [PubMed]
* (17) Silva, P.; Lott, R.; Mota, J.; Welk, G. Direct and Indirect Effects of Social Support on Youth Physical Activity Behavior. _Pediatr. Exerc. Sci._**2014**, _26_, 86-94. [CrossRef] [PubMed]
* (18) Gill, M.; Chan-Golston, A.M.; Rice, L.N.; Roth, S.E.; Crespi, C.M.; Cole, B.L.; Koniak-Griffin, D.; Prelip, M.L. Correlates of Social Support and its Association With Physical Activity Among Young Adolescents. _Health Educ. Behav._**2018**, _45_, 207-216. [CrossRef] [PubMed]
* (19) Smith, G.L.; Banting, L.; Eime, R.; O'Sullivan, G.; van Uffelen, J.G.Z. The association between social support and physical activity in older adults: A systematic review. _Int. J. Behav. Nutr. Phys. Act._**2017**, _14_. [CrossRef]
* (20) Belanger, N.M.S.; Patrick, J.H. The Influence of Source and Type of Support on College Students' Physical Activity Behavior. _J. Phys. Act. Health_**2018**, _15_, 183-190. [CrossRef]
* (21) Shang, Q.Q. Social support, rural/urban residence, and depressive symptoms among Chinese adults. _J. Community Psychol._**2020**, _48_, 849-861. [CrossRef]
* (22) Oremus, M.; Tyas, S.L.; Maxwell, C.J.; Konnert, C.; O'Connell, M.E.; Law, J. Social support availability is positively associated with memory in persons aged 45-85 years: A cross-sectional analysis of the Canadian Longitudinal Study on Aging. _Arch. Gerontol. Geriatr._**2020**, _86_, 103962. [CrossRef] [PubMed]
* (23) Bryson, B.A.; Bogart, K.R. Social support, stress, and life satisfaction among adults with rare diseases. _Health Psychol._**2020**, _39_, 912-920. [CrossRef] [PubMed]
* (24) Zunzunegui, M.V.; Alvarado, B.E.; Del Ser, T.; Otero, A. Social networks, social integration, and social engagement determine cognitive decline in community-dwelling Spanish older adults. _J. Gerontol. Ser. B-Psychol. Sci. Soc. Sci._**2003**, _58_, S93-S100. [CrossRef] [PubMed]
* (25) Yu, J.; Choe, K.; Kang, Y. Anxiety of Older Persons Living Alone in the Community. _Healthcare_**2020**, \(8\), 287. [CrossRef]
* (26) Mitchell, J.A.; Cadet, T.; Burke, S.; Williams, E.D.; Alvarez, D. The Paradoxical Impact of Companionship on the Mental Health of Older African American Men. _J. Gerontol. B Psychol. Sci. Soc. Sci._**2018**, _73_, 230-239. [CrossRef]
* (27) Ward, A.; Arola, N.; Bohnert, A.; Lieb, R. Social-emotional adjustment and pet ownership among adolescents with autism spectrum disorder. _J. Commun. Disord._**2017**, _65_, 35-42. [CrossRef]
* (28) Lu, J.; Zhang, C.; Xue, Y.; Mao, D.; Zheng, X.; Wu, S.; Wang, X. Moderating effect of social support on depression and health promoting lifestyle for Chinese empty nesters: A cross-sectional study. _J. Affect. Disord._**2019**, _256_, 495-508. [CrossRef]
* (29) Fan, X.; Yu, S.; Peng, J.; Fang, X. The Relationship between Perceived Life Stress, Loneliness and General Well-Being among the Left-behind Rural Children: Psychological Capital as a Mediator and Moderator. _J. Psychol. Sci._**2017**, _40_, 388-394.
* (30) Turagabeci, A.R.; Nakamura, K.; Kizuki, M.; Takano, T. Family structure and health, how companionship acts as a buffer against ill health. _Health Qual. Life Outcomes_**2007**, \(5\), 61. [CrossRef]
* (31) Hui Gan, G.Z.; Hill, A.M.; Yeung, P.; Keesing, S.; Netto, J.A. Pet ownership and its influence on mental health in older adults. _Aging Ment. Health_**2020**, _24_, 1605-1612. [CrossRef]
* (32) Hughes, M.J.; Verreynne, M.L.; Harpur, P.; Pachana, N.A. Companion Animals and Health in Older Populations: A Systematic Review. _Clin. Gerontol._**2020**, _43_, 365-377. [CrossRef] [PubMed]
* (33) Dransart, C.; Jaune, P.; Gourdin, M. Companion animals and pets: An underestimated medico-psychological support? _Ann. Med. Psychol._**2020**, _178_, 145-149. [CrossRef]
* (34) Pels, F.; Kleinert, J. Loneliness and physical activity: A systematic review. _Int. Rev. Sport Exerc. Psychol._**2016**, \(9\), 231-260. [CrossRef]
* (35) Van Dyck, D.; Teychenne, M.; McNaughton, S.A.; De Bourdeaudhuij, I.; Salmon, J. Relationship of the Perceived Social and Physical Environment with Mental Health-Related Quality of Life in Middle-Aged and Older Adults: Mediating Effects of Physical Activity. _PLoS ONE_**2015**, _10_. [CrossRef]
* (36) Beyene Getahun, K.; Ukke, G.G.; Alemu, B.W. Utilization of companionship during delivery and associated factors among women who gave birth at Arba Minch town public health facilities, southern Ethiopia. _PLoS ONE_**2020**, _15_, e0240239. [CrossRef]
* (37) Gyasi, R.M. Social support, physical activity and psychological distress among community-dwelling older Ghanaians. _Arch. Gerontol. Geriatr._**2019**, _81_, 142-148. [CrossRef] [PubMed]
* (38) Faleschini, S.; Millar, L.; Rifas-Shiman, S.L.; Skouteris, H.; Hivert, M.F.; Oken, E. Women's perceived social support: Associations with postpartum weight retention, health behaviors and depressive symptoms. _BMC Women's Health_**2019**, _19_. [CrossRef]
* (39) Carrapatoso, S.; Cardon, G.; Van Dyck, D.; Carvalho, J.; Gheysen, F. Walking as a Mediator of the Relationship of Social Support With Vitality and Psychological Distress in Older Adults. _J. Aging Phys. Act._**2018**, _26_, 430-437. [CrossRef]
* (40) Carr, E.C.J.; Wallace, J.E.; Onyewuchi, C.; Hellyer, P.W.; Kogan, L. Exploring the Meaning and Experience of Chronic Pain with People Who Live with a Dog: A Qualitative Study. _Anthrozoös_**2018**, _31_, 551-565. [CrossRef]
* (41) Herbolsheimer, F.; Ungar, N.; Peter, R. Why Is Social Isolation Among Older Adults Associated with Depressive Symptoms? The Mediating Role of Out-of-Home Physical Activity. _Int. J. Behav. Med._**2018**, _25_, 649-657. [CrossRef]
* (42) Perissinotto, C.M.; Stijacic Cenzer, I.; Covinsky, K.E. Loneliness in older persons: A predictor of functional decline and death. _Arch. Intern. Med._**2012**, _172_, 1078-1083. [CrossRef]
* (43) Dong, X.; Chang, E.S.; Wong, E.; Simon, M. Perception and negative effect of loneliness in a Chicago Chinese population of older adults. _Arch. Gerontol. Geriatr._**2012**, _54_, 151-159. [CrossRef] [PubMed]
* (44) Edney, A.T.B. Companion Animals and Human Health. _Vet. Rec._**1992**, _130_, 285-287. [CrossRef]
* (45) Gillum, R.F.; Obisesan, T.O. Living with Companion Animals, Physical Activity and Mortality in a US National Cohort. _Int. J. Environ. Res. Public Health_**2010**, \(7\), 2452-2459. [CrossRef]
* (46) Arunachalam, D.; Nguyen, D.Q.V. Family connectedness, school attachment, peer influence and health-compromising behaviours among young Vietnamese males. _J. Youth Stud._**2015**, _19_, 287-304. [CrossRef]
* (47) Wood, L.; Martin, K.; Christian, H.; Nathan, A.; Lauritsen, C.; Houghton, S.; Kawachi, I.; McCune, S. The pet factor-companion animals as a conduit for getting to know people, friendship formation and social support. _PLoS ONE_**2015**, _10_, e0122085. [CrossRef]
* (48) Chen, S.C.; Moyle, W.; Jones, C.; Petsky, H. A social robot intervention on depression, loneliness, and quality of life for Taiwanese older adults in long-term care. _Int. Psychogeriatr._**2020**, _32_, 981-991. [CrossRef]
* (49) Abel, S.; Machin, T.; Brownlow, C. Support, socialise and advocate: An exploration of the stated purposes of Facebook autism groups. _Res. Autism Spectr. Disord._**2019**, _61_, 10-21. [CrossRef]
* (50) Cao, J.W.; Kang, J. Social relationships and patterns of use in urban public spaces in China and the United Kingdom. _Cities_**2019**, _93_, 188-196. [CrossRef]
* (51) Kuroda, A.; Tanaka, T.; Hirano, H.; Ohara, Y.; Kikutani, T.; Furuya, H.; Obuchi, S.P.; Kawai, H.; Ishii, S.; Akishita, M.; et al. Eating Alone as Social Disengagement is Strongly Associated With Depressive Symptoms in Japanese Community-Dwelling Older Adults. _J. Am. Med. Dir. Assoc._**2015**, _16_, 578-585. [CrossRef] [PubMed]
* (52) Chai, Y.; Ta, N.; Ma, J. The socio-spatial dimension of behavior analysis: Frontiers and progress in Chinese behavioral geography. _J. Geogr. Sci._**2016**, _26_, 1243-1260. [CrossRef]
* (53) Hagerstrand, T. Reflections on what about people in regional science. _Pap. Reg. Sci. Assoc._**1989**, _66_, 1-6. [CrossRef]
* (54) Wang, D.; Li, F.; Chai, Y. Activity Spaces and Sociospatial Segregation in Beijing. _Urban Geogr._**2013**, _33_, 256-277. [CrossRef]
* (55) Cheng, L.A.; Mendonca, G.; Farias Junior, J.C. Physical activity in adolescents: Analysis of the social influence of parents and friends. _J. Pediatr._**2014**, _90_, 35-41. [CrossRef]
* (56) Anderson, K.A.; Lord, L.K.; Hill, L.N.; McCune, S. Fostering the Human-Animal Bond for Older Adults: Challenges and Opportunities. _Act. Adapt. Aging_**2015**, _39_, 32-42. [CrossRef]
* (57) Wang, R.Y.; Chen, H.S.; Liu, Y.; Lu, Y.; Yao, Y. Neighborhood social reciprocity and mental health among older adults in China: The mediating effects of physical activity, social interaction, and volunteering. _BMC Public Health_**2019**, _19_. [CrossRef]
* (58) Mendonca, G.; Junior, J.C. Physical activity and social support in adolescents: Analysis of different types and sources of social support. _J. Sports Sci._**2015**, _33_, 1942-1951. [CrossRef] [PubMed]
* (59) Stubbs, B.; Williams, J.; Shannon, J.; Gaughran, F.; Craig, T. Peer support interventions seeking to improve physical health and lifestyle behaviours among people with serious mental illness: A systematic review. _Int. J. Ment. Health Nurs._**2016**, _25_, 484-495. [CrossRef]
* (60) Tzivian, L.; Friger, M.; Kushnir, T. The death and owning of the companion dog: Association between resource loss and stress in healthy Israeli women. _J. Vet. Behav._**2015**, _10_, 223-230. [CrossRef]
* (61) Holstila, A.; Manty, M.; Rahkonen, O.; Lahelma, E.; Lahti, J. Changes in leisure-time physical activity and physical and mental health functioning: A follow-up study. _Scand. J. Med. Sci. Sports_**2017**, _27_, 1785-1792. [CrossRef] [PubMed]
* (62) Liangruenrom, N.; Craike, M.; Biddle, S.J.H.; Sutikasem, K.; Pedisic, Z. Correlates of physical activity and sedentary behaviour in the Thai population: A systematic review. _BMC Public Health_**2019**, _19_, 414. [CrossRef] [PubMed]
* (63) Zhang, T.; Lu, G.; Wu, X.Y. Associations between physical activity, sedentary behaviour and self-rated health among the general population of children and adolescents: A systematic review and meta-analysis. _BMC Public Health_**2020**, _20_, 1343. [CrossRef] [PubMed]
* (64) Ohrberger, J.; Fichera, E.; Sutton, M. The dynamics of physical and mental health in the older population. _J. Econ. Ageing_**2017**, \(9\), 52-62. [CrossRef] [PubMed]
* (65) Ohrberger, J.; Fichera, E.; Sutton, M. The relationship between physical and mental health: A mediation analysis. _Soc. Sci. Med._**2017**, _195_, 42-49. [CrossRef]

Companionship is the most important social support factor in physical activities, but the influence of companionship on the daily physical activities of Chinese people in public squares has been unclear. Ordered logistic regression was conducted to identify the companionship and physical activity factors associated with the physical and mental health of residents (\(n=196\)). The results show that companionship has direct and indirect effects on mental health, and that companionship acts on physical health through physical activity in public squares. Our research examines the use of public open space (POS) from the perspective of companionship and provides a new perspective for improving the sociality of POS design.
Keywords: physical activity; companionship; social support; health status
with Satellite Imagery
Miao Zhang\\({}^{1}\\) Rumi Chunara\\({}^{2}\\)
\\({}^{1}\\)Tandon School of Engineering, New York University
\\({}^{2}\\) Tandon School of Engineering; School of Global Public Health, New York University
New York, USA
{miaozhang, rumi.chunara}@nyu.edu
## Introduction
Dense pixel-level image recognition via deep learning for tasks such as segmentation has a variety of applications in landscape feature analysis from satellite images, for example, regional water quality analysis [13] or dust emission estimation [23]. Success of these methods relies on powerful visual representations that include both local and global information. However, since pixel-level annotations are usually costly, fully supervised learning is challenging when the amount and variety of labeled data are scarce. Therefore, self-supervised learning is a promising alternative: pre-training an image encoder and transferring the learnt representations to downstream problems. As a mainstream approach, contrastive self-supervised techniques have shown state-of-the-art performance in learning image representations for land cover semantic segmentation across locations [1, 16]. In particular, as labels are hard to obtain for satellite images and contrastive approaches do not require them, these methods have demonstrated benefits in many real-world tasks, including monitoring dynamic land surfaces [15], irrigation detection from uncurated and unlabeled satellite images [1], and volcanic unrest detection with scarce image labels [17].
Importantly, recent attention in machine learning systems has highlighted performance inequities including those by geographic area [14, 15, 16, 17, 18], and how prediction inequities would compromise policy-making goals [16]. Disparities at a geographic level fall into the fairness literature due to implications of unequal distribution of resources, opportunities, and essential services, leading to disparities in quality of life and opportunities for those who live in specific areas [14]. As recent work has reinforced, disparities of machine learning model prediction at a geographic level often show disparate performance with respect to minoritized groups or already under-resourced areas [16]. Therefore, given the increased potential of self-supervised contrastive learning, here we turn attention to disparity risks in recognition outcomes between urban and rural areas. The consequences of such recognition tasks have wide usage for societal decisions including urban planning, climate change and disaster risk assessment [15, 17], so disparity shapes an important concern. Further, while recent work has identified disparities with satellite image representation, specifically across urban and rural lines, and shown the negative consequence on poverty prediction [16], there is limited work on mitigating urban-rural disparities with state-of-the-art vision recognition schemes for landscape analysis, despite their wide applications.
To bridge this gap, we examine the task of land-cover segmentation and identify disparities across urban and rural areas on satellite images from different locations. As previous work shows, segmentation performance can be disparate across geography types. For example, in areas where land-cover objects have higher density or heterogeneity, performance will be lower even for similar training sample sizes [22]. Moreover, identifying and thus addressing disparities for geographic object segmentation is different from classification tasks in other image types such as facial images [23, 24, 25]. De-biasing classification outcomes relies on robust image-level global representations, which are not ideal for segmentation, in which local features are important, and thus may not apply to satellite data and relevant tasks. Instead, to our knowledge, we present the first exploration of learning generalizable and robust local landscape features while reducing spurious features that are unequally correlated with areas of different urbanization or economic development (referred to as "bias" or "spurious information"). In this way, our work addresses disparity issues in contrastive self-supervised learning for satellite image segmentation. The specific contributions are:
1. We propose a causal model depicting the relationship between landscape features and urban/rural property of images, to unravel the type of implicit bias that a model might learn from data. This framework enables us to identify and address unique disparity challenges in deep learning application for satellite images.
2. For the described bias scenario, we design a fair representation learning method, termed FairDCL, which regularizes the statistical association between pixel-level image features and sensitive variables. The method includes a novel feature-map-based local mutual information estimation module which incorporates layer-wise fairness regularization into the contrastive optimization objective. Given the characteristics of satellite images, this work serves to mitigate performance disparities in downstream landscape segmentation tasks.
3. On real-world satellite datasets, FairDCL shows advantages for learning robust image representations in contrastive pre-training; it surpasses state-of-the-art methods, demonstrating smaller urban-rural performance differences and higher worst-case performance, without sacrificing overall accuracy on the target tasks.
Scope and Limitation. This work specifically focuses on image representation learning without supervision of labels for the objects to be segmented (also referred to as pre-training), motivated by the approach's effectiveness and low annotation cost as described. Therefore, we do not cover other image analysis schemes, such as supervised or semi-supervised learning, and focus on comparisons with unsupervised robust representation learning baselines, including gradient reversal learning, domain independent learning, and global representation debiasing with mutual information. The quality of the learnt representations is evaluated by applying a lightweight decoder for the semantic segmentation target to obtain the final downstream task performance on segmenting common landscape objects, following previous work [23, 24].
## Related Work
### Self-Supervised Methods for Satellite Images
Semantic segmentation, which quantifies land-cover location and boundary at pixel level, is a fundamental problem in satellite data analysis [10]. Given that the per-pixel annotation required for supervised training is expensive, a growing body of literature leverages self-supervised methods to extract useful image features from large-scale satellite image datasets [11, 24, 25]. Contrastive learning is used as a self-supervised pre-training approach for various downstream vision tasks including classification, detection, and segmentation [2, 23, 26, 27, 28, 29]. Though most work in this area focuses on optimizing global representations for a single prediction per image, such as the presence of an animal species in an image [23, 24], recent work has turned to learning representations suitable for dense predictions (i.e., a prediction for each pixel); such approaches train the model to compare local regions within images, thus preserving pixel-level information [23, 25, 26]. Other work [23] uses overlapped local blocks to increase depth and capacity for decoders, which improves local learning. Such methods show the importance of local image representations on dense visual problems like satellite image segmentation, which we leverage for the first time to mitigate disparities on such problems.
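To make the contrast between global and dense objectives concrete, the following is a simplified PyTorch sketch of a pixel-level InfoNCE loss in the spirit of the dense methods cited above (correspondence by nearest-neighbor feature matching, as in DenseCL-style approaches); it is an illustrative stand-in, not any cited method's exact implementation.

```python
import torch
import torch.nn.functional as F

def dense_info_nce(feat_a: torch.Tensor, feat_b: torch.Tensor,
                   tau: float = 0.2) -> torch.Tensor:
    """Simplified dense contrastive loss between two augmented views.
    feat_a, feat_b: (B, C, H, W) feature maps from the encoder.
    Each local vector in view A is matched to its most similar local
    vector in view B (the positive) and contrasted against the rest."""
    b, c, h, w = feat_a.shape
    a = F.normalize(feat_a.flatten(2), dim=1)       # (B, C, HW)
    v = F.normalize(feat_b.flatten(2), dim=1)       # (B, C, HW)
    sim = torch.einsum("bci,bcj->bij", a, v) / tau  # (B, HW, HW)
    pos = sim.argmax(dim=2)                         # nearest-neighbor match
    return F.cross_entropy(sim.reshape(b * h * w, h * w),
                           pos.reshape(b * h * w))
```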
When satellite data is collected in multiple temporal resolutions, studies have included contrastive learning methods to learn the representations invariant to subtle landscape variations across the short-term [15, 26]. However, this type of work requires multi-temporal satellite data and does not consider the same question regarding generalizability with respect to urbanization.
### Disparities in Image Recognition
Fairness-promoting approaches are being designed in multiple visual recognition domains, generally with human objects and demographic characteristics as the sensitive attributes. For example, in face recognition applications, methods are proposed for mitigating bias across groups like age, gender, or race/ethnicity. Such methods include constraining models from learning sensitive information by adversarially training sensitive attribute classifiers [16, 27], using penalty losses [28, 29], sensitive information disentanglement [20, 21], and augmenting biased data using generative networks [22]. Related to healthcare data and practice, methods have shown reduction in bias by altering sensitive features such as skin color while preserving features relevant to the clinical tasks [23, 24], by augmentation [21], and by adversarial training [20, 22, 23] (Anton et al. 2021). In comparison, investigation on satellite imagery is limited; Xie _et al._ (2022) formulate disparity among sub-units with linked spatial information, using spatial partitionings instead of sensitive attributes. Aiken _et al._ (2023) illustrate that urban-rural disparities exist in wealth prediction with satellite images. However, no work has explored disparity in image representation learning for satellite images nor proposed a method to mitigate it.
A few recent studies have examined robustness and fairness in contrastive learning generally, including a sampling strategy adjusted to restrict models from leveraging sensitive information Tsai et al. (2021). However, this approach can lose task-specific information by only letting the model differentiate samples from the same group to avoid learning group boundaries. Two-stage training with balanced augmentation Zhang et al. (2022), fairness-aware losses to penalize sensitive information used in positive and negative pair differentiation Park et al. (2022), and using hard negative samples for contrast to improve representation generalization Robinson et al. (2020) are other proposed approaches, yet such methods apply only to classification based on image-level representations, as opposed to the object-level segmentation that is the focus of this work.
### Summary of Gaps in the Literature
Existing work in robustness and disparity mitigation for image recognition tasks is limited in multiple ways, with important gaps specifically for satellite imagery. First, some robustness methods assume that spurious feature properties are known, such as skin colors, hair colors, presence of glasses Wang et al. (2020); Ramaswamy, Kim, and Russakovsky (2021); Yuan et al. (2022), and remove their influences on model performance. The analog of such a property is not available in satellite images, nor is it homogeneous (e.g. each country has unique landscapes relating to urban/rural). Therefore, we do not explicitly define spurious features in our model but automatically extract them with urban/rural discriminators during training. Second, since the existing methods are mostly designed for classification problems, they use image-level representation approaches. However, fairness at an image level would not necessarily extend to pixel-level dense predictions. Third, there is very little work on robust and fair satellite image analysis, for which biased features are harder to discover, interpret and remove, compared to human-object images. Existing work on generalizable satellite representations across temporal changes train models with acquisition date, which is not always available for satellite datasets.
## Problem Statement
### Selection of Sensitive Attributes
In algorithmic fairness studies, sensitive attributes are those that are historically linked to discrimination or bias, and should not be used as the basis for decisions Dwork et al. (2012). Commonly, for example in studies focused on face detection, demographic factors such as race and gender are used as sensitive attributes due to their potential, but unwanted influence on decision-making processes Lee (2018); Pessach and Shmueli (2022). For satellite imagery, while individual-level attributes such as race and gender are not of concern, there are geographical properties such as urban-rural disparities which have precedence both for historical disparities and legal precedence for the need for protection from such disparities Ananian and Dellaferrera (2024). Indeed, rural areas in the United States and globally have lower resources such as health care services Peek-Asa et al. (2011); Lin et al. (2014), higher education disadvantage Roscigno, Tomaskovic-Devey, and Crowley (2006); Li et al. (2022) and lower investment in other areas such as communication technology Nazem et al. (1996). These factors can all significantly impact outcomes for populations in these areas, and it is critical that future decisions impacting rural and urban places do not promulgate such disparities. In terms of operationalizing these attributes, while features of specific urban or rural areas can vary globally, there is consensus that urban areas demarcate cities and their surroundings. Urban areas are very developed, meaning there is a density of human structures, such as houses, commercial buildings, roads, bridges, and railways NationalGeographic (2020). In sum, examining urban and rural designations as sensitive attributes can unveil systemic inequalities and aid in creating more equitable algorithms and policies globally.
### Urban-Rural Disparities with Feature Encoders
We perform satellite image feature extraction with the studied contrastive self-supervised learning (SSL) method, MoCo-v2 Chen et al. (2020), and report the semantic segmentation fine-tuning results in Figure 1. Several major disparities are visible, especially for the "Forest" and "Agricultural" classes. To further expose the issue, we evaluate
Figure 1: Model segmentation performance on urban and rural images of LoveDA Wang et al. (2021), measured by intersection-over-union (IoU). Two types of upstream feature encoders are used: (1) CNN encoder trained on unlabeled satellite images with contrastive self-supervised learning (SSL), and (2) pre-trained foundation model Segment Anything (SAM) Kirillov et al. (2023). Urban-rural disparities are observed for land-cover classes with both encoders, and the disadvantaged groups are consistent across learning models.
with a general-purpose feature encoder, the Segment Anything Model (SAM) (Kirillov et al. 2023). It is a vision foundation model trained on a large image dataset (11 million images) with wide geographic coverage for learning comprehensive features. Therefore, the model can transfer zero-shot to image segmentation on our dataset. The results show disparities similar to SSL for each land-cover class (Figure 1). Motivated by this problem, we propose a causal model to unravel feature relationships in satellite images and a design that utilizes robust features to mitigate disparity.
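For reference, MoCo-v2 pre-training optimizes an InfoNCE objective over a queue of negative keys; a simplified sketch of that loss (omitting the momentum-encoder update and queue management) is:

```python
import torch
import torch.nn.functional as F

def moco_info_nce(q: torch.Tensor, k_pos: torch.Tensor,
                  queue: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Simplified MoCo-style InfoNCE loss.
    q:     (B, D) query embeddings from the online encoder
    k_pos: (B, D) positive keys from the momentum encoder (same images)
    queue: (K, D) memory bank of negative keys from past batches"""
    q, k_pos, queue = (F.normalize(t, dim=1) for t in (q, k_pos, queue))
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)  # (B, 1) positive logits
    l_neg = q @ queue.t()                         # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)        # positive is index 0
```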
### Causal Model for Feature Relationships
Land-cover objects in satellite images, such as residential buildings, roads, and vegetation, often have heterogeneous shapes and distributions in urban and rural areas, even within the same geographic region. These distributions are affected by varying levels of development (infrastructure, greening, etc.). Considering an attribute \(S=\{s_{0},s_{1}\}\) denoting urban/rural area, we define the visual representation (high-dimensional embeddings output by model intermediate layers) as \(X=\{X_{spurious},X_{robust}\}\), where \(X_{spurious}\) includes information that varies across urban/rural groups in \(S\), for example, the contour, color, or texture of the "road" or "building" class. \(X_{robust}\), on the other hand, includes generalizable information, for example, that "road" segments are narrow and long, while "building" segments are clustered. When the model output \(Y\) is drawn from both \(X_{robust}\) and \(X_{spurious}\), it can lead to biased performance. For example, roads in grey/blue color (Figure 3 A), with vehicles on them (Figure 3 B), and with lane markings (Figure 3 C), are segmented better (blue circle) than the others (red circle, Figure 3 D). Examples of more classes' spurious and robust components are in the supplementary material A.\({}^{1}\)
Footnote 1: Code and supplementary material can be accessed at: [https://github.com/ChunaraLab/FairDCL-mitigating-urban-rural-disparity](https://github.com/ChunaraLab/FairDCL-mitigating-urban-rural-disparity)
As a result, urban and rural representations containing disproportionate levels of spurious information will cause group-level model performance disparities in semantic segmentation. Note that there are other factors not uniformly distributed across urban and rural areas, such as the number of pixels per land object class. Since we focus on representation learning, we denote such factors as unmeasured confounders \(U\). The problem is illustrated, in terms of causal relationships, in Figure 2. Different from methods which directly alter spurious features that are defined a priori, _the goal here is to reduce the part of model representations that is correlated to the urban and rural split, an important delineation which has strong disparities globally_. That is, to obtain \(\hat{X}_{robust}\) which promotes \(\hat{Y}\perp S|\hat{X}_{robust}\), where the model prediction \(\hat{Y}\) is independent of group discrepancy.
Accordingly, the model needs to (1) focus on robust and generalizable landscape features, and (2) capture local features of the image in the pre-training stage. With contrastive self-supervised pre-training as the framework, we propose an intervention algorithm to achieve these goals and promote urban-rural downstream segmentation equity.
## Methodology
### Datasets
While several standard image datasets used in fairness studies exist, real-world satellite imagery datasets with linked group-level properties, specifically urbanization, are very limited. We identified two datasets which had, or could be linked with, urban/rural annotations for disparity analyses, collected from Asia and Europe respectively and with different spatial resolutions:
_LoveDA_ [15] is composed of 0.3m spatial resolution RGB satellite images collected from three cities in China. Images are annotated at pixel level into 7 land-cover object classes, along with a label indicating whether they are from an urban or rural district. Notably, images from the two groups have different class distributions. For example, urban areas contain more buildings and roads, while rural areas contain larger amounts of agriculture [16]. Moreover, it has been shown that model segmentation performance differs across urban and rural satellite images [17]. We split the original images into 512\\(\\times\\)512 pixel tiles and take 18% of the data for testing; of the rest, 90% are used for contrastive pre-training (5845 urban tiles and 5572 rural tiles) and 10% for fine-tuning the pre-trained representation to generate predictions.
Figure 2: Diagram of the defined causal relationships between the representation \\(X\\) learnt with contrastive pre-training, target task prediction outputs \\(Y\\), and the urban/rural attribute \\(S\\). \\(X\\) contains two parts: \\(X_{spurious}\\), generated from features spuriously correlated to \\(S\\), and \\(X_{robust}\\), generated from independent and unchangeable features. \\(U\\) denotes unmeasured confounders which cause both \\(S\\) and \\(X_{spurious}\\) and thus result in correlations between \\(S\\) and \\(X_{spurious}\\).
Figure 3: Examples of segmentation bias for the "road" class due to spurious landscape features; the model segments certain patterns well, like straight and paved roads (blue circles), but segments the variations poorly, like curvy and sand roads (red circles).
_EOLearn Slovenia_ [21] is composed of 10m spatial resolution Sentinel-2 images covering the whole region of Slovenia for the year 2017, with pixel-wise land cover annotations for 10 classes. We only use the RGB bands for consistency with the other datasets, remove images that have more than 10% cloud cover, and split images into 256\\(\\times\\)256 pixel tiles to enlarge the training set. Labels are assigned by assessing whether the center of each tile is located within urban boundaries (using urban municipality information2 and administrative boundaries from OpenStreetMap3). This process generates 1760 urban tiles and 1996 rural tiles in total. Similar to the LoveDA process, 18% of the data are used for testing, and of the rest, 90% are used for pre-training and 10% for fine-tuning. The tiling and labeling procedure is sketched in code below.
Footnote 2: [https://www.gov.si/en/topics/towns-and-protected-areas-in-slovenia/](https://www.gov.si/en/topics/towns-and-protected-areas-in-slovenia/)
Footnote 3: [https://www.openstreetmap.org/#map=12/40.7154/-74.1289](https://www.openstreetmap.org/#map=12/40.7154/-74.1289)
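As an illustration of the tiling and labeling procedure shared by both datasets, the sketch below assumes georeferenced GeoTIFF scenes readable with `rasterio` and urban boundaries loaded as `shapely` polygons; the function and variable names are placeholders, not the released code.

```python
import rasterio
from rasterio.windows import Window
from shapely.geometry import Point

TILE = 256  # 256x256 tiles as for Slovenia; LoveDA uses 512

def tile_and_label(scene_path, urban_polygons):
    """Yield (tile_array, 'urban' or 'rural') pairs from one scene."""
    with rasterio.open(scene_path) as src:
        for row in range(0, src.height - TILE + 1, TILE):
            for col in range(0, src.width - TILE + 1, TILE):
                tile = src.read(window=Window(col, row, TILE, TILE))
                # The geographic coordinate of the tile center decides the label.
                cx, cy = src.xy(row + TILE // 2, col + TILE // 2)
                inside = any(p.contains(Point(cx, cy)) for p in urban_polygons)
                yield tile, "urban" if inside else "rural"
```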
### Metrics
The quality of representations learnt from self-supervised pre-training is usually evaluated by their transferability to downstream tasks [13, 14]. For downstream semantic segmentation, we use Intersection-over-Union (IoU) as the accuracy metric, calculated from pixel-wise true positives (\\(TP\\)), false positives (\\(FP\\)), and false negatives (\\(FN\\)),
\\[\\text{IoU}:=\\frac{TP}{TP+FP+FN}.\\]
Group accuracy for group \\(g^{i}\\) is computed as the mean of the class-wise IoUs (referred to as \\(\\mu_{g^{i}}\\)). Overall model accuracy is the average of the group results (mIoU).
We use two fairness metrics. First, the group difference with regard to accuracy [12, 13, 14] (Diff). Diff for a 2-element sensitive attribute group \\(\\{g^{1},g^{2}\\}\\) is defined as:
\\[\\text{Diff }\\{g^{1},g^{2}\\}:=\\frac{|\\mu_{g^{1}}-\\mu_{g^{2}}|}{\\min\\{\\mu_{g^{1}},\\mu_{g^{2}}\\}}.\\]
Second, the worst group result (Wst), which is the lower of the urban and rural group accuracies. This is motivated by the concern that zero disparity can be reached by worsening overall performance [14].
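A minimal Python transcription of these metrics, written directly from the formulas above (the dictionary interface is an illustrative assumption):

```python
import numpy as np

def iou(tp, fp, fn):
    return tp / (tp + fp + fn)

def fairness_metrics(class_ious):
    """class_ious: dict like {'urban': [per-class IoUs], 'rural': [...]}."""
    mu = {g: float(np.mean(v)) for g, v in class_ious.items()}  # group accuracy
    diff = abs(mu["urban"] - mu["rural"]) / min(mu["urban"], mu["rural"])
    wst = min(mu["urban"], mu["rural"])        # worst-group accuracy
    miou = (mu["urban"] + mu["rural"]) / 2     # overall accuracy
    return {"Diff": diff, "Wst": wst, "mIoU": miou}
```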
### Multi-Level Representation De-biasing
The idea of constraining the mutual information between the representation and the sensitive attribute, also referred to as bias, to achieve attribute-invariant predictions has multiple applications [15, 16, 17], which all operate on a global representation \\(\\mathbf{z}=F(\\mathbf{d})\\) output by the image encoder \\(F\\). However, invariance constraints only on the global output layer do not guarantee that sensitive information is omitted from the representation hierarchies of intermediate layers or blocks in a model (herein we use the term "multi-level representation" for simplicity). As has been shown, the distribution of bias, in terms of its category, number, and strength, is not constant across layers in contrastive self-supervised models [11]. Besides, layer-wise regularization is necessary to constrain the underlying representation space [13, 14, 15]. Pixel-level image features in representation hierarchies are important [13, 14], especially when transferring to dense downstream tasks such as semantic segmentation, where representations are aggregated at different resolution scales in order to identify objects in pixel space. Given this evidence, we design a feature-map-based local mutual information estimation module and incorporate layer-wise regularization into the contrastive optimization objective.
To measure the mutual information \\(MI(X,S)\\) between a local feature \\(X\\) and the urban/rural attribute \\(S=\\{s_{0},s_{1}\\}\\), we adapt the concat-and-convolve architecture in [15]. Notating the \\(i^{th}\\) layer as \\(li\\), we first build a one-hot encoding map \\(\\mathbf{c}^{li}\\) for the attribute \\(S\\) whose spatial size is the same as the feature map \\(\\mathbf{x}^{li}\\) output by \\(li\\), and whose channel dimension is the size of \\(S\\). For each \\(\\mathbf{x}^{li}\\), a \\(\\mathbf{c}^{li}\\) is built from the joint distribution of the representation space \\(X\\) and attribute space \\(S\\), and from the marginal distribution of \\(S\\) separately; the \\(\\mathbf{c}^{li}\\) built in the two ways are then concatenated with \\(\\mathbf{x}^{li}\\) to form an "aligned" feature map pair, denoted \\(P_{XS}(\\mathbf{x}^{li}\\,\\|\\,\\mathbf{c}^{li})\\), and a "shuffled" feature map pair, denoted \\(P_{X}P_{S}(\\mathbf{x}^{li}\\,\\|\\,\\mathbf{c}^{li})\\). The mutual information is estimated from the aligned and shuffled feature map pairs by a three-layer \\(1\\times 1\\) convolutional discriminator \\(D_{i}\\), using the JSD-derived formulation [15]:
\\[MI_{JSD}(X^{li};S):=E_{P_{XS}}[-\\text{sp}(-D_{i}(\\mathbf{x}^{li}\\,\\|\\,\\mathbf{c}^{li}))]-E_{P_{X}P_{S}}[\\text{sp}(D_{i}(\\mathbf{x}^{li}\\,\\|\\,\\mathbf{c}^{li}))],\\]
where \\(\\text{sp}(a)=\\log(1+e^{a})\\), and each \\(D_{i}\\) is optimized separately so that the estimate converges to the lower bound of \\(MI_{JSD}\\).
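As a concrete illustration, the sketch below implements this estimator in PyTorch (a minimal sketch, not the released code): one-hot attribute maps are broadcast to the feature-map size, concatenated in aligned and shuffled order, and scored by a three-layer \\(1\\times 1\\) convolutional discriminator; `softplus` plays the role of sp(·). The hidden width of 256 channels is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIDiscriminator(nn.Module):
    """Three 1x1 convolution layers, as described in the text."""
    def __init__(self, feat_ch, n_groups=2, hidden=256):  # hidden width assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch + n_groups, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

def onehot_map(labels, n_groups, h, w):
    # (B,) integer urban/rural labels -> (B, n_groups, H, W) one-hot maps
    c = F.one_hot(labels, n_groups).float()
    return c[:, :, None, None].expand(-1, -1, h, w)

def mi_jsd(disc, feat, labels, n_groups=2):
    b, _, h, w = feat.shape
    c_aligned = onehot_map(labels, n_groups, h, w)     # joint P_XS
    c_shuffled = c_aligned[torch.randperm(b)]          # marginals P_X P_S
    pos = disc(torch.cat([feat, c_aligned], dim=1))
    neg = disc(torch.cat([feat, c_shuffled], dim=1))
    # JSD lower bound: E_P[-sp(-D)] - E_PxP[sp(D)]
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
```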
We empirically validate the necessity of applying multi-level constraints to reduce bias accumulation across layers. We run self-supervised contrastive learning on LoveDA data using MoCo-v2 Chen et al. (2020) with ResNet50 He et al. (2016) as the base model. Simultaneously with contrastive training, four independent discriminators are optimized to measure the mutual information \\(MI_{JSD}(X^{l1};S),\\ldots,MI_{JSD}(X^{l4};S)\\) between the representations output from the four residual layers and the output layer of ResNet50 and the sensitive attribute: urban/rural. The \\(MI_{JSD}\\) values are summed to measure the total amount of model bias for the data batch. As plotted in Figure 4 (A), the baseline training without \\(MI_{JSD}\\) intervention shows continually increasing and significantly higher bias than the other methods as the number of epochs increases. Adding a penalty loss which encourages minimizing \\(MI_{JSD}\\) only on the global representation, or only on subsets of layers, also controls bias accumulation, but their measurements remain high compared to the multi-level variant, showing that global-level regularization may remove part of the bias but leaves a significant residual from earlier layers. The running loss during training indicates that all methods converge (Figure 4 (B)); the mutual information constraints in latent space do not affect the contrastive learning objective.
Figure 4: Bias accumulation during contrastive pre-training. (A) Sum of the mutual information estimates, and (B) the contrastive loss of a ResNet50 model with MoCo-V2 pre-training. The baseline method with no intervention (Baseline), and regularizing only the global feature vector (Global only), only the first two layers of feature maps (First-two only), or only the last two layers of feature maps (Last-two only), all show bias residuals compared to the multi-level method proposed as part of FairDCL.
### FairDCL Pipeline
Figure 5 provides an overview of the proposed fair dense representation with contrastive learning (FairDCL) method and its training process (detailed algorithm steps are in Algorithm 1). In each iteration of contrastive pre-training, a latent space representation \\(\\textbf{x}^{li}\\) is produced at layer \\(li\\) of the encoder \\(F\\). Layer discriminators \\(D_{i}\\) are optimized by simultaneously estimating and maximizing \\(MI_{JSD}\\) with the loss:
\\[\\mathcal{L}_{D_{i}}(\\textbf{x}^{li},S;D_{i})=-MI_{JSD}(\\textbf{x}^{li};S). \\tag{1}\\]
Following Ragonesi et al. (2021), each \\(MI\\) discriminator is optimized for multiple inner rounds before the encoder weights are updated. More rounds allow the discriminators to estimate the mutual information more accurately; based on resource availability, we set a uniform round number \\(B=20\\). After discriminator optimization completes, one iteration of image encoder training is conducted, wherein the discriminators infer the multi-stage mutual information via the loss in (1), and these losses are combined with the contrastive learning loss, with a hyper-parameter \\(\\alpha\\) adjusting the fairness constraint strength. The final training objective is:
\\[\\mathcal{L}_{F}(X,S;D,F)=\\mathcal{L}_{con}-\\alpha\\sum_{li}\\mathcal{L}_{D_{i}}(X^{li},S;D_{i}). \\tag{2}\\]
With this training objective, the image encoder is encouraged to generate representations \\(X\\) with high \\(\\mathcal{L}_{D}\\), and thus low \\(MI_{JSD}\\) (low spurious information). We apply FairDCL to the state-of-the-art contrastive learning framework MoCo-v2 Chen et al. (2020). The loss used for learning visual representations is InfoNCE Oord et al. (2018):
\\[\\mathcal{L}_{con}(F)=-\\log\\frac{\\exp(q\\cdot k/\\tau)}{\\exp(q\\cdot k/\\tau)+\\sum_{j}\\exp(q\\cdot\\hat{k}_{j}/\\tau)}. \\tag{3}\\]
Here \\(F\\) consists of a query encoder and a key encoder, which output representations \\(q\\) and \\(k\\) from two augmented views of the same image. \\(\\hat{k}_{j}\\) is a queue of representations encoded from different images in the dataset. \\(\\mathcal{L}_{con}\\) encourages the image encoder to distinguish positive from negative keys so that it can extract useful visual representations.
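Putting Eqs. (1)-(3) together, one FairDCL training iteration could be organized as below; this is a hedged sketch reusing `mi_jsd` from the estimator above, and the `model(images)` interface, which returns the contrastive loss together with the per-layer feature maps, is an illustrative assumption rather than the authors' actual code.

```python
import torch

B_ROUNDS, ALPHA = 20, 0.5  # inner rounds B and fairness weight alpha from the text

def fairdcl_step(model, discs, disc_opts, enc_opt, images, labels):
    # 1) Inner rounds: tighten each discriminator's MI lower bound (Eq. 1).
    with torch.no_grad():
        _, feats = model(images)                   # per-layer feature maps
    for _ in range(B_ROUNDS):
        for disc, opt, f in zip(discs, disc_opts, feats):
            opt.zero_grad()
            (-mi_jsd(disc, f, labels)).backward()  # maximize the MI estimate
            opt.step()
    # 2) One encoder iteration: contrastive loss plus fairness penalty (Eq. 2).
    #    (Discriminator weights would be held fixed here in the full algorithm.)
    enc_opt.zero_grad()
    l_con, feats = model(images)
    mi_total = sum(mi_jsd(d, f, labels) for d, f in zip(discs, feats))
    (l_con + ALPHA * mi_total).backward()
    enc_opt.step()
```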
_Generalizability to contrastive frameworks._ We note that the proposed locality-sensitive de-biasing scheme, which applies its intervention in the embedding space, can be integrated with any state-of-the-art convolutional feature extractor, and thus has the potential to be promoted further with different contrastive learning frameworks. Empirically, we experiment with DenseCL Wang et al. (2021), which designs pixel-level positive and negative keys to better learn local feature correspondences. Since the method closes the gap between pre-training and downstream dense prediction, it is suitable as an alternative contrastive learning framework for our proposed method. The results are included in the supplementary material.
Figure 5: Overview of FairDCL. It captures the spurious information \\(X_{spurious}\\) learnt by urban/rural discriminators and applies regularization on image representations at multiple levels. We build one-hot feature maps to encode the urban/rural attribute and estimate mutual information with neural discriminators. Penalty losses \\(\\mathcal{L}_{D_{i}}\\) are computed accordingly and added into the final contrastive pre-training objective.
## Experiments
### Implementation Details
_The first stage of contrastive pre-training._ The base model for the image encoders is ResNet50 [10]. The mutual information discriminators \\(D_{i}\\) are built with \\(1\\times 1\\) convolution layers (architecture details in supplementary material D). The contrastive pre-training runs for 10k iterations on each dataset with a batch size of 64. The data augmentations used to generate positive and negative image view pairs are random greyscale conversion and random color jittering (no cropping, flips, or rotations, in order to retain local feature information). The hyper-parameter \\(\\alpha\\), which scales the amount of mutual information loss \\(\\mathcal{L}_{D}\\) in the total loss, is set to 0.5. The Adam optimizer is used with a learning rate of \\(10^{-3}\\) and a weight decay of \\(10^{-4}\\) for both the encoders and the discriminators.
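For illustration, the augmentation pipeline could be written with `torchvision` as below; the jitter strengths and application probabilities are assumptions, since the text specifies only the transform types.

```python
from torchvision import transforms

# Grayscale conversion and color jittering only; no crops, flips or rotations,
# so that local feature information is retained. Magnitudes are assumed values.
pretrain_aug = transforms.Compose([
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.ToTensor(),
])
```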
_Comparison methods_ include state-of-the-art fair representation learning approaches: (1) gradient reversal training (GR) [13], which follows the broad approach of removing bias or sensitive information from learnt representations by inverting the gradients of attribute classifiers; this approach has been adapted to multiple image recognition tasks [14, 15, 16]. (2) Domain independent training (DI), which samples data with a consistent group attribute in each training iteration to avoid leveraging spurious group boundaries [15, 16]. (3) Unbiased representation learning (UnbiasedR) [1], which uses single-level de-biasing only on the global image representation. All comparison methods use the same learning architectures and are trained with the same settings.
_The second stage of semantic segmentation fine-tuning._ Following the protocol of previous work [15, 16, 17], we train an FCN-8s [15] model on top of the fixed ResNet50 backbone learnt in the pre-training stage for 60 epochs with a batch size of 16, and evaluate on the testing split of each dataset. We use the cross-entropy (CE) loss as the training objective and stochastic gradient descent (SGD) as the optimizer, with a learning rate of \\(10^{-3}\\) and a momentum of 0.9. The learning rate is decayed using a polynomial learning rate scheduler implemented in PyTorch. Image data augmentations used in fine-tuning include random horizontal/vertical flips and random rotations. For both stages, the dimension of the input image to the model is 512\\(\\times\\)512\\(\\times\\)3, where 3 indicates the RGB bands. An NVIDIA RTX8000 GPU is used for training.
FairDCL achieves the best fairness measures while obtaining comparable or better overall accuracy (the "mIoU" metric) in all cases. Obtaining comparable or better overall accuracy than Baseline demonstrates robustness in addition to disparity reduction. In contrast, DI allows image contrastive pairs only from a fraction of the data, which can discount model learning [22]. The adversarial approach used in GR can be counter-productive if the adversary is not trained enough to achieve the infimum [17], which could potentially degrade model quality for group equalization. Our adapted mutual information constraints use information-theoretic objectives, which have been shown to optimize without competing with the encoder, and so can match or exceed state-of-the-art adversarial de-biasing methods [16, 15]. FairDCL further shows that applying the mutual information constraints on multi-level latent representations better extends fairness to pixel-level applications, outperforming the image-level-only constraints used in UnbiasedR.
**Embedding Spaces.** To further trace how the image representations learnt with the proposed method improve fairness, we analyze a linear separation property [14] of the model embedding spaces. Specifically, we assess how well a linear model can differentiate the urban/rural sensitive attribute using the learnt representations. A high degree of separation indicates that the encoder model's embedding space and the attribute are differentiable [1, 15, 16, 17], which could be used as a short-cut in prediction and cause bias, and is thus not desirable here. We freeze the trained ResNet50 encoder and use a fully connected layer on top of the representation output from different layers for urban/rural attribute classification. Figure 7 (A) presents the classification scores for the urban/rural attribute on LoveDA: FairDCL obtains the lowest attribute differentiation results for all embedding stages and the global stage of representation, indicating that the encoder trained with FairDCL has favorably learnt the least sensitive information in pixel-level features during contrastive pre-training. Though we focus on satellite images, we check the method's generalizability to a different image domain by conducting contrastive pre-training on MSCOCO [13], a dataset commonly used in fairness studies [22, 23, 24]. The sensitive attribute gender, categorized as "women" or "men", is obtained from [15]; there are 2901 images with "women" and 6567 images with "men" labels. The linear analysis results show that FairDCL again produces the desired lowest classification accuracies (Figure 7 (B)), but unlike in Figure 7 (A), it does not surpass the other comparison methods by much. This is likely because, while geographic attributes are represented at the pixel level, human faces/objects, as foreground, may not be represented through local features throughout the image; gender attributes are thus less pronounced as pixel-level biases in dense representation learning, which is what our proposed approach focuses on.
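A sketch of this probe is given below; `layer_feats_fn`, which extracts the chosen layer's feature map from the frozen encoder, is an illustrative placeholder (e.g., a forward hook).

```python
import torch
import torch.nn as nn

def linear_probe(encoder, layer_feats_fn, loader, feat_dim, n_groups=2, epochs=10):
    for p in encoder.parameters():
        p.requires_grad_(False)                 # frozen backbone
    head = nn.Linear(feat_dim, n_groups)        # single linear layer
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, attrs in loader:            # attrs: urban/rural labels
            with torch.no_grad():
                f = layer_feats_fn(encoder, images)   # (B, C, H, W)
                f = f.mean(dim=(2, 3))                # global average pooling
            loss = ce(head(f), attrs)
            opt.zero_grad(); loss.backward(); opt.step()
    return head  # its held-out accuracy measures linear separability
```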
**Ablation Studies.** We perform an ablation study for the hyper-parameter \\(\\alpha\\), which scales the discriminator loss \\(\\mathcal{L}_{D}\\) and thus the fairness regularization strength. The method is overall robust to this parameter (Table 2); a large \\(\\alpha\\) such as 10 does not corrupt the downstream accuracy, and a small \\(\\alpha\\) such as \\(0.1\\) has a lower fairness gain but still shows an advantage over the comparison methods in Table 1. We select \\(\\alpha=0.5\\) for a balance.
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multicolumn{3}{c}{\\(\\alpha\\) = 0.1} & \\multicolumn{3}{c}{\\(\\alpha\\) = 0.5} \\\\ \\hline Diff & Wst & mIoU & Diff & Wst & mIoU \\\\ \\hline
0.138 & 0.506 & **0.541** & 0.127 & **0.508** & 0.540 \\\\ \\hline \\multicolumn{3}{c}{\\(\\alpha\\) = 1} & \\multicolumn{3}{c}{\\(\\alpha\\) = 10} \\\\ \\hline Diff & Wst & mIoU & Diff & Wst & mIoU \\\\ \\hline
0.127 & 0.506 & 0.538 & **0.126** & 0.506 & 0.538 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Ablation study for the discriminator weight. Fine-tuning results are shown for representations pre-trained with \\(\\alpha=0.1,0.5,1,10\\) in the proposed fairness objective.
Figure 6: Class activation mapping. Detailed image locations that most impact the model's prediction for the "road" (first row) and "forest" (second row) classes. FairDCL better recognizes land-cover segments particular to sensitive attributes.
Figure 7: Linear separation evaluation. We train a linear neural layer on top of each level of representations output from different model layers, including the four residual modules ("layer1"-"layer4") that encode intermediate representations and a global output layer ("fc") that encodes the global representation. The linear layer classifies the sensitive attributes: urban/rural on (A) the LoveDA dataset, and women/men on (B) the MS-COCO dataset. _Lower accuracy is better_: it indicates that the sensitive attributes are harder to predict from the pre-trained representations, and thus a lower bias residual.
Furthermore, the urban and rural groups have comparable numbers of training samples in the earlier experiments (LoveDA: 5.8k vs. 5.5k; Slovenia: 1.7k vs. 1.9k for urban/rural). We intentionally reduce the pre-training samples of certain groups to generate more unbalanced subsets. As shown in Table 3, the proposed method remains robust under the two less even group distributions.
## Discussion and Conclusion
Within the broader fairness literature in visual recognition, work focusing on satellite imagery depicting physical environments has been limited. This limitation is largely due to the difficulty of identifying population-level biased landscape features. Also, disparity problems in satellite image recognition may be categorized as domain adaptation or transfer learning problems, other popular computer vision fields; though these share similar technical methods in bias mitigation and invariant representation learning, the specific objective of fair urban/rural satellite image recognition is to remove spatially disproportionate features that favor one subgroup over the others, beyond addressing covariate shift.
Here we define the scenario with a causal graph, showing that contrastive self-supervised pre-training can utilize spurious land-cover object features and thus accumulate urban/rural attribute-correlated bias. The biased image representations result in disparate downstream segmentation accuracy between subgroups within a specific geographic area. We then address the problem via a mutual information training objective which learns robust local features with minimal spurious representations. Experimental results on real-world satellite datasets show fairer segmentation results when pre-training with the proposed method. In addition to disparity reduction, the method consistently avoids a trade-off between model fairness and accuracy.
As future directions, a wider set of satellite datasets can be explored. The fairness analysis can be scaled to a greater number of attributes relevant to geography, in addition to urbanization. Methods to encode sensitive attributes in the model embedding space in addition to one-hot feature maps can also be explored. We encourage experimenting with different encoding mechanisms and mutual information estimators to improve fairness regularization performance across different real-world settings.
## Acknowledgments
We acknowledge funding from NSF award number 1845487. We also thank Harvineet Singh and Vishwali Mhasawade for helpful discussions.
## References
* Abbasi-Sureshjani, S.; Raumanns, R.; Michels, B. E.; Schouten, G.; and Cheplygina, V. 2020. Risk of training diagnostic algorithms on data with demographic bias. In _Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4-8, 2020, Proceedings_, 183-192.
* Agastya, C.; Ghebremusse, S.; Anderson, I.; Vahabi, H.; and Todeschini, A. 2021. Self-supervised contrastive learning for irrigation detection in satellite imagery. arXiv preprint arXiv:2108.05484.
* Aiken, E.; Rolf, E.; and Blumenstock, J. 2023. Fairness and representation in satellite-based poverty maps: evidence of urban-rural disparities and their impacts on downstream policy. In _Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), Special Track on AI for Good_.
* Ananian, S.; and Dellaferrera, G. 2024. Employment and wage disparities between rural and urban areas.
* Ayush, K.; Uzkent, B.; Meng, C.; Tanmay, K.; Burke, M.; Lobell, D.; and Ermon, S. 2021. Geography-aware self-supervised learning. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 10181-10190.
* Bountos, N. I.; Papoutsis, I.; Michail, D.; and Anantrasirichai, N. 2021. Self-supervised contrastive learning for volcanic unrest detection. _IEEE Geoscience and Remote Sensing Letters_, 19: 1-5.
* Burlina, P.; Joshi, N.; Paul, W.; Pacheco, K. D.; and Bressler, N. M. 2021. Addressing artificial intelligence bias in retinal diagnostics. _Translational Vision Science & Technology_, 10(2): 13.
* Chaitanya, K.; Erdil, E.; Karani, N.; and Konukoglu, E. 2020. Contrastive learning of global and local features for medical image segmentation with limited annotations. _Advances in Neural Information Processing Systems_, 33: 12546-12558.
* Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, 1597-1607. PMLR.
* Chen, X.; Fan, H.; Girshick, R.; and He, K. 2020. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297.
* Creager, E.; Madras, D.; Jacobsen, J.; Weis, M.; Swersky, K.; Pitassi, T.; and Zemel, R. 2019. Flexibly fair representation learning by disentanglement. In _International Conference on Machine Learning_, 1436-1445. PMLR.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline & \\multicolumn{3}{c}{Urban:68\\% Rural:32\\%} & \\multicolumn{3}{c}{Urban:35\\% Rural:65\\%} \\\\ \\cline{2-7} Method & Diff(\\(\\downarrow\\)) & Wst (\\(\\uparrow\\)) & mIoU(\\(\\uparrow\\)) & Diff(\\(\\downarrow\\)) & Wst (\\(\\uparrow\\)) & mIoU(\\(\\uparrow\\)) \\\\ \\hline Baseline & 0.147 & 0.500 & 0.537 & 0.170 & 0.497 & 0.539 \\\\ GR & 0.154 & 0.497 & 0.535 & 0.144 & 0.511 & 0.547 \\\\ DI & 0.145 & 0.498 & 0.534 & 0.148 & 0.499 & 0.535 \\\\ UnbiasedR & 0.146 & 0.500 & 0.537 & 0.145 & 0.503 & 0.539 \\\\ FairDCL & **0.128** & **0.511** & 0.543 & **0.122** & **0.518** & 0.549 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Ablation study with unbalanced group data. The proportion of urban/rural samples in the pre-training data is adjusted such that one group has far fewer samples. FairDCL performs consistently under the data distribution shifts.
de Vries, T.; Misra, I.; Wang, C.; and van der Maaten, L. 2019. Does Object Recognition Work for Everyone? In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Deng et al. (2023) Deng, W.; Zhong, Y.; Dou, Q.; and Li, X. 2023. On fairness of medical image classification with multiple sensitive attributes via learning orthogonal representations. In _International Conference on Information Processing in Medical Imaging_, 158-169. Springer.
* Dwork et al. (2012) Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness through awareness. In _Proceedings of the 3rd innovations in theoretical computer science conference_, 214-226.
* Gong et al. (2021) Gong, S.; Liu, X.; and Jain, A. K. 2021. Mitigating face recognition bias via group adaptive classifier. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 3414-3424.
* Griffith (2002) Griffith, J. A. 2002. Geographic techniques and recent applications of remote sensing to landscape-water quality studies. _Water, Air, and Soil Pollution_, 138: 181-197.
* Hay (1995) Hay, A. M. 1995. Concepts of equity, fairness and justice in geographical studies. _Transactions of the Institute of British Geographers_, 500-508.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, 770-778.
* Hendrycks et al. (2019) Hendrycks, D.; Mazeika, M.; Kadavath, S.; and Song, D. 2019. Using self-supervised learning can improve model robustness and uncertainty. _Advances in neural information processing systems_, 32.
* Hjelm et al. (2018) Hjelm, R. D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; and Bengio, Y. 2018. Learning deep representations by mutual information estimation and maximization. _arXiv preprint arXiv:1808.06670_.
* Jiang et al. (2017) Jiang, Z.; Wang, Y.; Davis, L.; Andrews, W.; and Rozgic, V. 2017. Learning discriminative features via label consistent neural network. In _2017 IEEE Winter Conference on Applications of Computer Vision (WACV)_, 207-216. IEEE.
* Jin et al. (2016) Jin, X.; Chen, Y.; Dong, J.; Feng, J.; and Yan, S. 2016. Collaborative layer-wise discriminative learning in deep neural networks. In _European Conference on Computer Vision_, 733-749. Springer.
* Jing and Tian (2020) Jing, L.; and Tian, Y. 2020. Self-supervised visual feature learning with deep neural networks: A survey. _IEEE transactions on pattern analysis and machine intelligence_, 43(11): 4037-4058.
* Jung et al. (2021) Jung, S.; Lee, D.; Park, T.; and Moon, T. 2021. Fair feature distillation for visual recognition. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 12115-12124.
* Kim et al. (2019) Kim, B.; Kim, H.; Kim, K.; Kim, S.; and Kim, J. 2019. Learning not to learn: Training deep neural networks with biased data. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 9012-9020.
* Kirillov et al. (2023) Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; et al. 2023. Segment anything. _arXiv preprint arXiv:2304.02643_.
* Kondmann and Zhu (2021) Kondmann, L.; and Zhu, X. X. 2021. Under the Radar-Auditing Fairness in ML for Humanitarian Mapping. _arXiv preprint arXiv:2108.02137_.
* Lee (2018) Lee, N. T. 2018. Detecting racial bias in algorithms and machine learning. _Journal of Information, Communication and Ethics in Society_, 16(3): 252-260.
* Li et al. (2022a) Li, H.; Li, Y.; Zhang, G.; Liu, R.; Huang, H.; Zhu, Q.; and Tao, C. 2022a. Global and local contrastive self-supervised learning for semantic segmentation of HR remote sensing images. _IEEE Transactions on Geoscience and Remote Sensing_, 60: 1-14.
* Li et al. (2022b) Li, H.; Zeng, Y.; Gan, L.; Tuersun, Y.; Yang, J.; Liu, J.; and Chen, J. 2022b. Urban-rural disparities in the healthy ageing trajectory in China: a population-based study. _BMC Public Health_, 22(1): 1406.
* Li et al. (2021) Li, W.; Chen, H.; and Shi, Z. 2021. Semantic segmentation of remote sensing images with self-supervised multitask representation learning. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 14: 6438-6450.
* Li et al. (2019) Li, Z.; Brendel, W.; Walker, E.; Cobos, E.; Muhammad, T.; Reimer, J.; Bethge, M.; Sinz, F.; Pitkow, Z.; and Tolias, A. 2019. Learning from brains how to regularize machines. _Advances in neural information processing systems_, 32.
* Lin et al. (2014) Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In _European conference on computer vision_, 740-755. Springer.
* Long et al. (2015) Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, 3431-3440.
* Lv et al. (2023) Lv, J.; Shen, Q.; Lv, M.; Li, Y.; Shi, L.; and Zhang, P. 2023. Deep learning-based semantic segmentation of remote sensing images: a review. _Frontiers in Ecology and Evolution_, 11: 1201125.
* Majumdar et al. (2022) Majumdar, S.; Flynn, C.; and Mitra, R. 2022. Detecting Bias in the Presence of Spatial Autocorrelation. In _Algorithmic Fairness through the Lens of Causality and Robustness workshop_, 6-18. PMLR.
* Mall et al. (2023) Mall, U.; Hariharan, B.; and Bala, K. 2023. Change-aware sampling and contrastive learning for satellite images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 5261-5270.
* Mehrabi et al. (2021) Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; and Galstyan, A. 2021. A survey on bias and fairness in machine learning. _ACM Computing Surveys (CSUR)_, 54(6): 1-35.
* Misra and Maaten (2020) Misra, I.; and Maaten, L. v. d. 2020. Self-supervised learning of pretext-invariant representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 6707-6717.
Morales, A.; Fierrez, J.; Vera-Rodriguez, R.; and Tolosana, R. 2020. Sensitivenets: Learning agnostic representations with application to face images. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 43(6): 2158-2164.
* Moyer et al. (2018) Moyer, D.; Gao, S.; Brekelmans, R.; Galstyan, A.; and Ver Steeg, G. 2018. Invariant representations without adversarial training. _Advances in Neural Information Processing Systems_, 31.
* NationalGeographic (2020) NationalGeographic. 2020. Urban Area. [https://education.nationalgeographic.org/resource/urban-area/](https://education.nationalgeographic.org/resource/urban-area/). Accessed: 2024-08-03.
* Nazem et al. (1996) Nazem, S. M.; Liu, Y.-H.; Lee, H.; and Shi, Y. 1996. Implementing telecommunications infrastructure: a rural America case. _Telematics and Informatics_, 13(1): 23-31.
* O Pinheiro et al. (2020) O Pinheiro, P. O.; Almahairi, A.; Benmalek, R.; Golemo, F.; and Courville, A. C. 2020. Unsupervised learning of dense visual representations. _Advances in Neural Information Processing Systems_, 33: 4489-4500.
* Oord et al. (2018) Oord, A. v. d.; Li, Y.; and Vinyals, O. 2018. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_.
* Park et al. (2021) Park, S.; Hwang, S.; Kim, D.; and Byun, H. 2021. Learning disentangled representation for fair facial attribute classification via fairness-aware information alignment. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, 2403-2411.
* Park et al. (2022) Park, S.; Lee, J.; Lee, P.; Hwang, S.; Kim, D.; and Byun, H. 2022. Fair Contrastive Learning for Facial Attribute Classification. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 10389-10398.
* Peek-Asa et al. (2011) Peek-Asa, C.; Wallis, A.; Harland, K.; Beyer, K.; Dickey, P.; and Saftlas, A. 2011. Rural disparity in domestic violence prevalence and access to resources. _Journal of women's health_, 20(11): 1743-1749.
* Pessach and Shmueli (2022) Pessach, D.; and Shmueli, E. 2022. A review on fairness in machine learning. _ACM Computing Surveys (CSUR)_, 55(3): 1-44.
* Puyol-Anton et al. (2021) Puyol-Anton, E.; Ruijsink, B.; Piechnik, S. K.; Neubauer, S.; Petersen, S. E.; Razavi, R.; and King, A. P. 2021. Fairness in cardiac MR image analysis: an investigation of bias due to data imbalance in deep learning based segmentation. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, 413-423. Springer.
* Raff and Sylvester (2018) Raff, E.; and Sylvester, J. 2018. Gradient reversal against discrimination: A fair neural network learning approach. In _2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)_, 189-198. IEEE.
* Ragonesi et al. (2021) Ragonesi, R.; Volpi, R.; Cavazza, J.; and Murino, V. 2021. Learning unbiased representations via mutual information backpropagation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2729-2738.
* Ramaswamy et al. (2021) Ramaswamy, V. V.; Kim, S. S.; and Russakovsky, O. 2021. Fair attribute classification through latent space de-biasing. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 9301-9310.
* Reed et al. (2022) Reed, C. J.; Yue, X.; Nrusimha, A.; Ebrahimi, S.; Vijaykumar, V.; Mao, R.; Li, B.; Zhang, S.; Guillory, D.; Metzger, S.; et al. 2022. Self-supervised pretraining improves self-supervised pretraining. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2584-2594.
* Robinson et al. (2020) Robinson, J.; Chuang, C.-Y.; Sra, S.; and Jegelka, S. 2020. Contrastive learning with hard negative samples. _arXiv preprint arXiv:2010.04592_.
* Roscigno et al. (2006) Roscigno, V. J.; Tomaskovic-Deevey, D.; and Crowley, M. 2006. Education and the inequalities of place. _Social forces_, 84(4): 2121-2145.
* Saha et al. (2020) Saha, S.; Mou, L.; Qiu, C.; Zhu, X. X.; Bovolo, F.; and Bruzzone, L. 2020. Unsupervised deep joint segmentation of multitemporal high-resolution images. _IEEE Transactions on Geoscience and Remote Sensing_, 58(12): 8780-8792.
* Scheibenreif et al. (2022) Scheibenreif, L.; Hanna, J.; Mommert, M.; and Borth, D. 2022. Self-Supervised Vision Transformers for Land-Cover Segmentation and Classification. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 1422-1431.
* Selvaraju et al. (2017) Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In _Proceedings of the IEEE international conference on computer vision_, 618-626.
* Serna et al. (2022) Serna, I.; Morales, A.; Fierrez, J.; and Obradovich, N. 2022. Sensitive loss: Improving accuracy and fairness of face representations with discrimination-aware deep learning. _Artificial Intelligence_, 305: 103682.
* Setianto and Gamal (2021) Setianto, M.; and Gamal, A. 2021. Spatial justice in the distribution of public services. In _IOP Conference Series: Earth and Environmental Science_, volume 673, 012024. IOP Publishing.
* Sinergise (2022) Sinergise. 2022. Modified Copernicus Sentinel data 2017/Sentinel Hub. [https://sentinel-hub.com/](https://sentinel-hub.com/). Accessed: 2024-08-03.
* Sirotkin et al. (2022) Sirotkin, K.; Carballeira, P.; and Escudero-Vinolo, M. 2022. A study on the distribution of social biases in self-supervised learning visual models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 10442-10451.
* Soden et al. (2019) Soden, R.; Wagenaar, D.; Luo, D.; and Tijssen, A. 2019. Taking ethics, fairness, and bias seriously in machine learning for disaster risk management. _arXiv preprint arXiv:1912.05538_.
* Szabo et al. (2021) Szabo, A.; Jamali-Rad, H.; and Mannava, S.-D. 2021. Tilted cross-entropy (TCE): Promoting fairness in semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2305-2310.
* Tang et al. (2021) Tang, R.; Du, M.; Li, Y.; Liu, Z.; Zou, N.; and Hu, X. 2021. Mitigating gender bias in captioning systems. In _Proceedings of the Web Conference 2021_, 633-645.
* Tsai et al. (2021) Tsai, Y.-H. H.; Ma, M. Q.; Zhao, H.; Zhang, K.; Morency, L.-P.; and Salakhutdinov, R. 2021. Conditional contrastive learning: Removing undesirable information in self-supervised representations. _arXiv preprint arXiv:2106.02866_.
* Von Holdt et al. (2019) Von Holdt, J.; Eckardt, F.; Baddock, M.; and Wiggs, G. F. 2019. Assessing landscape dust emission potential using combined ground-based measurements and remote sensing data. _Journal of Geophysical Research: Earth Surface_, 124(5): 1080-1098.
* Vu et al. (2021) Vu, Y. N. T.; Wang, R.; Balachandar, N.; Liu, C.; Ng, A. Y.; and Rajpurkar, P. 2021. Medaug: Contrastive learning leveraging patient metadata improves representations for chest x-ray interpretation. In _Machine Learning for Healthcare Conference_, 755-769. PMLR.
* Wang et al. (2021) Wang, J.; Liu, Y.; and Wang, X. E. 2021. Are gender-neutral queries really gender-neutral? mitigating gender bias in image search. _arXiv preprint arXiv:2109.05433_.
* Wang et al. (2021a) Wang, J.; Zheng, Z.; Ma, A.; Lu, X.; and Zhong, Y. 2021a. LoveDA: A remote sensing land-cover dataset for domain adaptive semantic segmentation. _arXiv preprint arXiv:2110.08733_.
* Wang et al. (2019) Wang, T.; Zhao, J.; Yatskar, M.; Chang, K.-W.; and Ordonez, V. 2019. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 5310-5319.
* Wang et al. (2021b) Wang, X.; Zhang, R.; Shen, C.; Kong, T.; and Li, L. 2021b. Dense contrastive learning for self-supervised visual pretraining. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 3024-3033.
* Wang et al. (2022a) Wang, Y.; Albrecht, C. M.; Braham, N. A. A.; Mou, L.; and Zhu, X. X. 2022a. Self-supervised learning in remote sensing: A review. _arXiv preprint arXiv:2206.13188_.
* Wang et al. (2022b) Wang, Z.; Dong, X.; Xue, H.; Zhang, Z.; Chiu, W.; Wei, T.; and Ren, K. 2022b. Fairness-aware adversarial perturbation towards bias mitigation for deployed deep models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 10379-10388.
* Wang et al. (2020) Wang, Z.; Qinami, K.; Karakozis, I. C.; Genova, K.; Nair, P.; Hata, K.; and Russakovsky, O. 2020. Towards fairness in visual recognition: Effective strategies for bias mitigation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 8919-8928.
* Wu et al. (2018) Wu, Z.; Xiong, Y.; Yu, S. X.; and Lin, D. 2018. Unsupervised feature learning via non-parametric instance discrimination. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, 3733-3742.
* Xie et al. (2022) Xie, Y.; He, E.; Jia, X.; Chen, W.; Skakun, S.; Bao, H.; Jiang, Z.; Ghosh, R.; and Ravirathinam, P. 2022. Fairness by \"Where\": A Statistically-Robust and Model-Agnostic Bi-level Learning Framework. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, 12208-12216.
* Xie et al. (2021) Xie, Z.; Lin, Y.; Zhang, Z.; Cao, Y.; Lin, S.; and Hu, H. 2021. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 16684-16693.
* Xiong et al. (2020) Xiong, Y.; Ren, M.; and Urtasun, R. 2020. Loco: Local contrastive representation learning. _Advances in neural information processing systems_, 33: 11142-11153.
* Xu et al. (2021) Xu, X.; Huang, Y.; Shen, P.; Li, S.; Li, J.; Huang, F.; Li, Y.; and Cui, Z. 2021. Consistent instance false positive improves fairness in face recognition. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 578-586.
* Yuan et al. (2022) Yuan, H.; Hadzic, A.; Paul, W.; de Flores, D. V.; Mathew, P.; Aucott, J.; Cao, Y.; and Burlina, P. 2022. EdgeMixup: Improving Fairness for Skin Disease Classification and Segmentation. _arXiv preprint arXiv:2202.13883_.
* Zhang et al. (2018) Zhang, B. H.; Lemoine, B.; and Mitchell, M. 2018. Mitigating unwanted biases with adversarial learning. In _Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society_, 335-340.
* Zhang et al. (2022a) Zhang, F.; Kuang, K.; Chen, L.; Liu, Y.; Wu, C.; and Xiao, J. 2022a. Fairness-aware contrastive learning with partially annotated sensitive attributes. In _The Eleventh International Conference on Learning Representations_.
* Zhang et al. (2022b) Zhang, H.; Dullerud, N.; Roth, K.; Oakden-Rayner, L.; Pfohl, S.; and Ghassemi, M. 2022b. Improving the Fairness of Chest X-ray Classifiers. In _Conference on Health, Inference, and Learning_, 204-233. PMLR.
* Zhang et al. (2022c) Zhang, M.; Singh, H.; Chok, L.; and Chunara, R. 2022c. Segmenting across places: The need for fair transfer learning with satellite imagery. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2916-2925.
* Zhang et al. (2021) Zhang, Y.; Hooi, B.; Hu, D.; Liang, J.; and Feng, J. 2021. Unleashing the power of contrastive self-supervised visual models via contrast-regularized fine-tuning. _Advances in Neural Information Processing Systems_, 34: 29848-29860.
* Zhao et al. (2021) Zhao, D.; Wang, A.; and Russakovsky, O. 2021. Understanding and evaluating racial biases in image captioning. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 14830-14840.
* Zhu et al. (2021) Zhu, W.; Zheng, H.; Liao, H.; Li, W.; and Luo, J. 2021. Learning bias-invariant representation by cross-sample mutual information minimization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 15002-15012.
* Ziegler and Asano (2022) Ziegler, A.; and Asano, Y. M. 2022. Self-supervised learning of object parts for semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 14502-14511.
* Zietlow et al. (2022) Zietlow, D.; Lohaus, M.; Balakrishnan, G.; Kleindessner, M.; Locatello, F.; Scholkopf, B.; and Russell, C. 2022. Leveling Down in Computer Vision: Pareto Inefficiencies in Fair Deep Classifiers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 10410-10421.
Zhiling Guo\\({}^{1,2}\\), Xiaodan Shi\\({}^{2}\\), Haoran Zhang\\({}^{2}\\), Dou Huang\\({}^{2}\\), Xiaoya Song\\({}^{3}\\),
Jinyue Yan\\({}^{1}\\), Ryosuke Shibasaki\\({}^{2}\\)
\\({}^{1}\\)Department of Building Environment and Energy Engineering,
The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
\\({}^{2}\\)Center for Spatial Information Science, The University of Tokyo, Kashiwa, Japan
\\({}^{3}\\)School of Architecture, Harbin Institute of Technology, Harbin, China
## 1 Introduction
Building semantic segmentation from remote sensing imagery has become an important research topic in recent years [4]. With the rapid development of data acquisition systems and machine learning, the ever-expanding choice of very high resolution (VHR) datasets [10] and deep learning methods [3] expands the opportunities for researchers to conduct more accurate analyses.
Although VHR imagery expresses finer information content of the landscape, it comes with higher cost, longer processing time, and larger storage requirements. Evaluating the technical and economic trade-offs associated with using imagery of different resolutions is therefore essential. Previous scholars have studied the impact of resolution on recognizing plant species [8], land use [6], and water [1] patterns based on coarser-resolution imagery or conventional machine learning methods. In this study, we investigate the impact of spatial resolution on building semantic segmentation with VHR imagery and deep learning methods, as shown in Figure 1.
To compare segmentation accuracy under different resolutions, we created remote sensing imagery of a specific area at resolutions from 0.075m to 2.4m by super-resolution (SR) [2] and down-sampling processing. The experimental results obtained from three different study areas via two deep learning models reveal that the finest spatial resolution may not be the best for building semantic segmentation tasks, and that relatively low-cost imagery would be sufficient in many study cases. Thus, choosing a cost-effective spatial resolution for different scenarios is worth discussing.
Figure 1: The impact of spatial resolution on deep-learning based building semantic segmentation. The colors green, red, blue, and white indicate the true positive (tp), false negative (fn), false positive (fp), and true negative (tn) pixels in the segmentation results, respectively.
The main contributions of this study are two-fold. First, to the best of our knowledge, this is the first investigation of the impact of spatial resolution on deep learning-based building semantic segmentation. Second, we show that higher resolution is not always better for segmentation accuracy. On our datasets, a resolution of around 0.3m offers the best cost-effectiveness, a finding that enables researchers and developers to conduct their work efficiently.
## 2 Data
We analyzed the impact of spatial resolution for building semantic segmentation over three representative study areas: Austin, Christchurch, and Tokyo. The original resolutions of the datasets mentioned above are about 0.075m, 0.150m, and 0.300m, respectively.
## 3 Methods
Variation in spatial resolution leads to differences in semantic segmentation results. In data preprocessing, we first resampled the imagery to a total of six pixel scales covering the spatial resolution range of most VHR images, as shown in Figure 2. After that, two representative semantic segmentation models are applied for building semantic segmentation. Finally, the comparison is conducted based on four assessment criteria.
### Preprocessing
Compared with upscaling low-resolution imagery to HR space using a single filter such as bicubic interpolation, SR can increase the image resolution while providing finer spatial details than those captured by the original acquisition sensors. In this study, a typical deep learning SR model, ESPCN [9], is utilized to perform SR. For resampling to lower resolutions, the pixel-aggregate method is adopted. In this way, six pixel scales of 0.075m, 0.150m, 0.300m, 0.600m, 1.200m, and 2.400m are generated.
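A hedged sketch of this step is shown below: down-sampling uses OpenCV's area interpolation, which performs pixel aggregation, while the ESPCN-style upscaler is reduced to its defining sub-pixel convolution (`nn.PixelShuffle`) and would need to be trained before use; layer widths follow the original ESPCN paper.

```python
import cv2
import torch.nn as nn

def downsample(img, factor):
    """Pixel-aggregate resampling to a coarser resolution."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w // factor, h // factor),
                      interpolation=cv2.INTER_AREA)

class ESPCN(nn.Module):
    """Minimal ESPCN-style upscaler built around sub-pixel convolution."""
    def __init__(self, scale=2, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, 3, padding=1), nn.Tanh(),
            nn.Conv2d(32, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into an upscaled image
        )

    def forward(self, x):
        return self.body(x)
```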
### Semantic Segmentation
As representative deep learning models, we adopt UNet [7] and FPN [5] to conduct the building semantic segmentation and investigate the impact of spatial resolution on the results. In general, UNet applies multiple skip connections between upper and lower layers, while FPN obtains features through bottom-up and top-down pathways. Both models have shown high feasibility and robustness in many segmentation tasks. It should be noted that data augmentation is adopted without random scaling in training, and a model trained on a specific area and resolution is applied to test the corresponding area and resolution for a fair comparison.
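For illustration, both architectures can be instantiated with the `segmentation_models_pytorch` library (an assumption; the paper does not name its implementation). One output channel suits the binary building mask, with a sigmoid applied at inference.

```python
import segmentation_models_pytorch as smp

# Encoder choice and pre-trained weights are illustrative assumptions.
unet = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                in_channels=3, classes=1)
fpn = smp.FPN(encoder_name="resnet34", encoder_weights="imagenet",
              in_channels=3, classes=1)
```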
## 4 Results and Discussions
After testing, we generated segmentation results for the three cities at different resolutions with the two deep learning architectures. Figure 3 illustrates the impact of spatial resolution on deep learning-based building semantic segmentation, and the detailed quantitative results in IoU can be found in Table 1. It can be seen that resolution significantly influences the segmentation results, although the images at some resolutions were generated by resampling methods. As spatial resolution decreases, the IoU initially increases slightly in Austin and remains stable in both Christchurch and Tokyo; beyond a threshold of 0.300m, the IoU drops rapidly in all study areas. Importantly, both UNet and FPN show a similar tendency. This makes sense, as building features have a specific physical size: a spatial resolution significantly finer than the threshold may not help segmentation performance while providing redundant information. Therefore, the spatial resolution should reach a certain threshold to achieve decent accuracy, and excessively pursuing resolution finer than that threshold is unnecessary in many cases. Such a trade-off should be considered when selecting an appropriate data source. The experimental results obtained from three cities with two deep learning models demonstrate that higher resolution is not always better, and a 0.3m resolution would be a cost-effective choice for data selection and preparation in building semantic segmentation tasks.
Figure 2: Buildings illustrated in remote sensing images at different resolutions. Samples from Austin, Christchurch, and Tokyo are shown in the first, second, and third rows, respectively. The original resolution was up- and down-scaled by SR and down-sampling, respectively.
Figure 3: The impact of spatial resolution on building semantic segmentation via (a) UNet and (b) FPN in Austin, Christchurch, and Tokyo, respectively.
## 5 Conclusion
In this study, we have investigated the impact of spatial resolution on deep learning-based building semantic segmentation and demonstrated the effectiveness of super-resolution techniques in enhancing segmentation accuracy. Our results suggest that spatial resolution plays a critical role in the accuracy and generalization capability of deep learning models for building semantic segmentation, and that super-resolution techniques can help to overcome the limitations of low-resolution data.
To further advance this line of research, future work could extend our empirical evaluation to other deep learning models, study areas, and data sources.
## 6 Acknowledgement
We are grateful for the support and funding provided by the JSPS 21K14261 grant.
## References
* [1] Jonathan RB Fisher, Eileen A Acosta, P James Dennedy-Frank, Timm Kroeger, and Timothy M Boucher. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. _Remote Sensing in Ecology and Conservation_, 4(2):137-149, 2018.
* [2] Daniel Glasner, Shai Bagon, and Michal Irani. Super-resolution from a single image. In _2009 IEEE 12th international conference on computer vision_, pages 349-356. IEEE, 2009.
* [3] Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. _Deep learning_, volume 1. MIT press Cambridge, 2016.
* [4] Zhiling Guo, Guangming Wu, Xiaoya Song, Wei Yuan, Qi Chen, Haoran Zhang, Xiaodan Shi, Mingzhou Xu, Yongwei Xu, Ryosuke Shibasaki, et al. Super-resolution integrated building semantic segmentation for multi-source remote sensing imagery. _IEEE Access_, 7:99381-99397, 2019.
* [5] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2117-2125, 2017.
* [6] Miao Liu, Tao Yu, Xingfa Gu, Zhensheng Sun, Jian Yang, Zhuowei Zhang, Xiaofei Mi, Weijia Cao, and Juan Li. The impact of spatial resolution on the classification of vegetation types in highly fragmented planting areas based on unmanned aerial vehicle hyperspectral images. _Remote Sensing_, 12(1):146, 2020.
* [7] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical image computing and computer-assisted intervention_, pages 234-241. Springer, 2015.
* [8] Keely L Roth, Dar A Roberts, Philip E Dennison, Seth H Peterson, and Michael Alonzo. The impact of spatial resolution on the classification of plant species and functional types within imaging spectrometer data. _Remote sensing of environment_, 171:45-57, 2015.
* [9] Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1874-1883, 2016.
* [10] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3974-3983, 2018.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Resolution} & \multicolumn{2}{c}{Austin} & \multicolumn{2}{c}{Christchurch} & \multicolumn{2}{c}{Tokyo} \\ \cline{2-7} & UNet & FPN & UNet & FPN & UNet & FPN \\ \hline
0.075 & 0.656 & 0.658 & 0.865 & 0.862 & **0.725** & 0.728 \\
0.150 & **0.701** & 0.716 & **0.876** & **0.882** & 0.721 & 0.728 \\
0.300 & **0.701** & **0.719** & 0.857 & 0.867 & 0.720 & **0.729** \\
0.600 & 0.613 & 0.66 \\ \hline \hline
\end{tabular}
\end{table}
| The development of remote sensing and deep learning techniques has enabled building semantic segmentation with high accuracy and efficiency. Despite their success in different tasks, the discussion of the impact of spatial resolution on deep learning based building semantic segmentation remains inadequate, which makes choosing a cost-effective data source a challenge. To address this issue, in this study, we resample remote sensing images from three study areas into multiple spatial resolutions by super-resolution and down-sampling. After that, two representative deep learning architectures, UNet and FPN, are selected for model training and testing. The experimental results obtained from three cities with two deep learning models indicate that the spatial resolution greatly influences building segmentation results, with the best cost-effectiveness achieved at around 0.3 m, which we believe will be an important insight for data selection and preparation. | Condense the content of the following passage. | 166 |
arxiv-format/2004_04281v1.md | # A comparative analysis for SARS-CoV-2
Goksel Misirli
[email protected]
## Introduction
Severe acute respiratory syndrome (SARS)-related coronaviruses have previously caused two pandemics [1]. The recent SARS Coronavirus 2 (SARS-CoV-2) outbreak has had unprecedented effects so far. It is essential to develop data integration mechanisms in order to gain insights using data that already exists. For example, genome-wide comparisons can be used to inform subsequent computational analyses which can potentially be used to search for drugs and to develop computational models.
Taxonomy classifications offer a systematic approach to link data from different organisms. The taxonomy id for SARS-CoV-2 is reported as 2697049 by the National Center for Biotechnology Information (NCBI) taxonomy browser [2], which also lists synonyms that can be used when searching for information in different databases ([https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=2697049](https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=2697049)). Moreover, the taxonomy id 694009 is used to group all SARS-related coronavirus taxonomies. This list can be especially useful to search for related sequences in order to infer information via comparative genomics approaches. For example, the NCBI Virus [3] database can be queried with these taxonomy ids to retrieve SARS-CoV-2 related nucleotide and protein sequences.
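Such taxonomy-based queries can also be issued programmatically. Below is a minimal sketch using Biopython's Entrez module (an assumption: Biopython is installed, and the contact e-mail is a placeholder required by NCBI's usage policy):

```python
# Sketch: list SARS-CoV-2 nucleotide records through NCBI Entrez using the
# taxonomy id 2697049 mentioned above. Assumes Biopython is installed.
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; NCBI asks for a contact address

handle = Entrez.esearch(db="nucleotide", term="txid2697049[Organism:exp]",
                        retmax=5)
result = Entrez.read(handle)
handle.close()
print(result["Count"], "records found; first ids:", result["IdList"])
```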
The SARS-CoV-2 genome sequences can also be accessed directly from the NCBI's GenBank database ([https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs](https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs)). Related information includes where a virus was isolated from. Some of these GenBank files include only sequences and some of them provide annotations about genome locations of important sequence features.
This tutorial initially uses two genome sequences: one for SARS-CoV-2 and another one for SARS-CoV. Regarding the former, the GenBank entry LC528232 was selected. LC528232 was created for a strain which was isolated in Japan. Regarding the latter, the GenBank entry AP006557, which was deposited in 2006, was selected.
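For readers who prefer to follow along outside CLC Genomics Workbench, both GenBank records can be downloaded with a short script (again a hedged sketch with Biopython; the accession ids are the ones used in this tutorial):

```python
# Sketch: download the two GenBank records used in this tutorial via NCBI
# Entrez. Assumes Biopython is installed; the e-mail is a required placeholder.
from Bio import Entrez, SeqIO

Entrez.email = "[email protected]"

records = {}
for accession in ["LC528232", "AP006557"]:
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="gb", retmode="text")
    records[accession] = SeqIO.read(handle, "genbank")
    handle.close()

for acc, rec in records.items():
    print(acc, len(rec.seq), "bp,", len(rec.features), "annotated features")
```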
## Genomic features
The GenBank files include annotations about sequence features, which can be visualised and analysed using various tools. In this report, the analyses were carried out using CLC Genomics Workbench 20.0.3. GenBank files were initially imported into this tool, which was then used to analyse the annotated sequence features in more detail. For example, the surface glycoprotein denoted as 'S' is shown in Figure 1.
In order to understand the mutations in the surface glycoprotein, a new entry for the related coding sequence (CDS) was created in CLC Genomics Workbench. The corresponding CDS feature was selected and the nucleotides were copied to create this new entry. CLC Genomics Workbench was then used to display restriction sites and protein translation information (Figure 2).
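The same copy step can be done programmatically by pulling the CDS out of the GenBank annotations. The sketch below assumes the feature carries a `gene` qualifier with value 'S', as shown in Figure 1, and will raise `StopIteration` if no such feature exists:

```python
# Sketch: extract the surface glycoprotein ('S') CDS and translate it.
# Assumes the LC528232 record is fetched as in the earlier snippet and that
# the CDS carries a 'gene' qualifier equal to 'S' (see Figure 1).
from Bio import Entrez, SeqIO

Entrez.email = "[email protected]"  # placeholder
handle = Entrez.efetch(db="nucleotide", id="LC528232",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

s_cds = next(f for f in record.features
             if f.type == "CDS" and "S" in f.qualifiers.get("gene", []))
cds_seq = s_cds.extract(record.seq)         # nucleotide CDS
protein = cds_seq.translate(to_stop=True)   # amino acid sequence
print(len(cds_seq), "nt ->", len(protein), "aa")
```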
Figure 1: SARS-CoV-2 sequence features that are captured in GenBank files are shown.
## Predicting secondary structures
Understanding secondary structure formations and where mutations occur can provide insights into the effects of these mutations in the SARS-CoV-2 surface glycoprotein CDS. Hence, a new entry for the corresponding amino acid sequences was created in CLC Genomics Workbench by using the 'Toolbox - Classical Sequence Analysis - Nucleotide Analysis - Translate to Protein' option. Alpha-helix and beta-strand formations were predicted using the 'Toolbox - Classical Sequence Analysis - Protein Analysis - Predict Secondary Structure' option. The inferred information was then incorporated as annotations into the entry for the amino acid sequences (Figure 3).
Figure 3: The surface glycoprotein secondary structure predictions. Red arrows represent beta-strands and blue arrows represent alpha-helices.
Figure 2: Restriction sites in the surface glycoprotein CDS, labelled as ‘S’.
## Sequence alignment
It is reported that the SARS-CoV-2 surface glycoprotein is optimised to bind to human ACE2 receptors [4]. In order to analyse the effects of these mutations further, the protein sequence can be aligned to previously known similar sequences. Figure 4 shows the alignment of the surface glycoprotein amino acid sequences from SARS-CoV-2 (LC528232) and SARS-CoV (AP006557). Although the alignment shows high similarities between the amino acid sequences, gaps and mutations also exist.
A more detailed view of the alignment of the two protein sequences can be seen in Figure 5. Secondary structures are integrated into the view. The surface exposed regions are compared using different options such as the Kyte-Doolittle scale. Additional options can be configured from the 'Alignment Settings - Protein info' tab.
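An equivalent global alignment can be reproduced with Biopython's `PairwiseAligner`. The BLOSUM62 scoring and gap penalties below are common defaults and an assumption here, not necessarily the parameters CLC Genomics Workbench uses internally; short sequence fragments stand in for the full translated glycoproteins:

```python
# Sketch: global alignment of two surface glycoprotein amino acid sequences.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

# In practice these would be the full glycoprotein sequences translated from
# LC528232 and AP006557; short N-terminal fragments are used for illustration.
protein_cov2 = "MFVFLVLLPLVSSQCVNLT"
protein_cov = "MFIFLLFLTLTSGSDLDRC"

alignment = aligner.align(protein_cov2, protein_cov)[0]
print("score:", alignment.score)
print(alignment)
```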
Wan and co-workers reported that mutations in the surface glycoprotein's five amino acids can play a critical role in binding to the human ACE2 receptors [5]. Andersen and co-workers provide additional insights, including a sixth critical amino acid. Here, we integrated secondary structure predictions and realigned the sequences in order to examine how these
Figure 4: Aligned surface glycoprotein amino acid sequences from SARS-CoV-2 (LC528232) and SARS-CoV (AP006557).
Figure 5: Aligned surface glycoprotein amino acid sequences from SARS-CoV-2 (LC528232) and SARS-CoV (AP006557). Red blocks at the top represent beta-strands and blue blocks represent alpha-helices.
mutations may affect the binding of the SARS-CoV-2 protein (Figure 6). The alignment of secondary structure predictions shows the addition of three beta strands and the loss of one beta strand. These changes may affect the folding of the protein and hence its binding.
Predictions for secondary structures may reveal some information about the binding of the surface glycoprotein. However, 3D models may help in understanding the surfaces that are exposed and are likely to bind to other molecules. In order to understand the effects of the mutated region in Figure 6 (shown using the dashed box), an existing 3D model of the SARS-CoV protein was searched for. CLC Genomics Workbench's 'Toolbox - Classical Sequence Analysis - Protein Analysis - Find and Model Structure' option was used to search for existing 3D models. The first-ranked entry with the highest 'Match identity' and 'Coverage' was selected. The Protein Data Bank (PDB) [6] identifier of this entry is '5X58 Prefusion Structure of SARS-CoV Spike Glycoprotein, Conformation 1' [7]. Using CLC Genomics Workbench's 'Project Settings - Sequence tools - Show Sequence' option, the 'Chain C' sequence of 5X58 was first displayed and then aligned to the SARS-CoV protein sequence (AP006557) using the 'Align to Existing Sequence' option. The first picture on the left in Figure 7 shows the structure of the chain. The SARS-CoV amino acid sequences from the area with the key mutations (shown using the dashed box in Figure 6) were highlighted using the sequence editor. The second picture on the right in Figure 7 shows the corresponding amino acid areas highlighted in the 3D structural view.
A similar analysis was also carried out for the SARS-CoV-2 Chain C. The '6W41' entry [8], including the details about the crystal structure of the SARS-CoV-2 receptor binding domain, was downloaded from the Protein Data Bank. A collection of related entries can be found
Figure 6: Five key mutations are shown using the blue boxes. These mutations are reported to change the binding affinity of the surface glycoprotein to the human ACE2 receptors. Comparisons are displayed for SARS-CoV-2 (LC528232) and SARS-CoV (AP006557).
at the European Bioinformatics Institute's COVID-19 page [9]. The sequence from the '6W41' model was then aligned to the SARS-CoV-2 sequence (LC528232). Compared to the SARS-CoV secondary structures, both the CLC Genomics Workbench predictions and the SARS-CoV-2 '6W41' model reveal additional beta strands in the mutated region (shown using the dashed box in Figure 6) of the surface glycoprotein. The 3D view of the binding region is shown in Figure 8. The second picture on the right in Figure 8 shows the corresponding amino acid areas highlighted in the 3D structural view.
CLC Genomics Workbench was then used to align the two SARS-CoV-2 and SARS-CoV 3D structures using the 'Project Settings - Structure tools - Align Protein Structure' option. Figure 9 shows the alignment using red and blue colours for SARS-CoV-2 and SARS-CoV, respectively. The third picture on the right shows the region that aligns with the ACE2 receptors during binding [10]. The middle picture shows the four key amino acid sequences that mutated in SARS-CoV-2. These key mutations affect the binding affinity [10].
Figure 7: The binding region in SARS-CoV. The left picture shows the SARS-CoV Chain C, which binds to the human ACE2 receptors. The right picture shows the area corresponding to the highlighted sequence below, which represent the mutation area shown using the dashed box in Figure 6.
Figure 8: The binding region in SARS-CoV-2. The sequences from 6W41 and LC528232 were aligned. The left picture shows the SARS-CoV-2 Chain C, which binds to the human ACE2 receptors. The right picture shows the area corresponding to the highlighted sequence below, which represents the mutation area shown using the dashed box in Figure 6.
Figure 9: SARS-CoV-2 (in red colour) and SARS-CoV (in blue colour) 3D models of the binding regions to ACE2 sites are aligned. The middle picture shows the locations of the four key mutations in SARS-CoV-2. The third picture on the right shows the ACE2 binding area. SARS-CoV-2 sequences are shown at the bottom and the highlighted sequences are displayed using blue boxes.
## Discussion
In this report, insights from the sequence analysis of SARS-CoV-2 were presented. Genome-wide comparisons between SARS-CoV-2 and SARS-CoV shed further light on the potential effects of key mutations in the surface glycoprotein amino acid sequences. Our analysis is in line with the current reports [10, 11]. Some of the mutations may play an important role in binding to the human ACE2 receptors, and several mutations may have strengthened the binding affinity of the surface glycoprotein. Both the predictions and the 3D models reveal additional beta strands and hydrogen bonds. These additional mutations may cause changes in structural conformations and lead to a higher binding affinity.
This report has been prepared in a tutorial style using CLC Genomics Workbench. The approach can be adopted by biologists and others to gain an initial understanding of SARS-CoV-2 mutations, and their relationships to SARS-CoV and other closely related viruses. Initial findings can then be explored further by using other specialised tools and approaches.
Computational methods are especially promising. Machine learning, artificial intelligence, data integration and mining, visualisation, computational and mathematical modelling for key biochemical interactions and disease control mechanisms can usefully be applied to provide solutions in a cost- and time- effective manner. We hope that this report is useful to those who wish to understand essential information about SARS-CoV-2 for subsequent analyses.
## Acknowledgements
We thank QIAGEN for providing two months extended license beyond the two-week trial licence in order to utilise the CLC Genomics Workbench tool. The extended license was advertised by QIAGEN and was subsequently requested by the author.
## References
* [1] Zhou, P.; Yang, X.-L.; Wang, X.-G.; Hu, B.; Zhang, L.; Zhang, W.; Si, H.-R.; Zhu, Y.; Li, B.; Huang, C.-L.; Chen, H.-D.; Chen, J.; Luo, Y.; Guo, H.; Jiang, R.-D.; Liu, M.-Q.; Chen, Y.; Shen, X.-R.; Wang, X.; Zheng, X.-S.; Zhao, K.; Chen, Q.-J.; Deng, F.; Liu, L.-L.; Yan, B.; Zhan, F.-X.; Wang, Y.-Y.; Xiao, G.-F.; Shi, Z.-L. A pneumonia outbreak associated with a new coronavirus of probable bat origin. _Nature_ **2020**, _579_, 270-273.
* [2] The National Center for Biotechnology Information, Taxonomy Browser. [https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi](https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi).
* [3] The National Center for Biotechnology Information, Virus Database. [https://www.ncbi.nlm.nih.gov/labs/virus/vssi](https://www.ncbi.nlm.nih.gov/labs/virus/vssi).
* [4] Andersen, K. G.; Rambaut, A.; Lipkin, W. I.; Holmes, E. C.; Garry, R. F. The proximal origin of SARS-CoV-2. _Nature Medicine_ **2020**.
* [5] Wan, Y.; Shang, J.; Graham, R.; Baric, R. S.; Li, F. Receptor Recognition by the Novel Coronavirus from Wuhan: an Analysis Based on Decade-Long Structural Studies of SARS Coronavirus. _Journal of Virology_ **2020**, _94_.
* [6] Berman, H.; Henrick, K.; Nakamura, H. Announcing the worldwide Protein Data Bank. _Nature Structural & Molecular Biology_ **2003**, _10_, 980.
* [7] Yuan, Y.; Cao, D.; Zhang, Y.; Ma, J.; Qi, J.; Wang, Q.; Lu, G.; Wu, Y.; Yan, J.; Shi, Y.; Zhang, X.; Gao, G. F. Cryo-EM structures of MERS-CoV and SARS-CoV spike glycoproteins reveal the dynamic receptor binding domains. _Nature Communications_ **2017**, _8_, 15092.
* [8] Protein Data Bank, Structure of 2019-nCoV chimeric receptor-binding domain complexed with its receptor human ACE2. [http://www.rcsb.org/structure/6VW1](http://www.rcsb.org/structure/6VW1).
* [9] The European Bioinformatics Institute, COVID-19 Outbreak. [https://www.ebi.ac.uk/ena/pathogens/covid-19](https://www.ebi.ac.uk/ena/pathogens/covid-19).
* [10] Shang, J.; Ye, G.; Shi, K.; Wan, Y.; Luo, C.; Aihara, H.; Geng, Q.; Auerbach, A.; Li, F. Structural basis of receptor recognition by SARS-CoV-2. _Nature_ **2020**.
* [11] Wu, F.; Zhao, S.; Yu, B.; Chen, Y.-M.; Wang, W.; Song, Z.-G.; Hu, Y.; Tao, Z.-W.; Tian, J.-H.; Pei, Y.-Y.; Yuan, M.-L.; Zhang, Y.-L.; Dai, F.-H.; Liu, Y.; Wang, Q.-M.; Zheng, J.-J.; Xu, L.; Holmes, E. C.; Zhang, Y.-Z. A new coronavirus associated with human respiratory disease in China. _Nature_ **2020**, _579_, 265-269. | COVID-19 has affected the world tremendously. It is critical that biological experiments and clinical designs are informed by computational approaches for time- and cost-effective solutions. Comparative analyses in particular can play a key role in revealing structural changes in proteins due to mutations, which can lead to behavioural changes, such as the increased binding of the SARS-CoV-2 surface glycoprotein to human ACE2 receptors. The aim of this report is to provide an easy-to-follow tutorial for biologists and others without delving into different bioinformatics tools. More complex analyses such as the use of large-scale computational methods can then be utilised. Starting with a SARS-CoV-2 genome sequence, the report shows visualising DNA sequence features, deriving amino acid sequences, and aligning different genomes to analyse mutations and differences. The report provides further insights into how the SARS-CoV-2 surface glycoprotein mutated for higher binding affinity to human ACE2 receptors, compared to the SARS-CoV protein, by integrating existing 3D protein models.
SARS-CoV-2, SARS-CoV, Comparative Genomics, Surface Glycoprotein, CLC Genomics Workbench | Provide a brief summary of the text. | 237 |
arxiv-format/2405_19413v1.md | # VisTA-SR: Improving the Accuracy and Resolution of Low-Cost Thermal Imaging Cameras for Agriculture
Heesup Yun, Sassoum Lo, Christine H. Diepenbrock, Brian N. Bailey, J. Mason Earles
University of California, Davis
1 Shields Ave, Davis, CA 95616
{hspyun, ssslo, chdiepenbrock, bnbailey, jmearles}@ucdavis.edu
## 1 Introduction
Agricultural research often uses crop temperature measurement to detect abnormal plant characteristics, calculate crop water stress indices, or model complex biophysical interactions. Various methods have been attempted to measure crop temperature; thermal imaging cameras are widely used because they can quickly measure the temperature at many points in an image [20]. Also, thermal imaging can quickly and non-invasively measure crop temperature compared to other temperature measurement devices. These benefits can help identify areas where crops are experiencing disease or stress, allowing for timely intervention.
Previous studies using thermal cameras in agriculture have utilized high-resolution industrial-grade thermal cameras [16]. However, these cameras are very expensive, often costing over $10,000, which limits their accessibility. This low accessibility can restrict the widespread deployment of thermal cameras in agriculture, especially for researchers who cannot afford costly sensors.
An alternative approach is to use low-cost sensors. Recent developments in thermal image sensors and image processing technologies have made various affordable consumer-grade thermal cameras available. These thermal cameras have the advantages of being relatively lightweight and easy to operate. Therefore, there have been attempts to use low-cost thermal cameras in agriculture [5, 14, 15]. For example, Bhandari [4] obtained an image mask from visible light images and applied it to thermal images to measure wheat canopy temperature and estimate water stress. Another study used a low-cost thermal camera to calculate crop canopy temperature automatically [15]. However, these low-cost thermal cameras have not been able to completely replace high-resolution industrial thermal cameras due to their lower pixel count and resolution.
Thermal camera resolution has a significant impact on the capability and accuracy of agricultural research. For example, low-resolution thermal cameras may only be able to recognize crops at the plant level rather than the organ level, making it challenging to observe temperature differences between leaves, stems, flowers, and fruits, for instance. This limited feature resolution restricts the temperature measurement capability at various phenological stages, which is essential for developing precise crop biophysical models. Therefore, improving the quality of low-cost thermal images can increase the feasibility of using low-cost thermal cameras in agriculture.
Enhancing the resolution of low-resolution thermal images is a challenging task. It is an ill-posed problem, as multiple high-resolution ground truths can exist for a single low-resolution image. Nevertheless, various computer vision and machine learning techniques have been proposed to overcome the challenge. Particularly with the recent advancements in deep learning, there have been many reported cases of upsampling low-resolution images to high-resolution. Some researchers have used ResNet and GANs to perform image super-resolution [21]. Others have combined multiple low-resolution images to create a single high-resolution image [26]. Some have also used multi-modal data to improve the resolution of the data [2, 8, 17]. However, research on improving the quality of thermal images in the agricultural domain has been limited. Applying these techniques to agricultural thermal images could potentially improve the image quality of low-resolution thermal cameras, allowing them to replace high-resolution thermal cameras.
Therefore, this paper studies how computer vision techniques can improve the image quality of low-resolution thermal cameras for agricultural applications. We propose a deep learning network that leverages complementary information from RGB and thermal image domains for both image alignment and super-resolution enhancement.
The specific contributions of this paper are as follows:
* Calibration and validation of the temperature measurement of a low-cost thermal camera in the agricultural domain
* Acquisition of a paired low-resolution thermal camera image dataset, as well as RGB and high-resolution thermal camera data in the agricultural domain
* Proposal of an integrated image alignment and super-resolution deep learning algorithm to improve the image quality of low-resolution thermal cameras by combining RGB and thermal images
## 2 Related Work
### Traditional Image Enhancement
Before the advent of deep learning-based image sharpening approaches, filter-based techniques were used to enhance image quality, including fundamental Gaussian kernels and image sharpening kernels such as Bilinear filtering [28], Bilateral filtering [30], and Lanczos filtering [11]. Despite their ability to reduce image noise and enhance object edges, these approaches have been criticized for introducing artificial noises not present in the original image or producing unsatisfactory sharpness.
### Deep Learning Based Super Resolution
Recently, there have been attempts to improve sharpness using deep learning. These attempts include making super-resolution images from low-resolution images using ResNet and GANs, resulting in various developed methods [9, 21, 31]. These methods have shown the ability to restore low-resolution images with higher quality compared to traditional filter-based algorithms. However, they still face challenges in overcoming the ill-posed problem of creating shapes that do not exist in the original image.
### Multi-Image or Multi-Modal Super Resolution
To address the ill-posed problem in super-resolution methods, attempts have been made to create a single high-resolution image using various low-resolution or complementary information. For example, one approach is to utilize the high-resolution panchromatic channel of satellite imagery to enhance the sharpness of lower-resolution channels [10]. Another approach is to combine information from multiple frames to improve the sharpness of thermal images [7]. Additionally, multi-modal super-resolution techniques that combine RGB and thermal information have also been tried [2, 8, 17].
Figure 1: Structure of the proposed VisTA-SR network. The network has two main stages: the Image Alignment and the Super-Resolution Network. The Image Alignment aligns the RGB and thermal images, while the Super-Resolution Network enhances the resolution of the thermal image.
### Use Cases of Thermal Cameras in Agriculture
Most agricultural research studies have traditionally relied on high-resolution thermal cameras in their research. For example, Gonzalez-Dugo [16] showed promising results assessing water stress within a commercial orchard using a high-resolution thermal camera, which costs more than $20,000. Yan [33] recently employed a Pro SC TIR camera (640x512 resolution, $17,250) to estimate evaporation, transpiration, and evapotranspiration, crucial parameters for understanding water dynamics in agricultural systems. However, these cameras can be prohibitively expensive, limiting their accessibility for many researchers and farmers.
In recent years, the emergence of low-cost thermal cameras has opened up new possibilities for agricultural applications. Several studies have explored the use of low-cost thermal cameras in agricultural research. Garcia-Tejero [13] compared the performance of a low-cost FLIR One camera (80x60 resolution, $400) with a high-end FLIR SC660 camera (640x480 resolution, $20,000) for assessing crop water status. They found that the low-cost camera was able to provide valuable insights, demonstrating the potential for more affordable thermal imaging solutions. Similarly, Iseki [18] used a FLIR C2 camera (80x60 resolution, $500) to estimate leaf stomatal conductance, a key indicator of plant water status. Parihar [25] utilized a FLIR E6 camera (240x180 resolution, $2,000) for irrigation scheduling of horticultural plants, demonstrating its utility in optimizing water use. While low-cost thermal cameras offer an attractive alternative, their lower resolution and image quality compared to high-end counterparts may limit their ability to provide the same level of detailed information. Additionally, the temperature accuracy of low-cost cameras in various environmental conditions and crop types needs further investigation. Nonetheless, the studies reviewed here highlight the potential of low-cost thermal cameras in agricultural research.
## 3 Materials and Methods
### Thermal Cameras
In this study, three types of thermal cameras were utilized. Table 1 shows the specifications of the thermal cameras. The VarioCAM HD camera, known for its high spatial resolution and temperature accuracy, was primarily used to create a dataset for temperature accuracy validation. The VarioCAM HD images were collected using their proprietary software on the Windows Operating System. The FLIR Boson camera, with a resolution of 640x512, was employed to capture high-resolution thermal image data in the field. Positioned between high-end and consumer-grade thermal cameras in terms of price, the FLIR Boson camera offered a lightweight form factor and flexible video output interface for easy field image capture. FLIR Boson images were collected from the ROS-based system on Ubuntu PC. Lastly, the FLIR One Pro, a low-cost and low-resolution thermal camera, was used in this study. It has a thermal resolution of 160x120 and an RGB camera resolution of 1440x1080. FLIR One Pro image acquisition and storage were performed using a custom Swift-based app developed with the FLIR Mobile API on an iPhone.
### Low Cost Thermal Camera Calibration
Radiometric thermal cameras have a logarithmic relationship between the digital number and temperature [23, 29]. The parameters for converting the digital number to temperature are stored in the EXIF tag information of the FLIR radiometric JPEG images. These parameters, which are pre-calibrated values from the factory, are used to convert the digital numbers of the thermal imaging camera to temperatures using Equation 1. Upon comparing the factory parameters of different thermal imaging cameras, it was observed that only the values of \(R_{1}\) and \(O\) differed, while the values of \(R_{2}\), \(B\), and \(F\) remained constant. The parameter \(B\) is derived from the Planck constant \(h\) and the Boltzmann constant \(k_{b}\), and the value of the parameter \(F\) is 1. For the FLIR One Pro cameras, \(R_{2}\) was fixed at 0.0125, and \(R_{1}\) and \(O\) are empirically calibrated depending on the individual camera.
\\[\\text{Temperature}\\,(^{\\circ}\\mathrm{C})=\\frac{B}{\\ln(\\frac{R_{1}}{R_{2}(DN+O)} )+F}-273.15 \\tag{1}\\]
However, the accuracy of these factory parameters cannot be fully trusted as the manufacturer does not fully guarantee the temperature accuracy of the low-cost thermal cameras. To ensure the accuracy of temperature measurements, it is necessary to recalibrate the parameters of the thermal imaging camera. Therefore, the optimization process focused on optimizing the values of \(R_{1}\) and \(O\). The optimization was performed using the Nelder-Mead method [24], which is a widely used optimization algorithm, with a tolerance of \(10^{-6}\). The optimization process was implemented using the 'scipy.optimize.minimize' function in Python.
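A minimal sketch of this calibration step is given below (the blackbody digital numbers, reference temperatures, the value of \(B\), and the initial guesses are illustrative placeholders, not the actual experimental values):

```python
# Sketch: recalibrate R1 and O of Eq. (1) against reference temperatures
# using the Nelder-Mead method, as described in the text.
import numpy as np
from scipy.optimize import minimize

R2, B, F = 0.0125, 1428.0, 1.0   # R2 fixed for FLIR One Pro; B is illustrative

def dn_to_celsius(dn, r1, o):
    """Equation (1): radiometric digital number -> temperature in deg C."""
    return B / np.log(r1 / (R2 * (dn + o)) + F) - 273.15

def cost(params, dn, t_ref):
    r1, o = params
    return np.mean((dn_to_celsius(dn, r1, o) - t_ref) ** 2)

# dn: camera digital numbers recorded at thermocouple temperatures t_ref
dn = np.array([13500.0, 14800.0, 16900.0, 21000.0])   # illustrative values
t_ref = np.array([4.0, 20.0, 40.0, 80.0])             # illustrative values

res = minimize(cost, x0=[17000.0, -7000.0], args=(dn, t_ref),
               method="Nelder-Mead", tol=1e-6)
r1_cal, o_cal = res.x
print("calibrated R1, O:", r1_cal, o_cal)
```

The optimized values can then be substituted back into Equation 1 for subsequent temperature conversion.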
Experiments were conducted to verify the temperature accuracy of the FLIR One Pro thermal imaging camera. The surface temperature of a controlled water bath was measured using a thermocouple with a digital data logger. The thermocouple measured the temperature starting from \\(4.0\\,^{\\circ}\\mathrm{C}\\), the initial temperature of the cold water, and reaching \\(100.0\\,^{\\circ}\\mathrm{C}\\), the boiling point of water. The air temperature and relative humidity were maintained during the experiment at \\(24.0\\,^{\\circ}\\mathrm{C}\\) and 40%.
An initial attempt was made to directly match features between the low-resolution and high-resolution image pair, which led to unstable matching and alignment results.
Since the difference in field of view between the two images is only due to the scale difference from the image resolutions and the translational offsets caused by the capture timings, template matching [27] was performed to robustly match the images: the high-resolution image was set as the template image \(T\), the Normalized Cross Correlation (NCC) [6] between the template image and the low-resolution image \(I\) was calculated, the coordinates \(x^{*}\) and \(y^{*}\) where the NCC value was maximized were found, and the template image was resized to a predefined scale for this process.
\\[R(x,y)=\\sqrt{\\frac{\\sum_{x^{\\prime},y^{\\prime}}(T(x^{\\prime},y^{\\prime})-I(x+x ^{\\prime},y+y^{\\prime}))^{2}}{\\sum_{x^{\\prime},y^{\\prime}}T(x^{\\prime},y^{ \\prime})^{2}\\cdot\\sum_{x^{\\prime},y^{\\prime}}I(x+x^{\\prime},y+y^{\\prime})^{2}}} \\tag{2}\\]
\[(x^{*},y^{*})=\operatorname*{argmax}_{0\leq x<M,\ 0\leq y<N}R(x,y) \tag{3}\]
The template matching was performed using Python OpenCV code. Figure 4 illustrates the matching of the low-resolution and high-resolution thermal camera images. For cases where the NCC value was 0.75 or higher, the bounding box was calculated and limited to the area within the padding of the low-resolution thermal image coordinate system. Then, it was converted back to the coordinate system before resizing the template image. Image cropping was performed using the original resolution of the template and background images. The FLIR One Pro also has an integrated RGB camera, allowing simultaneous acquisition of RGB images. Therefore, the RGB images were also cropped using the template matching results.
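The matching step corresponds closely to OpenCV's built-in routines, as in the following sketch (the resize scale is a placeholder, and the 0.75 threshold follows the text):

```python
# Sketch of the NCC template matching between the resized high-resolution
# template and the low-resolution thermal image, following Eqs. (2)-(3).
import cv2
import numpy as np

def match_pair(template_hr, image_lr, scale, ncc_threshold=0.75):
    # Resize the high-resolution template to a predefined scale (placeholder).
    # The resized template must remain smaller than the search image.
    t = cv2.resize(template_hr, None, fx=scale, fy=scale,
                   interpolation=cv2.INTER_AREA).astype(np.float32)
    i = image_lr.astype(np.float32)
    # Normalized cross-correlation map; take the argmax as in Eq. (3).
    response = cv2.matchTemplate(i, t, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    if max_val < ncc_threshold:
        return None                       # reject unreliable matches
    x, y = max_loc
    h, w = t.shape[:2]
    return (x, y, w, h), max_val          # bounding box in low-res coordinates
```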
### Improving Image Resolution by Combining RGB and Thermal Imaging
In this paper, complementary information from the RGB image's structural details and the thermal imaging camera's intensity information is utilized to enhance the resolution of the low-resolution thermal imaging camera. The RGB and thermal images obtained from the FLIR One Pro have the same field of view, but they are not perfectly pixel-aligned due to differences in camera lens position and video stream delays, which poses a challenge in combining the two modalities for resolution improvement. We tested deep-learning based image registration methods such as Spatial Transformer Networks [19] and Deformable Field-based approaches [35]. However, those methods tended to learn a shortcut existing in the dataset, which is a mean offset of the images rather than the differences between the input images, resulting in unstable experimental results.
Therefore, a template matching method based on image intensity was employed to align the domain-transformed image and the thermal image, which yielded more stable results compared to other methods. Figure 5 illustrates the input RGB image, the RGB-to-thermal image translated by Cycle GAN, and the low-resolution thermal image to be aligned. Inspired by the approach of Arar et al. [3], the RGB image was first translated to the thermal imaging camera's domain using Cycle GAN [34]. Then, template matching was performed between the domain-translated RGB image and the input low-resolution thermal image. The maximum correlation value was calculated based on the image convolution operation from one image to another, which can be hardware-accelerated and integrated into a super-resolution module using PyTorch.
After aligning the domain-transformed RGB image with the thermal image, the original RGB image was also transformed using the alignment result. Subsequently, the RGB, domain-transformed, and low-resolution thermal images were combined and inputted into a ResNet-based Convolutional Neural Network (CNN). The output image was then fed into a Discriminator CNN for Generative Adversarial Network (GAN) training. This architecture is depicted in Figure 1, referred to as VisTA SR.
Except for CycleGAN [34] and Template Matching, the implementation followed that of SRGAN [21] and ESRGAN [31], and the loss function used is as follows:
Cycle Consistency Loss [34]:
\\[l_{\\text{Consi}}^{\\text{Cycle}}=|I_{\\text{RGB}}-G_{\\text{IR2RGB}}(G_{\\text{RGB 2IR}}(I_{\\text{RGB}}))| \\tag{4}\\]
Identity Loss [34]:
\\[l_{\\text{MSE}}^{\\text{Cycle}}=||I_{\\text{HR}}-G_{\\text{RGB2IR}}(I_{\\text{RGB }})|| \\tag{5}\\]
MSE Loss [21, 31]:
\\[l_{\\text{MSE}}^{\\text{SR}}=||I_{\\text{HR}}-G_{\\text{SR}}(I_{\\text{LR}},I_{ \\text{RGB}})|| \\tag{6}\\]
Content Loss [21, 31]:
\\[l_{\\text{VGG}}^{\\text{SR}}=||\\phi_{\\text{VGG}}(I_{\\text{HR}})-\\phi_{\\text{VGG }}(G_{\\text{SR}}(I_{\\text{LR}},I_{\\text{RGB}}))|| \\tag{7}\\]
Adversarial Loss [21, 31]:
\\[l_{\\text{Adv}}^{\\text{SR}}=-\\log D_{\\text{SR}}(G_{\\text{SR}}(I_{\\text{LR}},I_{ \\text{RGB}})) \\tag{8}\\]
Total Loss:
\\[l_{\\text{total},G}=(l_{\\text{Re}}^{\\text{Cycle}}+l_{\\text{MSE}}^{\\text{Cycle} })+(l_{\\text{MSE}}^{\\text{SR}}+l_{\\text{VGG}}^{\\text{SR}}+\\alpha l_{\\text{Adv}} ^{\\text{SR}}) \\tag{9}\\]
## 4 Results
### Low-Cost Thermal Camera Field Validation with a High-Fidelity Thermal Camera
Field data was collected to validate the temperature accuracy of the low-resolution thermal imaging camera in a real-world environment with crops and soil. The data was collected in the Garbanzo bean (_Cicer arietinum_) field located in Davis, California. The ground truth temperature values were measured using a VarioCAM HD camera and compared with the temperature measured by the FLIR One Pro thermal camera, and a total of 170 image pairs were collected on April 5, 2022. Image feature points were extracted from both images using the SIFT [22] feature extractor, and they were matched using the Flann matching algorithm [1]. Then, the homography between the two images was calculated, and outliers were removed using the RANSAC algorithm [12]. As a result, the temperature values from the corresponding points in the two images were compared (Figure 6).
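This matching pipeline maps directly onto standard OpenCV calls, sketched below (assumptions: 8-bit grayscale inputs and a Lowe-ratio threshold of 0.7, which is a common default rather than a value reported in the text):

```python
# Sketch: SIFT + FLANN matching and RANSAC homography between the
# FLIR One Pro and VarioCAM HD images, as described in the text.
import cv2
import numpy as np

def register(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # 1 = KD-tree
                                  dict(checks=50))
    matches = flann.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask   # H maps img_a coordinates onto img_b
```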
The matching result for the 170 image pairs is shown in Figure 7, and Table 3 summarizes the results. It indicates that the temperature measurement accuracy was improved from \\(R^{2}=0.86\\) to \\(R^{2}=0.89\\) after calibration, and the Root Mean Square Error (RMSE) was also improved from \\(1.52\\,{}^{\\circ}\\mathrm{C}\\) to \\(1.40\\,{}^{\\circ}\\mathrm{C}\\). Since using the factory parameters tends to overestimate the temperature when it is below \\(20\\,{}^{\\circ}\\mathrm{C}\\), as shown in Figure 3, the temperature values obtained using the factory parameters in Figure 6 also showed higher temperature measurements than the actual temperatures.
Table 3 also indicates that when calculating RMSE and \\(R^{2}\\) using only data between \\(15\\,{}^{\\circ}\\mathrm{C}\\) and \\(30\\,{}^{\\circ}\\mathrm{C}\\), the temperature measurements with calibrated parameters showed better accuracy. Considering the typical leaf temperature of plants, the accuracy within this temperature range is crucial for thermal cameras used in agriculture. Therefore, the thermal camera calibration in this study demonstrates the potential to enhance temperature measurement accuracy in agricultural research.
### VisTA SR Result
In 2022, a total of 2612 image pairs were collected from a warm-season grain legume field across the growing season by matching low-resolution (160x120, FLIR One Pro) thermal images with high-resolution (640x512, FLIR Boson)
Figure 4: Matching and aligning process of low-resolution and high-resolution thermal images
Figure 5: Matching RGB and thermal images using CycleGAN and template matching
Figure 6: An example of feature matching based temperature comparison between FLIR One Pro and VarioCam HD Camera
thermal images. 80% of these pairs were used as training data, while the remaining 20% were used for validation. The network was trained over 200 epochs with a batch size of 4. Figure 8 demonstrates the image conversion quality and image alignment performance of the CycleGAN module, which was trained simultaneously with the SR Network. As depicted in the example images, CycleGAN successfully translated the image domain and template matching successfully aligned low-resolution thermal images based on image intensity.
Figure 9 presents the results at multiple input image scales obtained from the VisTA-SR algorithm, which takes as input the combined RGB image aligned with CycleGAN and template matching, compared to the results of the Super-Resolution Generative Adversarial Network (SRGAN) algorithm [21, 31] that utilizes only the thermal image modality. Our VisTA-SR demonstrated higher sharpness by leveraging higher-frequency structural information from the RGB image. This demonstrates that VisTA-SR improved the ability to capture the thermal properties of smaller features at the organ level, as opposed to the plant level.
Table 4 compares the performance of bilinear interpolation, the Super-Resolution Generative Adversarial Network (SRGAN), and our proposed VisTA-SR algorithm. The bilinear interpolation method exhibited the highest Root Mean Square Error (RMSE) but also the highest Structural Similarity Index (SSIM [32]), while SRGAN and VisTA-SR demonstrated similar performance with RMSEs of \(2.74\,^{\circ}\mathrm{C}\) and \(2.75\,^{\circ}\mathrm{C}\). It can be inferred that the higher RMSE value of the bilinear algorithm is because SRGAN and VisTA-SR learned the temperature distribution of the training dataset, and bilinear's higher SSIM value is believed to be a result of the original dataset already being aligned by the template matching process. Additionally, SRGAN showed a higher Peak Signal-to-Noise Ratio (PSNR) than VisTA-SR, but VisTA-SR exhibited excellent visual quality, indicating that evaluating the performance of a super-resolution (SR) algorithm solely based on these image metrics is not ideal.
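For reference, the three metrics in Table 4 can be computed with NumPy and scikit-image as follows (a sketch; setting `data_range` to the temperature span of the ground truth is an assumption and should match how the images are scaled):

```python
# Sketch: RMSE / SSIM / PSNR between predicted and ground-truth thermal images.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def thermal_metrics(pred, gt, data_range=None):
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if data_range is None:
        data_range = gt.max() - gt.min()   # assumption: span of ground truth
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return rmse, ssim, psnr
```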
## 5 Conclusion & Future Work
This paper proposes a method to enhance temperature accuracy and image sharpness using a low-resolution thermal imaging camera for agricultural image acquisition. First, we conducted a calibration process to improve the temperature accuracy of the low-resolution thermal imaging camera, followed by field experiments for validation. It is confirmed that the temperature accuracy improved when using the calibrated parameters. We propose the VisTA-SR algorithm for converting low-resolution thermal images to high-resolution ones by aligning and combining RGB and low-resolution images. Through such improvements in temperature accuracy and image sharpness, we will be able to detect small temperature differences between crop tissues or parts, and analyze them in relation to genotypes, growth environments, growth stages, and various other factors.
One limitation was the difficulty of evaluating the performance of super-resolution algorithms on agricultural data using existing image metrics. Most super-resolution studies generate low-resolution images by down-sampling high-resolution images; in this case, the pixels of the low-resolution and high-resolution pairs are perfectly aligned, so the image metrics are proportional to the algorithm's super-resolution performance. However, in our study, low-resolution thermal images were actually collected alongside high-resolution images. Therefore, the output of our algorithm from a low-resolution input may not have a perfect pixel match with the high-resolution image. Considering the characteristics of these image metrics, which
\\begin{table}
\\begin{tabular}{l c c c} \\hline \\hline Technique & RMSE (\\({}^{\\circ}\\)C) & SSIM & PSNR \\\\ \\hline Bilinear & 2.84 & 0.74 & 23.84 \\\\ SRGAN[31] & 2.74 & 0.63 & 24.26 \\\\ VisTA SR (Ours) & 2.75 & 0.63 & 23.67 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: RMSE, SSIM, and PSNR comparison of Bilinear, SRGAN, and VisTA SR algorithms
\\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{All data} & \multicolumn{2}{c}{\(15\,^{\circ}\mathrm{C}\) - \(30\,^{\circ}\mathrm{C}\)} \\ & \(R^{2}\) & RMSE (\({}^{\circ}\)C) & \(R^{2}\) & RMSE (\({}^{\circ}\)C) \\ \hline Factory & 0.86 & 1.52 & 0.83 & 1.52 \\ Calibrated & 0.89 & 1.40 & 0.86 & 1.39 \\ \hline \hline \end{tabular}
\\end{table}
Table 3: Low-cost thermal camera (FLIR One Pro) temperature accuracy validation result before and after parameter calibration
Figure 7: Feature matching based temperature comparison result. Plotted all matched temperature points for a total of 170 images
change significantly with even a few pixels of misalignment, it can be inferred that the image evaluation metrics used in Table 4 reflected errors derived from the multiple-camera-system problem, even though VisTA-SR produced visually better results than the others. However, from an agricultural research perspective, temperature accuracy and the ability to detect plants are important for understanding their complex biophysical characteristics. In other words, future research should develop specialized thermal image metrics for agricultural data that reflect these features for performance evaluation. In future studies, we will examine whether the thermal image improvement algorithm maintains, improves, or hallucinates temperature information in thermal images. Also, using the thermal images processed with the algorithm developed in this paper, we will estimate biophysical parameters such as stomatal conductance in plants and compare accuracy with original and high-resolution image inputs.
## 6 Acknowledgement
This work was financially supported by the Bill and Melinda Gates Foundation, Project ID: INV- 002830, GxExM Innovation in Intelligence for Climate Adaptation.
## References
* [1] Marius Muja and David G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In _Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP)_, 2009. SciTePress - Science and Technology Publications.
* [2] Feras Almasri and Olivier Debeir. RGB Guided Thermal Super-Resolution Enhancement. In _2018 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech)_, pages 1-5, 2018.
* [3] Moab Arar, Yifach Ginger, Dov Danon, Amit H. Bermano, and Daniel Cohen-Or. Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image
Figure 8: Input low-resolution images, domain translated images, and aligned images using CycleGAN and template matching
Figure 9: Comparison of input RGB, low-resolution thermal image input, SRGAN[21] output in multiple image scales (64x64, 128x128, and 256x256), VisTA-SR output, and ground truth high-resolution thermal image
Translation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 13410-13419, 2020.
* [4] Mahendra Bhandari. Use of infrared thermal imaging for estimating canopy temperature in wheat and maize. Master's thesis, West Texas A&M University, 2016.
* [5] Pedro Jose Blaya-Ros, Victor Blanco, Rafael Domingo, Fulgencio Soto-Valles, and Roque Torres-Sanchez. Feasibility of Low-Cost Thermal Imaging for Monitoring Water Stress in Young and Mature Sweet Cherry Trees. _Applied Sciences_, 10(16):5461, 2020.
* [6] Kai Briechle and Uwe D. Hanebeck. Template matching using fast normalized cross correlation. In _Aerospace/Defense Sensing, Simulation, and Controls_, pages 95-102, Orlando, FL, 2001.
* [7] Pasquale Cascarano, Francesco Corsini, Stefano Gandolfi, Elena Loli Piccolomini, Emanuele Mandanici, Luca Tavasci, and Fabiana Zama. Super-Resolution of Thermal Images Using an Automatic Total Variation Based Method. _Remote Sensing_, 12(10):1642, 2020.
* [8] Xiaohui Chen, Guangtao Zhai, Jia Wang, Chunjia Hu, and Yuanchun Chen. Color guided thermal image super resolution. In _2016 Visual Communications and Image Processing (VCIP)_, pages 1-4, 2016.
* [9] Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, and Chao Dong. Activating More Pixels in Image Super-Resolution Transformer. In _2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 22367-22377, Vancouver, BC, Canada, 2023. IEEE.
* [10] Farzaneh Dadras Javan, Farhand Samadzadegan, Soroosh Mehravar, Ahmad Toosi, Reza Khatami, and Alfred Stein. A review of image fusion techniques for pan-sharpening of high-resolution satellite imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, 171:101-117, 2021.
* [11] Claude E. Duchon. Lanczos Filtering in One and Two Dimensions. _Journal of Applied Meteorology and Climatology_, 18(8):1016-1022, 1979.
* [12] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. _Communications of the ACM_, 24(6):381-395, 1981.
* [13] Ivan Garcia-Tejero, Carlos Ortega-Arevalo, Manuel Iglesias-Contreras, Jose Moreno, Luciene Souza, Simon Tavira, and Victor Duran-Zuazo. Assessing the Crop-Water Status in Almond (Prunus dulcis Mill.) Trees via Thermal Imaging Camera Connected to Smartphone. _Sensors_, 18(4):1050, 2018.
* [14] Jaime Gimenez-Gallego, Juan D. Gonzalez-Teruel, Fulgencio Soto-Valles, Manuel Jimenez-Buendia, Honorio Navarro-Hellin, and Roque Torres-Sanchez. Intelligent thermal image-based sensor for affordable measurement of crop canopy temperature. _Computers and Electronics in Agriculture_, 188:106319, 2021.
* [15] Jaime Gimenez-Gallego, Juan D. Gonzalez-Teruel, Pedro J. Blaya-Ros, Ana B. Toledo-Moreo, Rafael Domingo-Miguel, and Roque Torres-Sanchez. Automatic Crop Canopy Temperature Measurement Using a Low-Cost Image-Based Thermal Sensor: Application in a Pomegranate Orchard under a Permanent Shade Net House. _Sensors_, 23(6):2915, 2023.
* [16] V. Gonzalez-Dugo, P. Zarco-Tejada, E. Nicolas, P. A. Nortes, J. J. Alarcon, D. S. Intrigliolo, and E. Frereres. Using high resolution UAV thermal imagery to assess the variability in the water status of five fruit tree species within a commercial orchard. _Precision Agriculture_, 14(6):660-678, 2013.
* [17] Honey Gupta and Kaushik Mitra. Toward Unaligned Guided Thermal Super-Resolution. _IEEE Transactions on Image Processing_, 31:433-445, 2022.
* [18] Kohtaro Iseki and Olajumoke Olaleye. A new indicator of leaf stomatal conductance based on thermal imaging for field grown cowpea. _Plant Production Science_, 23(1):136-147, 2020.
* [19] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and koray kavukcuoglu. Spatial Transformer Networks. In _Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2015.
* [20] Azar Khorsandi, Abbas Hemmat, Seyed Ahmad Mireei, Rasoul Amirfattahi, and Parviz Ehsanzadeh. Plant temperature-based indices using infrared thermography for detecting water status in sesame under greenhouse conditions. _Agricultural Water Management_, 204:222-233, 2018.
* [21] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 4681-4690, 2017.
* [22] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. _International Journal of Computer Vision_, 60(2):91-110, 2004.
* [23] Waldemar Minkina and Sebastian Dudzik. _Infrared Thermography: Errors and Uncertainties_. Wiley, 1 edition, 2009.
* [24] J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. _The Computer Journal_, 7(4):308-313, 1965.
* [25] Gunjan Parihar, Sumit Saha, and Lalat Indu Giri. Application of infrared thermography for irrigation scheduling of horticulture plants. _Smart Agricultural Technology_, 1:100021, 2021.
* [26] Rafael Rivadeneira, Angel Sappa, and Boris Vintimilla. Multi-Image Super-Resolution for Thermal Images. In _Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications_, pages 635-642. SciTePress, 2022.
* [27] J.N. Sarvaiya, Suprava Patnaik, and Salman Bombaywala. Image Registration by Template Matching Using Normalized Cross-Correlation. In _2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies_, pages 819-822, 2009.
* [28] P. R. Smith. Bilinear interpolation of digital images. _Ultrac Microscopy_, 6(2):201-204, 1981.
* [29] Glenn J. Tattersall. Infrared thermography: A non-invasive window into thermal physiology. _Comparative Biochemistry and
* [30] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In _Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)_, pages 839-846, 1998.
* [31] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Chen Change Loy, Yu Qiao, and Xiaoou Tang. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks, 2018.
* [32] Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image Quality Assessment: From Error Visibility to Structural Similarity. _IEEE Transactions on Image Processing_, 13(4):600-612, 2004.
* [33] Chunhua Yan, Jiao Xiang, Longjun Qin, Bei Wang, Zhe Shi, Weiyang Xiao, Muhammad Hayat, and Guo Yu Qiu. High temporal and spatial resolution characteristics of evaporation, transpiration, and evapotranspiration from a subalpine wetland by an advanced UAV technology. _Journal of Hydrology_, 623:129748, 2023.
* [34] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 2223-2232, 2017.
* [35] Jing Zou, Bingchen Gao, Youyi Song, and Jing Qin. A review of deep learning-based deformable medical image registration. _Frontiers in Oncology_, 12, 2022. | Thermal cameras are an important tool for agricultural research because they allow for non-invasive measurement of plant temperature, which relates to important photochemical, hydraulic, and agronomic traits. Utilizing low-cost thermal cameras can lower the barrier to introducing thermal imaging in agricultural research and production. This paper presents an approach to improve the temperature accuracy and image quality of low-cost thermal imaging cameras for agricultural applications. Leveraging advancements in computer vision techniques, particularly deep learning networks, we propose a method, called **VisTA-SR** (**Vis**ual & **T**hermal **A**lignment and **S**uper-**R**esolution Enhancement), that combines RGB and thermal images to enhance the capabilities of low-resolution thermal cameras. The research includes calibration and validation of temperature measurements, acquisition of paired image datasets, and the development of a deep learning network tailored for agricultural thermal imaging. Our study addresses the challenges of image enhancement in the agricultural domain and explores the potential of low-cost thermal cameras to replace high-resolution industrial cameras. Experimental results demonstrate the effectiveness of our approach in enhancing temperature accuracy and image sharpness, paving the way for more accessible and efficient thermal imaging solutions in agriculture. | Condense the content of the following passage. | 225 |
arxiv-format/1801_00620v3.md | # Enhanced specular Andreev reflection in bilayer graphene
Abhiram Soori\\({}^{1,2}\\), Manas Ranjan Sahu\\({}^{1}\\), Anindya Das\\({}^{1}\\), and Subroto Mukerjee\\({}^{1}\\)
\\({}^{1}\\) Department of Physics, Indian Institute of Science, Bengaluru 560012, India.
\\({}^{2}\\) International Centre for Theoretical Sciences, Survey No. 151, Tata Institute of Fundamental Research, Shivakote, Hesaraghatta Hobli, Bengaluru 560089, India.
## I Introduction
Andreev reflection (AR) - a scattering process by which a current can be driven into a superconductor (SC) from a normal metal (NM) by applying a bias within the superconducting gap - was first discovered by Andreev [1] and has been extensively studied for several decades [2; 3]. Graphene, on the other hand, has attracted huge interest in the past decade owing to its electronic and material properties [4; 5; 6; 7]. Graphene is a semimetal whose electronic structure can be described by a Dirac Hamiltonian (with a vanishingly small mass). Andreev reflection has been studied both theoretically [8; 9; 10; 11; 12] and experimentally [13] in graphene. What makes Andreev reflection in graphene special is that it can be of two types: one where the reflected hole retraces the path of the incident electron (called retro-) and another where the reflected hole moves away without tracing back the path of the incident electron (called specular-) [9; 13]. Specular Andreev reflection has not been observed in graphene due to charge density fluctuations across the sample [13], but a weak qualitative agreement is observed in bilayer graphene [14; 15]. Bilayer graphene (BLG) [16] is a better candidate for observing specular Andreev reflection since its charge density fluctuations are much smaller than in monolayer graphene. In the experimental setup, a part of the BLG is kept in proximity to a SC, which induces superconducting correlations on the BLG. This can be seen in Fig. 3(a) of Ref. [14], which shows only a weak qualitative agreement between the experimental observations and the underlying theoretical calculations (note also the very different color scales of the experimental and theoretical plots required to arrive at even this level of agreement).
Generally speaking, Andreev reflection is a process where an electron incident from a normal metal into the superconductor results in a reflected hole. This is equivalent to saying that two electrons on the normal metal side- one from above the Fermi energy and one from below the Fermi energy pair up and go into the superconductor as a Cooper pair [2]. We use the latter convention for our analysis.
In a manner similar to that for Andreev reflection in monolayer graphene [9], retro- and specular Andreev reflection can also be understood in bilayer graphene [14; 15]. If both the electrons participating in the reflection come from the same side of the charge neutrality point (CNP), the Andreev reflection is of the retro type, while if the two electrons come from opposite sides of the CNP, the Andreev reflection is of the specular type. This is because the momentum of the reflected hole along the \(y\)-direction has to be the same as that of the incident electron. This means that when the hole originates from the same side of the CNP as the incident electron, the velocities along the \(y\)-direction of the two electrons participating in Andreev reflection have opposite signs. On the other hand, when the hole originates from the opposite side of the CNP, the velocities along the \(y\)-direction of the two electrons participating in Andreev reflection have the same sign. This is shown in Fig. 1(a).
Figure 1: The subgap bandstructures of the NM part of the NM-SC setup. (a) Zero Zeeman field in the NM part. The points \(R\) and \(R^{\prime}\) correspond to two electrons contributing to retro Andreev reflection, while the points \(S\) and \(S^{\prime}\) correspond to electrons contributing to specular Andreev reflection. (b) Finite Zeeman field \(E_{z0}\) in the NM part. The dispersions for up-spin and down-spin have their CNPs well separated energetically. Both the electron states shown contribute to specular Andreev reflection.
Furthermore, the two electrons must have opposite spin. This allows us to separate the CNPs for the up-spin and the down-spin bands by applying a Zeeman field. In this work, we add a Zeeman field \\(E_{z0}\\) to the NM part of the NM-SC junction on BLG and calculate the conductance spectrum as a function of chemical potential and bias energy. As shown in Fig. 1(b), for small chemical potential (\\(|\\mu|<E_{z0}\\)) and small bias (\\(|eV_{bias}|<E_{z0}-|\\mu|\\)) the Andreev reflection is specular. We discuss several features of the conductance spectrum in the presence of a Zeeman field, where the main highlight is the enhanced specular Andreev reflection (SAR) at zero chemical potential and zero bias energy.
The paper is organized as follows. In Sec. II, the calculation is presented. In Sec. III, we show the main results. In Sec. IV, a comparative analysis replacing the superconductor with normal metal is discussed. In Sec. V, connection to experiments is discussed. Finally, in Sec. VI, the work is summarized. In Appendix A, calculations for the system where the superconductor is replaced with normal metal are shown. In Appendix B, the system where the effect of step height is extended in the normal metal region is studied.
## II Calculation
The BLG Hamiltonian at either of the two degeneracy points is:
\\[H_{0}=\\hbar v(k_{x}\\sigma_{x}-k_{y}\\sigma_{y}\\lambda_{z})-t_{\\perp}(\\lambda_{ x}+\\lambda_{x}\\sigma_{z})/2, \\tag{1}\\]
where \\(\\vec{k}=(k_{x},k_{y})\\) is the momentum with respect to the \\(\\vec{K}\\) point at the top layer and for the bottom layer, \\(\\vec{k}=(k_{x},k_{y})\\) is the momentum with respect to \\(\\vec{K}^{\\prime}\\), \\(v\\) is the Fermi velocity and \\(t_{\\perp}\\) is the coupling between the two layers. The layer asymmetry term is absent in this Hamiltonian. The choice of basis is \\([u_{A1},u_{A2},u_{B1},u_{B2}]\\). \\(A\\) and \\(B\\) refer to two kinds of lattice points in each layer of graphene, while 1 and 2 refer to the two layers of graphene. \\(\\sigma\\)'s are the Pauli matrices in the \\(A,B\\)-basis, while \\(\\lambda\\)'s are the Pauli matrices in the \\(1,2\\)-basis. This Hamiltonian can be diagonalized to get the eigenspectrum \\(E(\\vec{k})=\
u_{\\sigma}\\sqrt{(\\hbar v\\vec{k})^{2}+t_{\\perp}^{2}/2+\
u_{\\lambda }t_{\\perp}\\sqrt{(\\hbar v\\vec{k})^{2}+t_{\\perp}^{2}/4}}\\), where \\(\
u_{\\lambda},\\ \
u_{\\sigma}=\\pm 1\\). The index \\(\\sigma\\) corresponds to the bipartite pseudospin in graphene and the index \\(\\lambda\\) corresponds to the two layers of BLG.
The eigenvector at an energy \\(E\\) and momentum \\((k_{x},k_{y})\\) is:
\\[\\vec{u}(E,k_{x}) = \\frac{1}{N}\\begin{bmatrix}-t_{\\perp}E^{2}\\\\ [E^{2}-(\\hbar v\\vec{k})^{2}]E\\\\ -t_{\\perp}\\hbar vk_{-}E\\\\ \\hbar vk_{+}[E^{2}-(\\hbar v\\vec{k})^{2}]\\end{bmatrix}, \\tag{2}\\]
where \\(k_{\\pm}=k_{x}\\pm ik_{y}\\) and \\(N\\) is the normalization factor for the pseudospin such that \\(\\vec{u}^{\\dagger}\\vec{u}=1\\).
The Hamiltonian for the NM-SC junction on BLG is:
\\[H=[H_{0}-\\mu-U(x)]\\tau_{z}-E_{z}(x)s_{z}+\\Delta(x)\\tau_{x}, \\tag{3}\\]
where \\(U(x)=U_{0}\\eta(-x)\\), \\(s_{z}\\) corresponds to the real spin, \\(E_{z}(x)=E_{z0}\\eta(x)\\) is the Zeeman field and can be nonzero only on the NM side, \\(\\Delta(x)=\\Delta\\eta(-x)\\), \\(\\eta(x)\\) is the Heaviside step function, and the \\(\\tau\\)-matrices act in the particle-hole sector. The wavefunction for an electron at energy \\(E\\) (in the range: \\(|E|<\\Delta\\ll t_{\\perp}\\)) and spin \\(s\\) (\\(s=\\pm 1\\) is the eigenvalue of the operator \\(s_{z}\\)), incident from the NM side onto the SC has the form \\(\\psi_{s}(x)e^{ik_{y}y}\\), such that
\\[\\psi_{s}(x) = \\left(e^{-ik_{x}^{x}x}\\ \\vec{u}_{N,s}(\\epsilon,-k_{x}^{c})+r_{N} \\ e^{ik_{x}^{x}x}\\ \\vec{u}_{N,s}(\\epsilon,k_{x}^{c})\\right)\\begin{bmatrix}1\\\\ 0\\end{bmatrix} \\tag{4}\\] \\[+\\ r_{A}\\ e^{-ik_{x}^{h}x}\\ \\vec{v}_{N,s}(\\epsilon_{h},-k_{x}^{h}) \\begin{bmatrix}0\\\\ 1\\end{bmatrix}\\] \\[+\\ \\tilde{r}_{N}\\ e^{-\\kappa x}\\ \\vec{u}_{N,s}(\\epsilon,i\\kappa) \\begin{bmatrix}1\\\\ 0\\end{bmatrix}\\] \\[+\\ \\tilde{r}_{A}\\ e^{-\\kappa^{h}x}\\ \\vec{v}_{N,s}(\\epsilon_{h},i\\kappa^{h}) \\begin{bmatrix}0\\\\ 1\\end{bmatrix},\\ \\ \\text{for}\\ \\ x>0,\\] \\[= \\sum_{j=1}^{4}w_{j,s}\\ e^{ik_{j}^{S}x}\\ \\vec{u}_{S}(k_{j}^{S}),\\ \\ \\text{for}\\ \\ x<0,\\]
where \\(\\vec{u}_{N,s}(\\tilde{\\epsilon},k_{x})\\) and \\(\\vec{v}_{N,s}(\\tilde{\\epsilon},k_{x})\\) are the electron and hole sector eigenspinors of the Hamiltonian on the NM side [given by Eq. (2)] with \\(x\\)-component of momentum \\(k_{x}\\), and \\(\\vec{u}_{S}(k_{j}^{S})\\) is the eigenspinor on the SC side with \\(x\\)-component of momentum \\(k_{j}^{S}\\); furthermore, the \\(x\\)-component of the electron and hole momenta on the NM side are given by:
\\[\\hbar vk_{x}^{e} = sign(\\epsilon)\\sqrt{\\epsilon^{2}+2t_{\\perp}|\\epsilon|-(\\hbar vk_{y })^{2}}\\] \\[\\hbar vk_{x}^{h} = sign(\\epsilon_{h})\\sqrt{\\epsilon_{h}^{2}+2t_{\\perp}|\\epsilon_{h}|- (\\hbar vk_{y})^{2}}\\] \\[\\hbar v\\kappa = \\sqrt{(\\hbar vk_{y})^{2}+2t_{\\perp}|\\epsilon|-\\epsilon^{2}}\\] \\[\\hbar v\\kappa^{h} = \\sqrt{(\\hbar vk_{y})^{2}+2t_{\\perp}|\\epsilon_{h}|-\\epsilon_{h}^{2}}, \\tag{5}\\]
where \\(\\epsilon=(E+\\mu+sE_{z0})\\) and \\(\\epsilon_{h}=(\\mu-sE_{z0}-E)\\). On the SC side, \\(k_{j}^{S}\\) has a nonzero imaginary part at subgap energies. The complex values of \\(k_{j}^{S}\\) arise as complex conjugates and thus there are eight in all. Normalizability allows only four modes (out of eight) which have a negative imaginary part. Different values of \\(k_{j}^{S}\\) are obtained numerically from the eigenvalue-eigenvector equation. We shall employ the boundary condition that the wavefunction is continuous at \\(x=0\\) to solve for the scattering coefficients.
_Current operator and the conductance_ : From the Hamiltonian, it can be shown that the current for the NM part of the BLG has the form \\(\\vec{J}_{s}=ev\\psi_{s}^{\\dagger}(\\sigma_{x},-\\sigma_{y}\\lambda_{z})\\psi_{s}\\). The differential conductance is obtained by summing over \\(J_{s}\\) for all possible values of \\((k_{x},k_{y})\\) and \\(s=\\pm 1\\) at a given energy \\(E\\) such that the \\(x\\)-component of the velocity of the incident electron points along the \\(-\\hat{x}\\) direction. We calculate the scattering amplitudes and the conductance of the junction. The cross terms (\\(r_{N}r_{A}\\)) drop out while calculating the conductance and only the terms proportional to \\(|r_{N}|^{2}\\) and \\(|r_{A}|^{2}\\) contribute to the current. The total current is \\(\\vec{I}=\\int dk_{x}\\int dk_{y}\\sum_{s}\\vec{J}_{s}(k_{x},k_{y})\\). and the only nonzero component of \\(\\vec{I}\\) is along \\(-\\hat{x}\\) (i.e., \\(\\vec{I}=-\\hat{x}\\cdot I\\)). We are interested in calculating the conductance \\(G=dI/dV\\), which is given by the expression [17]
\\[G = \\frac{2e^{2}}{h}\\sum_{s}\\frac{W(\\mu+sE_{z0}+E+t_{\\perp}/2)}{hv} \\int_{-\\theta_{c,s}}^{\\theta_{c,s}}d\\theta\\;\\psi_{s}^{\\dagger}\\sigma_{x}\\psi_{ s},\\]
where \\(W\\) is the width of the bilayer graphene-superconductor interface and the factor of 2 is for valley degeneracy. The critical angle for spin \\(s\\) is given by \\(\\theta_{c,s}=\\sin^{-1}\\left[\\min\\left\\{(k_{h,\\bar{s}}/k_{e,s}),1\\right\\}\\right]\\) where \\(k_{h,\\bar{s}}\\) and \\(k_{e,s}\\) are the magnitudes of the momenta \\(\\vec{k}\\) in the hole band with spin \\(\\bar{s}\\) and the electron band with spin \\(s\\) (\\(\\bar{s}\\) is opposite to \\(s\\)) at energy \\(E=eV_{bias}\\) respectively.
## III Results
Results of the conductance calculation for two choices of parameters have been plotted as contour plots in Figs. 2 (a) and 2 (b). We discuss the features observed in the contour plots below.
_Zero Zeeman field_ : In Fig. 2 (a), a dominant feature is two dark-thick lines that appear along the diagonals: \\(eV_{bias}=\\pm\\mu\\). These correspond to one of the two electron Fermi surfaces participating in Andreev reflection at \\(eV_{bias}=\\pm\\mu\\) having zero circumference. The lines \\(eV_{bias}=\\pm\\mu\\) correspond to crossover from retro- to specular- Andreev reflection. Another feature is that there are two islands of light-blue color around \\(\\mu=0,eV_{bias}\\sim\\pm 0.8\\Delta\\). This corresponds to specular Andreev reflection since the two electrons participating in the Andreev reflection come from above and below the CNP. All the data-points in the region \\(|eV_{bias}|>|\\mu|\\) correspond to specular Andreev reflection. Similarly, all the data points in the region \\(|eV_{bias}|<|\\mu|\\) correspond to retro Andreev reflection. We also notice an asymmetry in \\(\\mu\\rightarrow-\\mu\\), which is due to a finite \\(U_{0}\\). These results and the discussion agree with that in Ref. [15].
_Nonzero Zeeman field_ : In Fig. 2 (b), the Zeeman field in the normal metal region \\(E_{z0}\\) is chosen to be \\(0.5\\Delta\\). The striking features of this contour plot are: (i) three light blue islands, two of which are located around \\(\\mu=0,eV_{bias}\\sim\\pm 0.8\\Delta\\) and one located around \\(\\mu=0,eV_{bias}\\sim 0\\), and (ii) two dark blue patches located around \\(\\mu=0.5\\Delta,eV_{bias}\\sim 0\\).
To understand the features of Fig. 2 (b), let us define different points on the contour plot: \\(A=(-0.5\\Delta,0)\\), \\(B=(0,0.5\\Delta)\\), \\(C=(0.5\\Delta,0)\\), \\(D=(0,-0.5\\Delta)\\), \\(P=(-\\Delta,0.5\\Delta)\\), \\(Q=(-\\Delta,-0.5\\Delta)\\), \\(R=(\\Delta,0.5\\Delta)\\) and \\(S=(\\Delta,-0.5\\Delta)\\) [each of these points is written in the form \\((eV_{bias},\\mu)\\) ]. Now, within the diamond \\(ABCDA\\), both the electrons contributing to Andreev reflection lie on different sides of the charge neutrality point. So, Andreev reflection is specular within this diamond. Also, in the triangles \\(PAQ\\) and \\(RCS\\) the two electrons contributing to Andreev reflection lie on different sides of the CNP. Hence, Andreev reflection is specular in these regions. Outside of the two triangles and the diamond, the two electrons contributing to Andreev reflection lie on the same side of the charge neutrality point. Hence, in these regions, Andreev reflection is retro. In each of the two dark blue patches around the points \\(B\\) and \\(D\\) the data points are in proximity to CNP for both the electrons participating in the Andreev reflection. Since the size of the Fermi surface approaches zero as one tends to the CNP, the conductance is suppressed around points \\(B\\) and \\(D\\). In contrast, along the lines \\(PA\\), \\(QA\\), \\(AB\\), \\(BC\\), \\(CD\\), \\(DA\\), \\(RC\\) and \\(CS\\) away from the points \\(B\\) and \\(D\\), data points for only one of the two participating electrons (in Andreev reflection) is at the charge neutrality point.
More generally, for a given choice of \\(E_{z0}\\), the diamond \\(ABCDA\\) is formed by the points \\(A=(-E_{z0},0)\\), \\(B=(0,E_{z0})\\), \\(C=(E_{z0},0)\\), and \\(D=(0,-E_{z0})\\), and the points \\(P=(-\\Delta,\\Delta-E_{z0})\\), \\(Q=(-\\Delta,-\\Delta+E_{z0})\\), \\(R=(\\Delta,\\Delta-E_{z0})\\), and \\(S=(\\Delta,-\\Delta+E_{z0})\\) form the triangles \\(PAQ\\) and \\(RCS\\). Hence, in the case when \\(E_{z0}=0\\), the diamond \\(ABCDA\\) has zero area as can be seen in Fig. 2 (a). And the regions inside the two triangles \\(PAQ\\) and \\(RCS\\) are described by the inequalities \\(-(eV_{bias}+E_{z0})>|\\mu|\\) and \\((eV_{bias}-E_{z0})>|\\mu|\\), respectively. These are the regions where the Andreev reflection is specular. Outside these regions, the Andreev reflection is retro.
Zero bias cuts of Figs. 2(a) and 2(b) have been plotted in Fig. 2 (c). These clearly show that around the CNP, the zero-bias conductance is enhanced under an applied Zeeman field, while in the case of zero Zeeman field, the zero bias conductance is suppressed.
_Choice of the parameter \\(U_{0}\\)_ : Previously, we chose \\(U_{0}=\\Delta\\) so as to allow for significant conductance despite accounting for a work function mismatch [modeled by the step function \\(U(x)\\)]. Now, we examine the features of the conductance spectrum for different choices of \\(U_{0}\\) and make a connection to previous works.
The step height \\(U_{0}\\) essentially captures the junction transparency. For larger magnitudes of \\(U_{0}\\), the junction is less transparent and has a high resistance. We can see from Fig. 3 that for larger values of \\(U_{0}\\), the features of crossover from retro- to specular- Andreev reflection discussed earlier get blurred. From the works of Efetov et al.[14; 15], we note that when NbSe\\({}_{2}\\) is used as the superconductor on top of the BLG, the parameters are \\(U_{0}\\)=5 meV and \\(\\Delta\\)=1.2 meV. This closely corresponds to Fig. 3 (d) and we see that the features of the crossover from retro- to specular- Andreev reflection begin to vanish for the value of \\(U_{0}=5\\Delta\\). To see the features for higher values of \\(U_{0}\\), we plot the conductance on a logarithmic scale in Fig.4. We see that the features discussed earlier vanish smoothly over the values of \\(U_{0}=5\\Delta,10\\Delta,100\\Delta\\), and \\(t_{\\perp}\\), except for two dips at \\((eV_{bias},\\mu)=(0,\\pm E_{z0})\\). However, the dips correspond to orders of magnitude smaller conductance. Thus, we find that a transparent junction is very crucial to observing the features of crossover from retro- to specular- Andreev reflection.
## IV Comparative analysis of the results replacing the superconductor with normal metal
In this section, we discuss the results of the system, where superconductivity in the system is absent, and make comparison to the results with the system containing superconductivity. We denote the part of the system having a nonzero Zeeman field by F (ferromagnet), and N refers to the normal metal part which has no Zeeman field. \\(\\Delta(x)=0\\) for all \\(x\\) in the NF junction. The calculation for the NF junction is presented in Appendix A. As can be seen from the calculations, the bias \\(eV_{bias}\\) and the chemical potential \\(\\mu\\) enter the equations as \\((eV_{bias}+\\mu)\\). Hence, the conductance depends only on the linear combination \\((eV_{bias}+\\mu)\\) in the contour plot which is apparent in Fig. 5.
In Fig. 6, the conductance is plotted as a function of \\((eV_{bias}+\\mu)\\), for different values of step height \\(U_{0}\\). For \\(U_{0}=0\\), the conductance goes to zero at \\((\\mu+eV_{bias})=0\\), since the size of the Fermi surface on the normal metal side goes to zero, and there are no momentum modes to carry the current. For finite values of \\(U_{0}\\), the situation changes since at \\((eV_{bias}+\\mu)=0\\), the Fermi surface hasa finite size, and the current can flow from the F-side to the N-side. The asymmetry around \\((\\mu+eV_{bias})=0\\) is because of a finite value of \\(U_{0}\\).
Now, we turn to the comparison of conductances of different systems (NN, NF, SN, and SF) for a given choice of \\(U_{0}\\) and other parameters. For \\(U_{0}=0\\) (Fig. 7, top), all the curves are symmetric, while for \\(U_{0}=\\Delta\\) the curves are not symmetric (except for SN and SF). For SF, the minima at \\(eV_{bias}=\\pm E_{z0}\\) and maximum at \\(eV_{bias}=0\\) are due to the dispersions displaced due to Zeeman fields. This bump is where the specular Andreev reflection is enhanced by the Zeeman field. For NN, NF, and SN in the case \\(U_{0}=0\\), the conductance is zero at \\(eV_{bias}=0\\), which is due to zero size of the Fermi surface of the N region. When \\(U_{0}=\\Delta\\), (see Fig. 7, bottom) the size of Fermi surface is nonzero in the N region to the left in the NN and NF configurations, and there is a finite conductance even at \\(eV_{bias}=0\\) for the NF configuration.
Now, we compare different curves in the bottom panel of Fig. 7. For NN and SN configurations, the N-region for \\(x>0\\) has zero sized Fermi surface at zero bias. Hence the conductance at zero bias is zero (despite a nonzero sized Fermi surface in the region \\(x<0\\) for NN). Now, when we turn to the case of NF, the Fermi surfaces on both sides of the junction at \\(eV_{bias}=0\\) have nonzero size. Hence, the conductance is finite around \\(eV_{bias}=0\\). The conductance for NF approaches zero as \\(eV_{bias}\\rightarrow\\Delta\\) since the size of Fermi surface approaches zero on the N-side of the junction as we have chosen \\(U_{0}=\\Delta\\). For the case of SF, the conductance is nonzero in the entire range shown since the size of the Fermi surface on F-side is always nonzero due to a finite value of the Zeeman field (\\(E_{z0}=0.5\\Delta\\)), and on the \\(\\mathbf{S}\\)-side there is superconducting gap which favors Andreev reflection. Finally, the conductances in the lower panel are smaller than those in the upper panel since the step height \\(U_{0}\\) is zero in the upper panel and is \\(\\Delta\\) in the lower panel, reducing the transparency of the junctions studied in the lower panel.
## V Experimental relevance
To implement our scheme experimentally, it is important to apply a Zeeman field in the NM part of the junction. An in-plane magnetic field which is less than the critical field to kill the superconductivity of the SC part
Figure 7: Conductance \\(Ghv/W\\) in units of \\(t_{\\perp}2e^{2}/h\\) is plotted for different configurations of the setup: normal-normal (NN), normal-ferromagnet (NF), superconductor-normal (SN), and superconductor-ferromagnet (SF). See text for further information. Top: \\(U_{0}=0\\), bottom: \\(U_{0}=\\Delta\\). Parameters: \\(E_{z0}=0.5\\Delta\\) (for F), where \\(\\Delta=0.003t_{\\perp}\\) and \\(\\mu=0\\) (for all curves).
in the system will achieve this. Another way to implement a Zeeman field is to bring a ferromagnetic insulator in proximity to the NM-side of the junction. It has been shown that ferromagnetism can be induced in graphene by such proximity coupling with several materials such as EuO, YIG and EuS [19; 20; 21].
A typical sample will have a disorder which manifests as Fermi energy broadening \\(\\delta\\epsilon_{F}\\). This means that the BLG sample must be of a sufficiently high quality so that the Fermi energy broadening \\(\\delta\\epsilon_{F}\\) is small (\\(\\delta\\epsilon_{F}\\ll\\Delta\\)). Furthermore, observing the features of crossover for a fixed bias \\(eV_{bias}\\ll\\Delta\\) as \\(\\mu\\) is varied is important as the quasiparticle contribution to transport is the least in this regime. In addition, a finite temperature will result in thermal broadening and hence, performing the experiment at a low temperature is necessary to observe the features discussed here. The temperature has to be low compared to both the superconducting gap (\\(\\sim 14\\ K\\) in NbSe\\({}_{2}\\)[14]) and Zeeman energy (\\(\\sim 10\\ K\\)). Experimentally, reaching temperatures of about \\(100\\ mK\\) is possible and hence temperature does not pose a hindrance to implementing our scheme in realistic systems.
In a realistic system, the work function mismatch between the NM and SC regions can result in the formation of a NM region having a length-scale \\(a\\) at the interface as discussed in Ref. [22]. Also, from the value of the work functions of NbSe\\({}_{2}\\) and BLG, the step height \\(U_{0}\\) is chosen to be \\(1eV>t_{\\perp}\\) in Ref. [22] in contrast to the limit \\(U_{0}\\ll t_{\\perp}\\) in Ref. [14; 15] where the value of \\(U_{0}\\) is chosen to match the experimental results. Our calculations combined with the choice of \\(U_{0}\\) in Ref. [14; 15] point to a small value of \\(a\\) (\\(a\\ll 100nm\\)) in contrast to the assertion made in Ref. [22]. This means that the effects of a p-n junction formed at the NM-SC interface may be negligible. In Appendix B, we study the effect of having a finite \\(a\\) and show that it can be negligible.
## VI Summary and conclusion
We have studied Andreev reflection at a junction of bilayer graphene and a superconductor. Since our main objective has been to observe the enhanced signatures of specular Andreev reflection, we introduce a Zeeman field and study the features on a contour plot of conductance versus chemical potential and bias voltage when these two energy scales are less than the superconducting gap. We find that a finite Zeeman field produces a diamond shaped region at the center where the Andreev reflection is purely specular. Furthermore, the lines bordering the diamond shaped region and two patches around the low bias region at the corners of the diamond show a low conductance, where the crossover from specular- to retro- type Andreev reflection occurs. Importantly, we find that for a barrier step-height that is of the same order of magnitude as the superconducting gap, the features of the crossover from retro- to specular- Andreev reflection are observable and for a barrier step-height much larger than the superconducting gap, the features vanish except for small regions of low conductance at \\((eV_{bias},\\mu)=(0,\\pm E_{z0})\\). We have also analyzed the relative contributions from normal state conductance, where the superconductivity is switched off. Furthermore, we have discussed how our calculations can be tested in an experimental system.
###### Acknowledgements.
AD thanks Nanomission, Department of Science and Technology (DST) for the financial support under grants - DSTO1470 and DSTO1597. AS thanks DST Nanomission (DSTO1597) for funding. SM thanks the Indo-Israeli UGC-ISF project for funding.
## Appendix A
In this section, we give details of the calculation for the system comprising of a Zeeman field induced ferromagnetic region in contact with the normal metal region. This is simply the limit of the NM-SC junction described by Eq. (3) where \\(\\Delta(x)=0\\) for all \\(x\\). The wavefunction for an electron incident on the junction from \\(x>0\\) onto \\(x<0\\), with energy \\(E\\) has the form \\(\\phi_{s}(x)e^{ik_{y}y}\\), where
\\[\\phi_{s}(x) = e^{-ik_{x}^{e}x}\\ \\vec{u}_{N,s}(\\epsilon,-k_{x}^{e})+r_{N}\\ e^{ik_{x}^{e} x}\\ \\vec{u}_{N,s}(\\epsilon,k_{x}^{e}) \\tag{7}\\] \\[+\\ \\tilde{r}_{N}\\ e^{-\\kappa x}\\ \\vec{u}_{N,s}(\\epsilon,i\\kappa),\\ \\ \\mbox{for}\\ \\ x>0\\] \\[= t_{N}e^{-i\\tilde{k}_{x}^{e}x}\\ \\vec{u}_{N,s}(\\tilde{\\epsilon},- \\tilde{k}_{x}^{e})+\\ \\tilde{t}_{N}e^{\\tilde{k}x}\\ \\vec{u}_{N,s}(\\tilde{\\epsilon},-i \\tilde{\\kappa})\\] \\[\\ \\ \\mbox{for}\\ \\ x<0.\\]
Here, \\(\\epsilon=E+\\mu+sE_{z0}\\), \\(\\tilde{\\epsilon}=E+\\mu+U_{0}\\), \\(k_{y}=\\sqrt{\\epsilon^{2}+t_{\\perp}|\\epsilon|}\\sin\\theta/(\\hbar v)\\) (\\(\\theta\\) is the angle of incidence so that the normal incidence corresponds to \\(\\theta=0\\)),
\\[k_{x}^{e} = sign(\\epsilon)\\sqrt{\\epsilon^{2}+t_{\\perp}|\\epsilon|-(\\hbar vk_{ y})^{2}}/(\\hbar v),\\] \\[\\kappa = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\epsilon|-\\epsilon^{2}}/(\\hbar v),\\] \\[\\tilde{k}_{x}^{e} = sign(\\tilde{\\epsilon})\\sqrt{\\tilde{\\epsilon}^{2}+t_{\\perp}| \\tilde{\\epsilon}|-(\\hbar vk_{y})^{2}}/(\\hbar v),\\] \\[\\mbox{and}\\ \\ \\tilde{\\kappa} = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\tilde{\\epsilon}|-\\tilde{ \\epsilon}^{2}}/(\\hbar v).\\]
Now, using the boundary condition, which is continuity of the wavefunction at \\(x=0\\), one can determine the scattering amplitudes \\(r_{N}\\), \\(\\tilde{r}_{N}\\), \\(t_{N}\\), and \\(\\tilde{t}_{N}\\). With this, the wavefunction is determined and using a formula similar to Eq. (6), the conductance can be calculated.
## Appendix B
In this part, we study the effect of having a finite region of length \\(a\\) on the NM part of the junction where \\(0\\). The Hamiltonian has the same form as in Eq. (3), except for two changes: \\(U(x)=U_{0}\\eta(a-x)\\) and \\(E_{z}(x)=E_{z0}\\eta(x-a)\\), where \\(\\eta(x)\\) is a Heavyside step function. The wavefunction for an electron at energy \\(E\\) (in the range: \\(|E|<\\Delta\\ll t_{\\perp}\\)) and spin \\(s\\) (\\(s=\\pm 1\\) is the eigenvalue of the operator \\(s_{z}\\)), incident from the NM side onto the SC has the form \\(\\psi_{s}(x)e^{ik_{y}y}\\), such that
\\[\\psi_{s}(x) = \\left(e^{-ik_{x}^{x}x}\\ \\vec{u}_{N,s}(\\epsilon,-k_{x}^{\\epsilon})+r_{N }\\ e^{ik_{x}^{x}x}\\ \\vec{u}_{N,s}(\\epsilon,k_{x}^{\\epsilon})\\right)\\begin{bmatrix}1\\\\ 0\\end{bmatrix} \\tag{9}\\] \\[+\\ r_{A}\\ e^{-ik_{x}^{x}x}\\ \\vec{v}_{N,s}(\\epsilon_{h},-k_{x}^{h}) \\begin{bmatrix}0\\\\ 1\\end{bmatrix}\\] \\[+\\ \\tilde{r}_{N}\\ e^{-\\kappa x}\\ \\vec{u}_{N,s}(\\epsilon,i\\kappa) \\begin{bmatrix}1\\\\ 0\\end{bmatrix}\\] \\[+\\ \\tilde{r}_{A}\\ e^{-\\kappa^{x}x}\\ \\vec{v}_{N,s}(\\epsilon_{h},i \\kappa^{h})\\begin{bmatrix}0\\\\ 1\\end{bmatrix},\\ \\ \\text{for}\\ \\ x>a,\\] \\[= \\left(s_{e-}\\ e^{-ik_{x}^{\\epsilon^{\\prime}}x}\\vec{u}_{N^{\\prime},s}(\\epsilon^{\\prime},-k_{x}^{\\epsilon^{\\prime}})\\right.\\] \\[\\left.+s_{e+}\\ e^{ik_{x}^{\\epsilon^{\\prime}}x}\\vec{u}_{N^{\\prime},s}(\\epsilon^{\\prime},k_{x}^{\\epsilon^{\\prime}})\\right)\\begin{bmatrix}1\\\\ 0\\end{bmatrix}\\] \\[+\\Big{(}s_{h-}\\ e^{-ik_{x}^{h^{\\prime}}x}\\vec{v}_{N^{\\prime},s}( \\epsilon^{\\prime},-k_{x}^{h^{\\prime}})\\] \\[+s_{h+}\\ e^{ik_{x}^{h^{\\prime}}x}\\vec{v}_{N^{\\prime},s}( \\epsilon^{\\prime},k_{x}^{h^{\\prime}})\\Big{)}\\begin{bmatrix}0\\\\ 1\\end{bmatrix}\\] \\[+\\Big{(}\\tilde{s}_{e-}\\ e^{-\\kappa_{x}^{\\epsilon^{\\prime}}x}\\vec{u }_{N^{\\prime},s}(\\epsilon^{\\prime},i\\kappa_{x}^{\\epsilon^{\\prime}})\\] \\[+\\tilde{s}_{e+}\\ e^{\\kappa_{x}^{\\epsilon^{\\prime}}x}\\vec{u}_{N^{ \\prime},s}(\\epsilon^{\\prime},-i\\kappa_{x}^{\\epsilon^{\\prime}})\\Big{)}\\begin{bmatrix} 1\\\\ 0\\end{bmatrix}\\] \\[+\\Big{(}\\tilde{s}_{h-}\\ e^{-\\kappa_{x}^{h^{\\prime}}x}\\vec{v}_{N^ {\\prime},s}(\\epsilon^{\\prime},i\\kappa_{x}^{h^{\\prime}})\\] \\[+\\tilde{s}_{h+}\\ e^{\\kappa_{x}^{h^{\\prime}}x}\\vec{v}_{N^{\\prime},s}(\\epsilon^{\\prime},-i\\kappa_{x}^{h^{\\prime}})\\Big{)}\\begin{bmatrix}0\\\\ 1\\end{bmatrix},\\ \\text{for}\\ \\ 0<x<a,\\] \\[= \\sum_{j=1}^{4}w_{j,s}\\ e^{ik_{j}^{S}x}\\ \\vec{u}_{S}(k_{j}^{S}),\\ \\ \\text{for}\\ \\ x<0,\\]
where \\(\\vec{u}_{N,s}(\\tilde{\\epsilon},k_{x})\\) and \\(\\vec{v}_{N,s}(\\tilde{\\epsilon},k_{x})\\) are the electron- and hole- sector eigenspinors of the Hamiltonian on the NM side [given by Eq. (2)] with \\(x\\)-component of momentum \\(k_{x}\\), and \\(\\vec{u}_{S}(k_{j}^{S})\\) is the eigenspinor on the SC side with \\(x\\)-component of momentum \\(k_{j}^{S}\\). Furthermore, the \\(x\\)-component of electron and hole momenta on the NM side are given by:
\\[\\hbar vk_{x}^{e} = sign(\\epsilon)\\sqrt{\\epsilon^{2}+t_{\\perp}|\\epsilon|-(\\hbar vk_{ y})^{2}},\\] \\[\\hbar vk_{x}^{h} = sign(\\epsilon_{h})\\sqrt{\\epsilon_{h}^{2}+t_{\\perp}|\\epsilon_{h}| -(\\hbar vk_{y})^{2}},\\] \\[\\hbar v\\kappa = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\epsilon|-\\epsilon^{2}},\\] \\[\\hbar v\\kappa^{h} = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\epsilon|_{h}|-\\epsilon_{h}^{2}},\\] \\[\\hbar vk_{x}^{e^{\\prime}} = sign(\\epsilon^{\\prime})\\sqrt{\\epsilon^{\\prime 2}+t_{\\perp}|\\epsilon^{ \\prime}|-(\\hbar vk_{y})^{2}},\\] \\[\\hbar vk_{x}^{h^{\\prime}} = sign(\\epsilon^{\\prime}_{h})\\sqrt{\\epsilon_{h}^{\\prime 2}+t_{\\perp}| \\epsilon^{\\prime}_{h}|-(\\hbar vk_{y})^{2}},\\] \\[\\hbar v\\kappa_{x}^{e^{\\prime}} = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\epsilon^{\\prime}|-\\epsilon^{ \\prime 2}},\\] \\[\\hbar v\\kappa_{x}^{h^{\\prime}} = \\sqrt{(\\hbar vk_{y})^{2}+t_{\\perp}|\\epsilon^{\\prime}_{h}|-\\epsilon _{h}^{2}}, \\tag{10}\\]
where \\(\\epsilon=(E+\\mu+sE_{z0})\\), \\(\\epsilon_{h}=(\\mu-sE_{z0}-E)\\), \\(\\epsilon^{\\prime}=(E+\\mu+U_{0})\\) and \\(\\epsilon^{\\prime}_{h}=(\\mu+U_{0}-E)\\). The continuity of \\(\\psi_{s}(x)\\) at \\(x=0\\) and \\(x=a\\) in total give 16 equations for 16 scattering amplitudes to be solved. Then, the conductance is calculated using Eq. (6).
First, the conductance is calculated for \\(E_{z0}=0\\), for various values of \\(a\\) and a fixed value of \\(U_{0}=\\Delta\\) in Fig. 8. It can be seen that for higher values of \\(a\\), Fabry-Perot type oscillations [18] are observed in the conductance spectra. Comparing this with the experimental results in Ref. [14], the absence of conductance oscillations there suggests that in a realistic system, \\(a\\) is small (\\(a\\ll 50\\hbar v/t_{\\perp}\\)).
Next, we study the case of \\(U_{0}=5t_{\\perp}\\) (discussed in Ref. [22]) keeping \\(E_{z0}=0\\) in Fig. 9 for different values of \\(a\\). We see that for larger values of \\(a\\) (\\(a>100\\hbar v/t_{\\perp}\\)), there are Fabry-Perot type oscillations in conductance. Comparing these with the experimental results in Ref. [14],
Figure 8: Conductance spectra for the choice of parameters \\(a=10,50,100,150\\) (in units of \\(\\hbar v/t_{\\perp}\\)) for top-left, top-right, bottom-left, bottom-right respectively. \\(x\\)-axis is \\(eV_{bias}/\\Delta\\) and \\(y\\)-axis is \\(\\mu/\\Delta\\). Parameters: \\(\\Delta=0.003t_{\\perp}\\), \\(U_{0}=\\Delta\\) and \\(E_{z0}=0\\).
we see that \\(a\\) must be small (\\(a\\ll 100\\hbar v/t_{\\perp}\\)). While the precise values of \\(U_{0}\\) and \\(a\\) are unknown in a realistic system, our results suggest that \\(U_{0}\\sim\\Delta\\) and \\(a\\lesssim 10\\hbar v/t_{\\perp}\\). Furthermore, this limit of \\(U_{0}\\) and \\(a\\) is important to observe the features of the crossover from retro to specular Andreev reflection in a system with finite \\(E_{z0}\\).
Now, we turn to the case of \\(E_{z0}=0.5\\Delta\\). In Fig. 10, we see how the conductance spectrum changes as \\(a\\) is changed keeping \\(U_{0}=\\Delta\\) fixed. The features of crossover still remain, but there are oscillations in the conductance spectrum due to Fabry-Perot type interference, which occur due to modes in the region \\(0<x<a\\). The two dark regions of low conductance around the points \\(B\\) and \\(D\\), and the dark lines \\(PD\\) and \\(DS\\) remain. Furthermore, the dark lines \\(BA\\) and \\(BC\\) remain, while the dark lines along \\(AQ\\) and \\(CS\\) vanish. It is not possible to distinguish the Fabry-Perot oscillations in the conductance spectrum from the crossover from specular to retro Andreev reflection, but with a knowledge of \\(E_{z0}\\) and \\(\\Delta\\) the points: \\(P,Q,R,S,A,B,C,\\) and \\(D\\) in the conductance spectrum can be identified, thereby finding the crossover lines.
## References
* (1) A. F. Andreev, \"The Thermal Conductivity of the Intermediate State in Superconductors\", J. Exp. Theor. Phys. 19, 1228 (1964).
* (2) G. E. Blonder, M. Tinkham, and T. M. Klapwijk, \"Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion\", Phys. Rev. B 25, 4515 (1982).
* (3) A. Kastalsky, A. W. Kleinsasser, L. H. Greene, R. Bhat, F. P. Milliken, and J. P. Harbison, \"Observation of pair currents in superconductor-semiconductor contacts\", Phys. Rev. Lett. 67, 3026 (1991).
* (4) K. S. Novoselov, A.K. Geim, S.V. Morozov, D. Jiang, Y. Zhang, S.V. Dubonos, I.V. Grigorieva, and A.A. Firsov, \"Electric Field Effect in Atomically Thin Carbon Films\", Science 306, 666 (2004).
* (5) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, \"The electronic properties of graphene\", Rev. Mod. Phys. 81, 109 (2009).
* (6) A. V. Rozhkov, A. O. Sboychakov, A. L. Rakhmanov, F. Nori, \"Electronic properties of graphene-based bilayer systems\", Physics Reports 648, 1-104 (2016).
* (7) E McCann and M Koshino, \"The electronic properties of bilayer graphene\", Rep. Prog. Phys. 76, 056503 (2013).
* (8) S. Bhattacharjee and K. Sengupta, \"Tunneling Conductance of Graphene NIS Junctions\", Phys. Rev. Lett. 97, 217001 (2006).
* (9) C. W. J. Beenakker, \"Specular Andreev Reflection in Graphene\", Phys. Rev. Lett. 97, 067007 (2006).
* (10) C. Benjamin and J. K. Pachos, \"Detecting entangled states in graphene via crossed Andreev reflection\", Phys. Rev. B 78, 235403 (2008).
* (11) L. Majidi, and M. Zareyan, \"Enhanced Andreev reflection in gapped graphene \", Phys. Rev. B 86, 075443 (2012).
* (12) D. Rainis, F. Taddei, F. Dolcini, M. Polini, and R. Fazio, \"Andreev reflection in graphene nanoribbons\", Phys. Rev. B 79, 115131 (2009).
* NbSe\\({}_{2}\\) junction\", Phys. Rev. B 94, 235451 (2016).
* (14) D. K. Efetov, L. Wang, C. Handschin, K. B. Efetov, J. Shuang, R. Cava, T. Taniguchi, K. Watanabe, J. Hone, C. R. Dean and P. Kim, \"Specular interband Andreev reflections at van der Waals interfaces between graphene and NbSe\", Nat. Phys. 12, 328 (2016).
* (15) D. K. Efetov and K. B. Efetov, \"Cross-over from retro to specular Andreev reflections in bilayer graphene\", Phys.
Figure 10: Conductance spectra for the choice of parameters \\(a=10,50,100\\), and \\(150\\) (in units of \\(\\hbar v/t_{\\perp}\\)) for top-left, top-right, bottom-left, and bottom-right panels respectively. The \\(x\\)-axis is \\(eV_{bias}/\\Delta\\) and the \\(y\\)-axis is \\(\\mu/\\Delta\\). Parameters: \\(\\Delta=0.003t_{\\perp}\\), \\(U_{0}=\\Delta\\) and \\(E_{z0}=0.5\\Delta\\).
Figure 9: Conductance spectra for the choice of parameters \\(a=10,100,500\\), and \\(1000\\) (in units of \\(\\hbar v/t_{\\perp}\\)) for top-left, top-right, bottom-left, and bottom-right panels respectively. The \\(x\\)-axis is \\(eV_{bias}/\\Delta\\) and the \\(y\\)-axis is \\(\\mu/\\Delta\\). Parameters: \\(\\Delta=0.003t_{\\perp}\\), \\(U_{0}=5t_{\\perp}\\) and \\(E_{z0}=0\\).
Rev. B 94, 075403 (2016).
* (16) T. Ludwig, \"Andreev reflection in bilayer graphene\", Phys. Rev. B 75, 195322 (2007).
* (17) R. Landauer, \" Spatial Variation of Currents and Fields Due to Localized Scatterers in Metallic Conduction\" IBM J. Res. Dev. 1, 223 (1957); R. Landauer, \"Electrical resistance of disordered one-dimensional lattices\", Philos. Mag. 21, 863 (1970); M. Buttiker, Y. Imry, R. Landauer, and S. Pinhas, \"Generalized many-channel conductance formula with application to small rings\", Phys. Rev. B 31, 6207 (1985); M. Buttiker, \"Four-Terminal Phase-Coherent Conductance\", Phys. Rev. Lett. 57, 1761 (1986); S. Datta, _Electronic Transport in Mesoscopic Systems_ (Cambridge University Press, Cambridge, 1995).
* (18) A. Soori, S. Das and S. Rao, \"Magnetic field induced Fabry- Perot resonances in helical edge states\", Phys. Rev. B 86, 125312 (2012).
* (19) H. Haugen, D. Huertas-Hernando, and A. Brataas, \"Spin transport in proximity-induced ferromagnetic graphene\", Phys. Rev. B 77, 115406 (2008).
* (20) Z. Wang, C. Tang, R. Sachs, Y. Barlas, and J. Shi, \"Proximity-Induced Ferromagnetism in Graphene Revealed by the Anomalous Hall Effect\", Phys. Rev. Lett. 114, 016603 (2015).
* (21) P. Wei, S. Lee, F. Lemaitre, L. Pinel, D. Cutaia, W. Cha, F. Katmis, Y. Zhu, D. Heiman, J. Hone, J. S. Moodera and C.-T. Chen, \"Strong interfacial exchange field in the graphene/EUS heterostructure\", Nat. Mater. 15, 711-716 (2016).
* (22) Y. Takane, K. Yarimizu, and A. Kanda, \"Andreev Reflection in a Bilayer Graphene Junction: Role of Spatial Variation of the Charge Neutrality Point\", J. Phys. Soc. Jpn. 86, 064707 (2017). | Andreev reflection in graphene is special since it can be of two types, retro or specular. Specular Andreev reflection (SAR) dominates when the position of the Fermi energy in graphene is comparable to or smaller than the superconducting gap. Bilayer graphene (BLG) is an ideal candidate to observe the crossover from retro to specular since the Fermi energy broadening near the Dirac point is much weaker compared to monolayer graphene. Recently, the observation of signatures of SAR in BLG have been reported experimentally by looking at the enhancement of conductance at finite bias near the Dirac point. However, the signatures were not very pronounced possibly due to the participation of normal quasiparticles at bias energies close to the superconducting gap. Here, we propose a scheme to observe the features of enhanced SAR even at zero bias at a normal metal (NM)-superconductor (SC) junction on BLG. Our scheme involves applying a Zeeman field to the NM side of the NM-SC junction on BLG (making the NM ferromagnetic), which energetically separates the Dirac points for up-spin and down-spin. We calculate the conductance as a function of chemical potential and bias within the superconducting gap and show that well-defined regions of specular- and retro-type Andreev reflection exist. We compare the results with and without superconductivity. We also investigate the possibility of the formation of a p-n junction at the interface between the NM and SC due to a work function mismatch. | Give a concise overview of the text below. | 314 |
arxiv-format/2309_04549v1.md | # Poster: Making Edge-assisted LiDAR Perceptions Robust to Lossy Point Cloud Compression
Jin Heo13, Gregoire Phillips24, Per-Erik Brodin25, Ada Gavrilovska12
1 Georgia Institute of Technology, Atlanta, Georgia, USA
2 Ericsson Research, Santa Clara, California, USA
Email: [email protected], [email protected], [email protected], [email protected]
## I Introduction
A light detection and ranging (LiDAR) sensor enables 3D sensing capability. It emits laser beams and generates a point cloud by measuring the time taken to receive the reflected light pulses. Since a LiDAR sensor allows using 3D environmental information and is more robust to light and weather conditions than 2D camera sensors [1], it has been widely applied to self-driving cars and robotics. In the past, LiDAR sensors were so costly and big in size that few device platforms leveraged them [2]. Recently, LiDAR sensors are more available to mobile devices because they are getting smaller, cost-effective, and low-power [3, 4, 5]. So, there are more opportunities for mobile devices to utilize this 3D sensing capability in diverse use cases, _e.g._, extended reality (XR) and 3D reconstruction.
While a LiDAR sensor is getting prevalent on mobile devices, its usage for real-time perceptions such as 3D object detection and simultaneous localization and mapping is restricted due to the prohibitive computational costs of LiDAR perception algorithms [6, 7, 8, 9, 10]. In this situation, edge computing can be a technology for enabling computationally intensive LiDAR perceptions to mobile users. Commodity devices can offload LiDAR perceptions on the edge, and offloading LiDAR perceptions ensures lower processing time and cost for the perception algorithms [11, 12, 13].
When running the LiDAR perceptions on the edge in real-time, an efficient point cloud compression (PCC) method is necessary because of the large volume of raw point clouds. Additionally, the PCC method should be lightweight to operate on mobile devices and low latency for the latency-performance tradeoff of real-time perceptions; the higher end-to-end latency causes larger discrepancies between the perception result and the real-world environment [14]. In our recent work, we showed that existing PCC methods [15, 16, 17, 18] are hardly suitable for remote real-time perceptions and presented a fast and lightweight PCC method, FLiCR [19].
Although FLiCR achieved the low-latency and efficiency requirements, it compromised the data quality due to use of lossy compression, and caused the perception performance loss. To mitigate this perception performance loss, we are developing a lightweight and low-latency interpolation algorithm to restore the lost points in a point cloud. Our algorithm is based on the range image (RI) representation of a point cloud, which maps 3D points into a 2D frame. Since manipulations at the 2D frame can cause unexpected distortions in 3D space, the interpolation algorithm needs to be specialized for RIs of LiDAR point clouds.
## II Range Image Interpolation and Preliminary Results
A LiDAR point cloud is an unstructured point cloud having an arbitrary number of 3D points. The existing PCC methods convert the raw point cloud into the intermediate representations (IR), _e.g._, range image (depth map), octree, k-d tree, and mesh, and compress the IRs. Lossy compression methods achieve a higher compression ratio than lossless compression by reducing the level of detail (LoD) at the IR level and losing points in the point cloud. Among the IRs, the range image (RI) has a low-latency benefit with the simplicity of its conversion process [19] as it is generated by converting the points in the 3D Cartesian coordinates to the spherical coordinates and mapping the converted points into a 2D image. Each pixel of a RI has a depth value. By presenting a point cloud as an image, it becomes possible to leverage the existing image-processing techniques. Our work is motivated by the observation that the pixel interpolation at the RI level generates the interpolated points in 3D space, and it can be used to relieve the performance degradation of LiDAR perceptions on the edge by restoring the lost points by lossy compression.
By using the existing image interpolation algorithms, we upscale a RI and reconstruct point clouds from the upscaled RIs. Figure 1 shows the reference and interpolated RIs. The reference RI of 1024\\(\\times\\)64 is from a point cloud in the KITTI dataset [20]. Figure 1c is a part of the upscaled RI (2048\\(\\times\\)64)by bilinear interpolation. Along with bilinear interpolation, we upscale the reference RI with bicubic and Lanczos interpolations. For the interpolated RIs, we measure the image quality metrics of DSS [21], FSIM [22], SSIM [23], VSI [24], and SR-SIM [25]. Table I shows the image quality results of the interpolated RIs with respect to the reference RI. The results show the interpolated RIs are objectively good-quality images to the reference RIs.
Although those RIs of high image quality scores are expected to generate high-quality point clouds, the effectiveness of the 2D image interpolations is not translated into high-quality 3D point clouds. Figure 2 shows the point clouds reconstructed from the original and upscaled RIs. Compared to the point cloud from the original RI (Figure 1(a)), the point cloud from the upscaled RI by bilinear interpolation (Figure 1(c)) has many noisy points (red boxes). The interpolation algorithms including bilinear, bicubic, and Lanczos interpolations, leverage the near-pixel information of an interpolating pixel to put a proper value, and these noisy points are caused by the interpolating operation that blindly utilizes the near pixels; the pixels neighboring in a 2D frame can be placed far away in the 3D space or come from empty spaces.
The interpolation algorithm for RIs should be designed with awareness of the RI characteristics and how its operation in the 2D space affects the 3D point cloud. We are developing an interpolation algorithm specialized for RIs. Our algorithm consists of two phases: window exploration and interpolation. Our algorithm windows a RI, and the window exploration iterates all windows of a RI and finds interpolating places within a window. In the window exploration phase, our algorithm calculates the depth gradients and interpolating values. When the gradients are calculated, our algorithm identifies empty pixels and invalidates the gradients between object and empty pixels. Then, in the window interpolation phase, our algorithm interpolates pixels within a window based on the information from the exploration phase. The interpolation policy can be set; among the possible places in a window, the policy prioritizes the places in ascending or descending order of the depth values for the interpolation priority among near or far-away objects.
Figure 2(b) shows the RI with our interpolation algorithm. As shown in Table I, the interpolated RI with our algorithm shows lower image quality scores than the existing interpolation algorithms. However, when the point cloud is reconstructed from the RI with our algorithm in Figure 1(b), the interpolated pixels are effectively translated into 3D points and densify the 3D object shapes (yellow boxes), not as noisy points.
## III Summary and Next Steps
We motivate the need for RI interpolation techniques to compensate for the loss of perception due to compressing large point-clouds during offload from on-device sensors to perception services on a nearby edge. We present the preliminary results from an algorithm to demonstrate the opportunities from new interpolation techniques. Our ongoing work focuses on further improvements and comprehensive evaluation of the impact on end-to-end perception of this approach.
## IV Acknowledgment
This work is supported by Ericsson Research Santa Clara.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|} \\hline & DSS [21] & FSIM [22] & SSM [23] & VSI [24] & SR-SIM [25] \\\\ \\hline
**Ours** & **0.92** & **0.93** & **0.93** & **0.97** & **0.94** \\\\ \\hline Illusion & 0.99 & 0.98 & 0.98 & 0.99 & 0.98 \\\\ \\hline Lanczos & 0.95 & 0.95 & 0.96 & 0.98 & 0.97 \\\\ \\hline Bicubic & 0.95 & 0.95 & 0.96 & 0.98 & 0.97 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: The image quality results of the RIs interpolated by different algorithms to the reference RI (Max 1.0).
Fig. 1: The reference and interpolated RIs of a point cloud in the KITTI dataset.
Fig. 2: The reconstructed point clouds from upscaled RIs.
## References
* [1]M. Khader and S. Cherian (2020) An introduction to automotive lidar. Texas Instruments Incorporated. Cited by: SSI.
* [2]T. Chen, L. Ravindranath, S. Deng, P. Bahl, and H. Balakrishnan (2015) Glimpse: continuous, real-time object recognition on mobile devices. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, pp. 155-168. Cited by: SSI.
* [3]M. Chen, T. Li, H. Kim, D. E. Culler, and R. H. Katz (2018) Marvel: enabling mobile augmented reality with low energy and low latency. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pp. 292-304. Cited by: SSI.
* [4]M. Chen, T. Li, H. Kim, D. E. Culler, and R. H. Katz (2018) Marvel: enabling mobile augmented reality with low energy and low latency. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pp. 292-304. Cited by: SSI.
* [5]M. Chen, T. Li, H. Kim, D. E. Culler, and R. H. Katz (2018) Marvel: enabling mobile augmented reality with low energy and low latency. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pp. 292-304. Cited by: SSI.
[MISSING_PAGE_POST]
. Hu, E. Takeuchi, A. Carballo, and K. Takeda (2019) Real-time streaming point cloud compression for 3d lidar sensor using u-net. IEEE Access7, pp. 113616-113625. Cited by: SSI.
* [26]J. Heo, C. Phillips, and A. Gavrilovska (2022) FLiCR: a fast and lightweight lidar point cloud compression based on lossy ri. In 2022 IEEE/ACM Symposium on Edge Computing (SEC), Vol., pp. be published. Cited by: SSI.
* [27]A. Galanov, A. Schwartz, Y. Moshe, and N. Peleg (2015) Image quality assessment based on dct subband similarity. In 2015 IEEE International Conference on Image Processing (ICIP), pp. 2105-2109. Cited by: SSI.
[MISSING_PAGE_POST] | Real-time light detection and ranging (LiDAR) perceptions, _e.g._, 3D object detection and simultaneous localization and mapping are computationally intensive to mobile devices of limited resources and often offloaded on the edge. Offloading LiDAR perceptions requires compressing the raw sensor data, and lossy compression is used for efficiently reducing the data volume. Lossy compression degrades the quality of LiDAR point clouds, and the perception performance is decreased consequently. In this work, we present an interpolation algorithm improving the quality of a LiDAR point cloud to mitigate the perception performance loss due to lossy compression. The algorithm targets the range image (RI) representation of a point cloud and interpolates points at the RI based on depth gradients. Compared to existing image interpolation algorithms, our algorithm shows a better qualitative result when the point cloud is reconstructed from the interpolated RI. With the preliminary results, we also describe the next steps of the current work. | Summarize the following text. | 194 |
mdpi/0057f109_d89a_4cdb_8568_a5c2eb49cbda.md | # Research on Instrument Visibility of Ozone Wind Imaging Interferometer
Chunmin Zhang, Xiao Du, Tingyu Yan and Guixiu Li

The Institute of Space Optics, Xi'an Jiaotong University, Xi'an 710049, China; [email protected] (X.D.); [email protected] (T.Y.); [email protected] (G.L.)
## 1 Introduction
The atmosphere plays a vital role in the human living environment. Ozone in the stratosphere absorbs solar ultraviolet radiation, thereby protecting life on the earth from ultraviolet damage. In recent years, the study of the atmosphere has attracted wide attention because of a series of environmental problems caused by the ozone holes over the polar regions and by ozone depletion. Wind is a basic parameter of atmospheric dynamics. The measurement of the atmospheric wind field and ozone concentration has important scientific significance [1]; it is also of great practical value for improving the accuracy of environmental prediction and guaranteeing aerospace safety [2]. With the development of space technology, the detection of the atmospheric wind speed field and temperature field is no longer limited to ground-based remote sensing methods such as Doppler radar or Raman temperature radar [3, 4]. According to the detection mode, remote sensing methods can be divided into active detection and passive detection. Active detection transmits a signal and receives the echo reflected by particles in the atmosphere; after the data are processed, the distributions of the wind speed field, temperature field and pressure field can be obtained. Passive detection combines interference imaging spectroscopy with the electromagnetic Doppler effect: it uses atmospheric airglow as the detected source and obtains wind field information by inverting the fringe visibility and the spectral line frequency shift measured at a large optical path difference. Because passive detection is simple in principle, offers high detection accuracy and requires no emission source, it is more suitable for satellite-borne detection [5, 6, 7, 8, 9]. The wind imaging interferometer (WINDII), carried on the Upper Atmosphere Research Satellite (UARS) launched by NASA in 1991, pioneered the passive detection of the atmospheric wind field [10]. It used a moving-mirror scanning system, so the wind field information had to be inverted from four measurements taken at different times; errors caused by changes in the wind field, radiation field and temperature field during the four exposures are therefore introduced, and the technique has an inherent limitation in principle. In 1996, Gault et al. proposed a method in which one of the Michelson mirrors is divided into four parts, with each quadrant coated separately, so that four simultaneous phase-stepped images are obtained [11]. Since then, the WAMI instrument proposed by Ward et al. and Gordon G. Shepherd's review of WINDII improvements have also noted that this method avoids the errors caused by wind field changes [12, 13]. The new ozone wind imaging interferometer developed by our group likewise uses static partition coating technology to construct a four-part mirror; it is based on the static four-intensity method and uses a pyramid prism to split the light, so that four interferograms of different phases are obtained on the CCD in a single measurement. The instrument uses a single channel to observe the \(O_{2}(a^{1}\Delta_{g})\) 1.27 \(\upmu\)m airglow spectral line and obtains the temperature, wind field and ozone concentration of the earth's middle atmosphere simultaneously, with a detection range of \(25\sim 110\) km.
This design greatly simplifies the instrument structure and makes the instrument lighter and more compact, which is a considerable advantage in the engineering application of spaceborne instruments. When the ozone wind imaging interferometer detects the atmosphere, the modulation of the interferogram plays a decisive role in the inversion of the wind speed field and temperature field; this modulation is determined by the instrument visibility and the fringe visibility. The instrument visibility is affected by the system transmittance, the tilt of the compensation glass surface and the mirror surface accuracy, and it is an important criterion for evaluating the performance of the instrument. The fringe visibility is mainly related to the spectral line shape and width [14]. In this article, based on the principle of two-beam interference and aiming at the ozone wind imaging interferometer developed by our group, the interference intensity is calculated in detail under the assumption of a Gaussian spectral line, so as to obtain the modulation of the interferogram including the instrument visibility. The influences of the system transmittance, the compensation glass surface tilt and the mirror surface accuracy on the instrument visibility are then simulated and analyzed. This research provides an important theoretical basis for the development of the ozone wind imaging interferometer and practical guidance for the detection of the atmospheric wind field.
## 2 Principle
### 2.1 Principles of Atmospheric Wind Field Detection
The principle of atmospheric wind field detection is to exploit the frequency shift of the airglow spectral line caused by the Doppler effect arising from the relative motion between the airglow and the detector; the wind speed field and temperature field are retrieved by measuring this frequency shift. Ozone molecules in the atmosphere are photolyzed in the Hartley band to produce molecular oxygen in the second excited state \(O_{2}(a^{1}\Delta_{g})\) [15]. The transition from \(O_{2}(a^{1}\Delta_{g})\) to the ground state \(O_{2}(X^{3}\Sigma_{g})\) is an important source of the molecular oxygen radiation spectrum. The transition radiation is a near-infrared band composed of a series of closely spaced spectral lines with a wavelength of about 1.27 \(\upmu\)m, and the radiation process can be expressed as \(O_{2}(a^{1}\Delta_{g})\xrightarrow{A_{00}}O_{2}+h\nu\) (1.27 \(\upmu\)m). Because the airglow of this band is very bright, is distributed over the height range of \(25\sim 110\) km, and its generation is related to the ozone concentration, observing this spectral line not only achieves a high signal-to-noise ratio but also allows the ozone concentration to be inverted [16]. This radiation band has been observed by a variety of instruments, such as the spacecraft instruments of [17, 18], the near-infrared spectrometer on the Solar Mesosphere Explorer (SME) satellite [19], the Optical Spectrograph and InfraRed Imager System (OSIRIS) on the Odin satellite [20] and the atmospheric sounding broadband radiometer SABER [21], which have been used to obtain the ozone concentration of the earth's atmosphere. It follows that, through observation of the \(O_{2}(a^{1}\Delta_{g})\) 1.27 \(\upmu\)m near-infrared airglow radiation, the wind speed, temperature and ozone concentration can be obtained at the same time. The SWIFT instrument proposed by Rahnama et al. intended to use the 8.8 \(\upmu\)m emission as the observed spectrum [22], but Gordon G. Shepherd and others pointed out that this implementation would be more challenging because infrared-transmitting materials would have to be used and the interferometer would have to be cooled to reduce the thermal emission of the instrument to levels below that of the atmospheric signal [12]. In summary, the \(O_{2}(a^{1}\Delta_{g})\) 1.27 \(\upmu\)m line is very suitable for detecting the atmospheric wind field. Before the transition, \(O_{2}(a^{1}\Delta_{g})\) collides many times with the surrounding gas molecules and reaches thermal equilibrium with the surrounding atmosphere, acquiring a common temperature and bulk velocity, which are exactly the temperature and wind velocity of the atmosphere to be detected.
Suppose that the instrument visibility of the ozone wind imaging interferometer is \(U\) and that the fringe visibility is \(V\); then the interference intensity when the light reaches the CCD can be written as follows [23]:
\\[I(\\Delta)=I_{0}[1+UV\\cos(2\\pi\\sigma_{0}\\Delta)] \\tag{1}\\]
take \\(\\Delta=\\Delta_{0}+\\Delta^{\\prime}\\), where \\(\\Delta_{0}\\) is the reference optical path difference of the instrument, and it satisfies \\(\\cos(2\\pi\\sigma_{0}\\Delta_{0})=1\\), where \\(\\sigma_{0}\\) is the central wave number. If \\(\\sigma_{0}\\) corresponds to the wave number when the wind speed is zero, the Doppler effect of electromagnetic waves shows that when the relative speed between the light source and the observer is \\(\
u\\), the wave number becomes:
\\[\\sigma=\\sigma_{0}(1+\
u/c) \\tag{2}\\]
Therefore, the interference intensity can be expressed as [24]:
\\[I(\\Delta)=J_{1}+J_{2}U\\cos\\sigma_{i}-J_{3}U\\sin\\sigma_{i} \\tag{3}\\]
In this formula: \\(J_{1}\\), \\(J_{2}\\) and \\(J_{3}\\) are called apparent quantities, which are specifically expressed as:
\\[J_{1}=I_{0},J_{2}=VI_{0}\\cos\\sigma_{t},J_{3}=VI_{0}\\sin\\sigma_{t} \\tag{4}\\]
where \\(\\sigma_{t}=2\\pi\\sigma_{0}\\Delta_{0}\\frac{\
u}{c}\\) is the phase difference caused by wind speed [25], and \\(\\sigma_{i}\\) is the phase difference caused by the step optical path difference, thus:
\\[\\sigma_{t}=\\arctan\\frac{J_{3}}{J_{2}} \\tag{5}\\]
The fringe visibility can be written as:
\\[V=\\frac{\\sqrt{J_{2}^{2}+J_{3}^{2}}}{J_{1}} \\tag{6}\\]
The step optical path difference can take the following values for the four-step method:
\\[\\Delta^{\\prime}=0,\\frac{\\lambda_{0}}{4},\\frac{\\lambda_{0}}{2},\\frac{3\\lambda_ {0}}{4} \\tag{7}\\]
Four interference intensities, \(I_{1}\), \(I_{2}\), \(I_{3}\) and \(I_{4}\), can be obtained, from which it can be derived that:
\\[J_{1}=\\left(I_{1}+I_{2}+I_{3}+I_{4}\\right)/4,J_{2}=\\frac{1}{2U}(I_{4}-I_{2}), J_{3}=\\frac{1}{2U}(I_{3}-I_{1}) \\tag{8}\\]
According to the basic principles of optics, the fringe visibility of a Gaussian line is defined as [25]:
\\[V=\\exp(-QT\\Delta^{2})=\\frac{I_{\\max}-I_{\\min}}{I_{\\max}+I_{\\min}} \\tag{9}\\]Therefore, the fringe visibility \\(V\\) and the phase difference \\(\\varphi_{t}\\) can be obtained by measuring \\(J_{1}\\), \\(J_{2}\\) and \\(J_{3}\\). According to Formulas (9) and (5), the wind speed field and temperature field can be inverted, and it can also obtain the ozone concentration by inverting the spectral line radiance of 1.27 \\(\\upmu\\)m and the oxygen molecule number density from \\(J_{1}\\)[26]. It can be seen from Formula (8) that the instrument visibility \\(U\\) directly affects \\(J_{1}\\), \\(J_{2}\\) and \\(J_{3}\\). Therefore, the instrument visibility \\(U\\) plays a vital role in the inversion of wind speed field and temperature field.
### Principle of Ozone Concentration Inversion
In order to obtain the ozone concentration, firstly, the atmospheric temperature (\(T\)), the pressure (\(P\)), the \(O_{2}\) IR Atmospheric (0-0) band volume emission rate (\(E\)) (which can be calculated from the spectral line volume emission rate (\(\eta\)) and \(T\)) and the number density of \(O_{2}\) are obtained through the inversion of \(J_{1}\) and the established forward model. Secondly, \(J_{2}\), \(J_{3}\), \(T\), \(P\), \(E\) and the number density of \(O_{2}\) are used as input parameters: the wind speed along the line of sight can be inverted, and the horizontal vector wind can be calculated from the wind speeds in two orthogonal line-of-sight directions. Finally, by analyzing the production and quenching processes of \(O_{2}(a^{1}\Delta_{g})\), according to the photochemical equilibrium conditions [27]:
\\[E_{t}(\\stackrel{{\\rightarrow}}{{s}})=\\frac{A_{00}}{A_{\\Delta}+k_{ 5}^{O_{2}}[O_{2}]}\\Bigg{\\{}\\varepsilon J_{H1}[O_{3}]+\\frac{k_{3}^{O_{2}}[O_{2 }]+k_{3}^{N_{2}}[N_{2}]}{A_{\\Delta}+k_{3}^{O_{2}}[O_{2}]+k_{3}^{N_{2}}[N_{2}]} \\times(J_{4}[O_{2}]+\\frac{k_{2}^{O_{2}}[O_{2}]\\varepsilon J_{H1}[O_{3}]}{A_{D }+k_{2}^{O_{2}}[O_{2}]+k_{2}^{N_{2}}[N_{2}]})\\Bigg{\\}} \\tag{10}\\]
In the formula, \\(E_{t}(\\stackrel{{\\rightarrow}}{{s}})\\) represents the \\(O_{2}(a^{1}\\Delta_{g})\\) band volume emission rates at time \\(t\\), unit: photons cm\\({}^{-3}\\)s\\({}^{-1}\\); \\(\\stackrel{{\\rightarrow}}{{s}}\\) is the path of the observation point along the line of sight of the satellite, which is the functions of height, longitude and latitude; \\([O_{2}]\\), \\([O_{3}]\\) and \\([N_{2}]\\) represent the number densities of \\(O_{2}\\), \\(O_{3}\\) and \\(N_{2}\\), respectively, at \\(\\stackrel{{\\rightarrow}}{{s}}\\). \\([N_{2}]\\) can be obtained from the atmospheric model; other coefficients (\\(A_{00}\\), \\(A_{\\Delta}\\), \\(k_{5}^{O_{2}}\\), \\(\\varepsilon\\), \\(J_{H1}\\), \\(k_{3}^{O_{2}}\\), \\(k_{3}^{N_{2}}\\), \\(A_{\\Delta}\\), \\(J_{4}\\), \\(k_{2}^{O_{2}}\\), \\(A_{D}\\), \\(k_{2}^{N_{2}}\\)) are the reaction coefficient of the photochemical reaction process, which is known [27]. Because \\(E_{t}(\\stackrel{{\\rightarrow}}{{s}})\\) and \\([O_{2}]\\) can be retrieved from \\(J_{1}\\), according to Formula (10), the ozone concentration can be retrieved. The details about the wind speed retrieval algorithm and theory are similar to WINDII--please refer to [10; 28; 29]; \\(T\\) and \\(O_{2}\\) retrieval algorithm is similar to SWIFT--please refer to [22; 28; 30]; for ozone concentration retrieval algorithm see [27].
### Principle of Ozone Wind Imaging Interferometer
The ozone wind imaging interferometer developed by our group is a new type of interferometer for detecting the wind temperature field and speed field. As shown in Figure 1, the instrument is mainly composed of a Fabry-Perot filter (E1), a front telescope (L1), a Michelson interferometer (M1), a pyramid prism (P1), an imaging lens (L2) and a CCD detector (array detector). After light passes through the Michelson interferometer (M1), the difference in the thickness of the coating on the four zones of the mirror produces four beams of light with different phases at the same time, namely 0, \(\pi/2\), \(\pi\) and \(3\pi/2\); these four beams with different phases are then split into four separate beams after passing through the pyramid prism (P1). Therefore, four interferograms with different phases can be obtained on the CCD, which can be used to invert the wind temperature field, speed field and ozone concentration by the static four-intensity method.
## 3 Analysis and Calculation of Instrument Visibility
The main factors that affect the instrument visibility of the ozone wind imaging interferometer include instrument transmittance, compensation glass surface tilt and mirror surface accuracy. This article discusses the influence of the above three factors on the instrument visibility separately and focuses on calculating the influence of transmittance on instrument visibility.
### The Influence of Transmittance on Instrument Visibility
As shown in Figure 2, when a beam of light enters the Michelson interferometer (M1) at the incident angle \(i_{1}\), it is divided into two beams at the beam splitter B, and the transmittance of the two beams can be calculated separately. The beam passing through arm 1 travels from left to right through interfaces A, B, C, D and E; on reaching interface E it is reflected by the four-zone total reflection mirror, and the reflected light passes back through D, C and B and then exits from the H surface. The same happens for the light of arm 2. In Figure 2, \(n_{0}\), \(n_{1}\), \(n_{2}\), \(n_{2}^{\prime}\) are refractive indices, \(i_{1}\) is the incident angle, and \(i_{2}\), \(i_{3}\), \(i_{4}\), \(i_{2}^{\prime}\) are the corresponding refraction angles.
Because the transmittances of the p component and the s component of the incident light are different, the Fresnel formulas must be used to calculate them separately. According to the law of refraction:
\[n_{0}\sin i_{1}=n_{1}\sin i_{2}=n_{2}\sin i_{3}=n_{0}\sin i_{4};\quad n_{1}\sin i_{2}=n_{2}^{\prime}\sin i_{2}^{\prime} \tag{11}\]
When the incident light passes through interface A, the transmittance of its two orthogonal components can be expressed as follows:
\\[t_{pA}=\\frac{2n_{0}\\cos i_{1}}{n_{1}\\cos i_{1}+n_{0}\\cos i_{2}},t_{sA}=\\frac{2 n_{0}\\cos i_{1}}{n_{0}\\cos i_{1}+n_{1}\\cos i_{2}} \\tag{12}\\]
Figure 1: Schematic diagram of optical path of the ozone wind imaging interferometer.
Figure 2: Schematic diagram of the light propagation plane of the Michelson interferometer (M1).
The incident light passes through the B surface for the first time. Because the beam splitter is a split amplitude beam splitter, the transmittance of the p and s components of the transmitted beam 1 is set to be \\(\\alpha\\), and the reflectivity of the p and s components of the reflected beam 2 is \\(\\beta\\). Then:
\\[T_{pB}=T_{sB}=\\alpha,R_{pB}=R_{sB}=\\beta \\tag{13}\\]
Therefore, in arm 1, the transmittance of the p and s components of the transmitted beam 1 can be expressed as:
\[T_{P1}=t_{pA}^{2}T_{pB}t_{pC}^{2}t_{pD}^{2}R_{pE}t_{pC}^{2}t_{pD}^{2}R_{pB}t_{pH}^{2},\quad T_{S1}=t_{sA}^{2}T_{sB}t_{sC}^{2}t_{sD}^{2}R_{sE}t_{sC}^{2}t_{sD}^{2}R_{sB}t_{sH}^{2} \tag{14}\]
\\[T_{P1}=\\left(\\frac{1}{2}\\times\\alpha\\times R_{pE}\\times\\alpha \\right)\\times\\left(\\frac{2n_{0}\\cos i_{1}}{n_{1}\\cos i_{1}+n_{0}\\cos i_{2}} \\right)^{2}\\times\\left(\\frac{2n_{1}\\cos i_{2}}{n_{2}\\cos i_{2}+n_{1}\\cos i_{1} }\\right)^{2}\\times\\left(\\frac{2n_{2}\\cos i_{1}}{n_{0}\\cos i_{3}+n_{2}\\cos i_{ 4}}\\right)^{2} \\tag{15}\\]
\\[T_{S1}=\\left(\\frac{1}{2}\\times\\alpha\\times R_{sE}\\times\\alpha \\right)\\times\\left(\\frac{2n_{0}\\cos i_{1}}{n_{0}\\cos i_{1}+n_{1}\\cos i_{1}} \\right)^{2}\\times\\left(\\frac{2n_{1}\\cos i_{2}}{n_{1}\\cos i_{2}+n_{2}\\cos i_{ 1}}\\right)^{2}\\times\\left(\\frac{2n_{2}\\cos i_{3}}{n_{2}\\cos i_{3}+n_{0}\\cos i_ {4}}\\right)^{2} \\tag{16}\\]
Similarly, in arm 2, the transmittance of the p component and s component of the reflected beam 2 is also expressed as follows:
\\[T^{\\prime}_{P1}=t_{pA}^{2}R_{pB}t_{pF}^{2}R_{pG}t_{pF2}^{2}t_{pB}t_{pH}^{2},T _{S1}^{\\prime}=t_{sA}^{2}R_{sB}t_{sF}^{2}R_{sG}t_{sF2}^{2}T_{sB}t_{sH}^{2} \\tag{17}\\]
\\[T^{\\prime}_{P1}=\\left(\\frac{1}{2}\\times\\beta\\times R_{pG}\\times \\beta\\right)\\times\\left(\\frac{2n_{0}\\cos i_{1}}{n_{1}\\cos i_{1}+n_{0}\\cos i_{2 }}\\right)^{2}\\times\\left(\\frac{2n_{1}\\cos i_{2}}{n_{1}\\cos i_{2}+n_{2}\\cos i_{ 2}}\\right)^{2}\\times\\left(\\frac{2n_{1}^{\\prime}\\cos i_{2}^{\\prime}}{n_{2}\\cos i _{2}+n_{1}\\cos i_{2}}\\right)^{2} \\tag{18}\\]
\\[T^{\\prime}_{S1}=\\left(\\frac{1}{2}\\times\\beta\\times R_{sG}\\times \\beta\\right)\\times\\left(\\frac{2n_{0}\\cos i_{1}}{n_{1}\\cos i_{1}+n_{0}\\cos i_{ 2}}\\right)^{2}\\times\\left(\\frac{2n_{1}\\cos i_{2}}{n_{1}\\cos i_{2}+n_{2}^{ \\prime}\\cos i_{2}^{\\prime}}\\right)^{2}\\times\\left(\\frac{2n_{1}^{\\prime}\\cos i _{2}^{\\prime}}{n_{2}\\cos i_{2}+n_{1}\\cos i_{2}}\\right)^{2} \\tag{19}\\]
Figure 3 shows the optical path of light propagating in the pyramid prism (with a base angle of \(\alpha\) and a refractive index of \(n\)). When light is incident on the pyramid prism at an arbitrary angle, the incident angle and exit angle of the light at each interface must be determined in order to calculate the transmittance of the light passing through the pyramid prism.
Figure 3: The pyramid prism light path.
As shown in the figure above, construct the normal PL of the incident surface EBC. When the light is incident at the point F of the pyramid prism surface EBC at an angle \(i_{1}\) with respect to the z-axis, take the line \(HF=1\) parallel to the z-axis. From the geometric relationships, \(\angle GFH=i_{1}\) and \(\angle LFH=\alpha\), so the incident angle \(i_{5}\) of the light on the EBC surface can be obtained as:
\\[i_{5}=\\arccos\\frac{GF^{2}+LF^{2}-GL^{2}}{2GF\\cdot LF} \\tag{20}\\]
According to the law of refraction:
\\[n_{0}\\sin i_{5}=n\\sin i_{6} \\tag{21}\\]
From the geometric relationship:
\\[\\angle LMF=\\pi-i_{6}-\\angle GLF \\tag{22}\\]
From the law of cosines:
\\[\\angle GLF=\\arccos\\frac{GL^{2}+LF^{2}-GF^{2}}{2GL\\cdot LF} \\tag{23}\\]
From the law of sine:
\\[\\frac{MF}{\\sin\\angle MLF}=\\frac{LF}{\\sin\\angle LMF} \\tag{24}\\]
From this, the length of MF can be determined, and the incident angle \(i_{7}\) of the refracted light on the ABCD plane then satisfies:
\\[i_{7}=\\arccos\\frac{HF}{MF} \\tag{25}\\]
According to the law of refraction, \\(n\\sin i_{7}=n_{0}\\sin i_{8}\\), then the exit angle \\(i_{8}\\) of the refracted light on the ABCD plane can be solved. Therefore, the transmittance of the p component and s component of the transmitted light beam and the reflected light beam after passing through the pyramid prism can be expressed as:
\\[T_{P2}=T^{\\prime}_{P2}=\\left(t_{p2}\\right)^{2}=\\left(\\frac{2n_{0}\\cos i_{5}}{ n\\cos i_{5}+n_{0}\\cos i_{6}}\\right)^{2}\\times\\left(\\frac{2n\\cos i_{7}}{n_{0} \\cos i_{7}+n\\cos i_{8}}\\right)^{2} \\tag{26}\\]
\\[T_{S2}=T^{\\prime}_{S2}=\\left(t_{s2}\\right)^{2}=\\left(\\frac{2n_{0}\\cos i_{5}}{ n_{0}\\cos i_{5}+n\\cos i_{6}}\\right)^{2}\\times\\left(\\frac{2n\\cos i_{7}}{n_{0} \\cos i_{8}+n\\cos i_{7}}\\right)^{2} \\tag{27}\\]
When the beam 1 reaches the CCD, the total transmittance of its p component and s component can be written as:
\\[T_{P}=T_{P1}T_{P2},T_{S}=T_{S1}T_{S2} \\tag{28}\\]
Similarly, for the beam 2:
\\[T^{\\prime}_{P}=T^{\\prime}_{P1}T^{\\prime}_{P2},T^{\\prime}_{S}=T^{\\prime}_{S1}T ^{\\prime}_{S2} \\tag{29}\\]
For the p component, the complex amplitude of the output light with wave number \(\sigma\) from arm 1 can be expressed as:
\\[\\varepsilon_{P}(z_{1},\\sigma)=t_{p}\\varepsilon(\\sigma)e^{i(\\omega t-2\\pi \\sigma z_{1})} \\tag{30}\\]
Similarly, for the arm 2:
\\[\\varepsilon_{P}(z_{2},\\sigma)=t^{\\prime}_{p}\\varepsilon(\\sigma)e^{i(\\omega t-2 \\pi\\sigma z_{2})} \\tag{31}\\]where \\(\\varepsilon(\\sigma)\\) is the amplitude of incident light with wave number \\(\\sigma\\) and the incident light is natural light, \\(z_{1}\\) and \\(z_{2}\\) are the optical path differences of the two beams of light after passing through the interferometer system respectively. From the large field of view, achromatic, and temperature compensation conditions, it can be seen that the optical path difference between the two beams of light is stable [8] and does not change due to the incident angle, wavelength, etc. The optical path difference of the two output lights can be regarded as a fixed value, that is, when the parameters of the instrument are determined, \\(\\Delta=z_{1}-z_{2}\\) is a constant, so the amplitude of light is:
\\[\\varepsilon_{p}(z_{1},z_{2},\\sigma)=t_{p}\\varepsilon(\\sigma)\\mathrm{e}^{i( \\omega t-2\\pi\\sigma z_{1})}+t_{p}^{\\prime}\\varepsilon(\\sigma)\\mathrm{e}^{i( \\omega t-2\\pi\\sigma z_{2})}=\\varepsilon(\\sigma)\\Big{(}t_{p}e^{i(\\omega t-2\\pi \\sigma z_{1})}+t_{p}^{\\prime}e^{i(\\omega t-2\\pi\\sigma z_{2})}\\Big{)} \\tag{32}\\]
The interference intensity of P light is:
\\[I_{p}(\\sigma,\\Delta)=\\varepsilon_{P}^{*}\\varepsilon_{P}=\\varepsilon(\\sigma)^ {2}\\Big{[}t_{P}^{2}+t_{P}^{\\prime 2}+t_{P}\\Big{(}e^{i2\\pi\\sigma\\Delta}+e^{-i2\\pi \\sigma\\Delta}\\Big{)}\\Big{]}=\\varepsilon(\\sigma)^{2}\\bigg{[}T_{P}+T_{P}^{ \\prime}+2\\sqrt{T_{P}T_{P}^{\\prime}}\\cos 2\\pi\\sigma\\Delta\\bigg{]} \\tag{33}\\]
The incident light has a Gaussian line shape, and the spectral radiance satisfies:
\\[B(\\sigma)=\\frac{1}{2}ck_{0}\\varepsilon_{p}^{*}\\varepsilon_{p} \\tag{34}\\]
where \\(c\\) is the speed of light in vacuum, \\(k_{0}\\) is the vacuum dielectric constant (\\(k_{0}=8.85\\times 10^{-12}F/m\\)), and a Gaussian line of width \\(w\\) centered at wave number \\(\\sigma_{0}\\) may be represented by:
\\[B(\\sigma)=B_{0}\\exp[-4\\ln 2\\Big{(}\\sigma-\\sigma_{0})^{2}/w^{2}\\Big{]} \\tag{35}\\]
where \\(B(\\sigma)\\) is the spectral radiance of the line.
Therefore, the total interference intensity of P light in the output light may be rewritten as:
\\[\\begin{split}& I_{P}=\\int_{-\\infty}^{+\\infty}I_{P}(\\sigma,\\Delta)d \\sigma=(T_{P}+T_{P}^{\\prime})\\frac{B_{0}}{2ck_{0}}\\sqrt{w^{2}\\pi/4\\ln 2}+ \\sqrt{T_{P}T_{P}^{\\prime}}\\frac{B_{0}}{ck_{0}}\\sqrt{w^{2}\\pi/4\\ln 2}\\\\ &\\times\\exp(-\\frac{w^{2}}{4\\ln 2}\\pi^{2}\\Delta^{2})\\times\\cos 2\\pi \\sigma_{0}\\Delta\\end{split} \\tag{36}\\]
Similarly, for the S light:
\\[\\begin{split}& I_{S}=\\int_{-\\infty}^{+\\infty}I_{S}(\\sigma, \\Delta)d\\sigma=\\big{(}T_{S}+T_{S}^{\\prime}\\big{)}\\frac{B_{0}}{2ck_{0}}\\sqrt{w^ {2}\\pi/4\\ln 2}+\\sqrt{T_{S}T_{S}^{\\prime}}\\frac{B_{0}}{ck_{0}}\\sqrt{w^{2}\\pi/4 \\ln 2}\\\\ &\\times\\exp(-\\frac{w^{2}}{4\\ln 2}\\pi^{2}\\Delta^{2})\\times\\cos 2 \\pi\\sigma_{0}\\Delta\\end{split} \\tag{37}\\]
The total interference intensity of the output light becomes:
\[\begin{split}& I_{total}=I_{P}+I_{S}=\big{(}T_{P}+T_{P}^{\prime}+T_{S}+T_{S}^{\prime}\big{)}\frac{B_{0}}{2ck_{0}}\sqrt{w^{2}\pi/4\ln 2}\times\\ &\left(1+\frac{2\sqrt{T_{P}T_{P}^{\prime}}+2\sqrt{T_{S}T_{S}^{\prime}}}{T_{P}+T_{P}^{\prime}+T_{S}+T_{S}^{\prime}}\times\exp(-\frac{w^{2}}{4\ln 2}\pi^{2}\Delta^{2})\times\cos 2\pi\sigma_{0}\Delta\right)\end{split} \tag{38}\]
Therefore, the modulation of the interferogram of the ozone wind imaging interferometer can be written as:
\\[V_{G}=\\frac{2\\sqrt{T_{P}T_{P}^{\\prime}}+2\\sqrt{T_{S}T_{S}^{\\prime}}}{T_{P}+T_ {P}^{\\prime}+T_{S}+T_{S}^{\\prime}}\\times\\exp(-\\frac{w^{2}}{4\\ln 2}\\pi^{2} \\Delta^{2}) \\tag{39}\\]
The fringe visibility is:
\\[V_{L}=\\exp(-\\frac{w^{2}}{4\\ln 2}\\pi^{2}\\Delta^{2})=\\exp(-QT\\Delta^{2}) \\tag{40}\\]
therefore, the modulation of the interferogram can be rewritten as:\\[\\mathrm{V}_{G}=\\frac{2\\sqrt{T_{P}T_{P}^{\\prime}}+2\\sqrt{T_{S}T_{S}^{\\prime}}}{T_{P} +T_{P}^{\\prime}+T_{S}+T_{S}^{\\prime}}V_{L}=UV_{L} \\tag{41}\\]
where the instrument visibility becomes:
\\[U=\\frac{2\\sqrt{T_{P}T_{P}^{\\prime}}+2\\sqrt{T_{S}T_{S}^{\\prime}}}{T_{P}+T_{P}^{ \\prime}+T_{S}+T_{S}^{\\prime}} \\tag{42}\\]
Therefore, when only considering the influence of transmittance, the instrument visibility can be expressed as:
\\[U=\\frac{2\\Big{(}T_{P2}\\sqrt{T_{P1}T_{P1}^{\\prime}}+T_{S2}\\sqrt{T_{S1}T_{S1}^{ \\prime}}\\Big{)}}{T_{P2}\\big{(}T_{P1}+T_{P1}^{\\prime}\\big{)}+T_{S2}\\big{(}T_{S1} +T_{S1}^{\\prime}\\big{)}} \\tag{43}\\]
### The Influence of Compensation Glass Surface Tilt on Instrument Visibility
In the real instrument, in addition to the difference in transmittance of the two arms, the mirrors may be misaligned and the optical surfaces may not be perfectly smooth. These factors will reduce the instrument visibility, and an influence factor can be estimated for each of them; the final instrument visibility \(U\) is the product of the individual terms [25]. This section discusses the errors caused by the tilt of the compensation glass.
As shown in Figure 4, two coordinate systems \(x_{1}O_{1}y_{1}\) and \(x_{2}O_{2}y_{2}\) are established perpendicular to the optical axes \(z_{1}\) and \(z_{2}\) to describe the tilt of the compensation glass surfaces, where the rotation angles of the optical surface around the \(x\)-axis and \(y\)-axis are \(\gamma_{i}\) and \(\phi_{i}\), respectively. Because the rotation angles are small enough, the change in transmittance caused by the tilt of the glass surface can be ignored, and the change of the interference optical path difference caused by the rotation becomes the main factor affecting the instrument visibility. The incident rectangular aperture of the Michelson interferometer is \(D^{2}\). When the incident light interferes on the CCD, considering only the influence of the tilt of the glass surface, the instrument visibility \(U\) can be expressed as [26]:
\\[U=\\sin c(\\frac{BD}{\\lambda_{0}})\\sin c(\\frac{CD}{\\lambda_{0}}) \\tag{44}\\]
where:
\\[B=2\\phi_{1}(n_{2}-n_{0})-2\\phi_{2}n_{2}^{\\prime}\\text{, }C=2\\gamma_{1}(n_{2}-n_{ 0})-2\\gamma_{2}n_{2}^{\\prime} \\tag{45}\\]
Figure 4: Schematic diagram of the three-dimensional coordinate system of the Michelson interferometer (M1) system.
### The Influence of Mirror Surface Accuracy on Instrument Visibility
In the process of plane mirror manufacturing, due to the influence of polishing and coating, the plane mirror cannot attain an ideal surface shape, which reduces the instrument visibility. Assuming that the mirror polishing or coating error \(W\) obeys a Gaussian probability distribution, when the plane mirror has a surface error, the two coherent beams acquire an additional optical path difference \(2\sigma_{W}\) when reflected by the plane mirror, where \(\sigma_{W}=\overline{\left(W^{2}\right)}^{1/2}\) is the root mean square value of the mirror surface error. The change of the optical path difference causes the interference phase difference to change by:
\\[\\delta\\varphi=\\frac{4\\pi}{\\lambda_{0}}\\sigma_{W} \\tag{46}\\]
By calculating the intensity distribution of the double-beam interference, the relationship between the Gaussian-distributed mirror surface error and the instrument visibility can be obtained, expressed as Equation (47); the derivation can be found in [31]:
\\[U=\\exp[-8\\pi^{2}(\\frac{\\sigma_{W}}{\\lambda_{0}})^{2}] \\tag{47}\\]
### Wind Speed and Temperature Measurement Accuracy
The random error of the wide-angle Michelson interferometer for wind speed and temperature measurement will be affected by many factors, as shown in Equations (48) and (49) [29,32].
The standard deviation of wind speed random error \\(\\sigma_{v}\\) can be expressed as:
\\[\\sigma_{v}=\\frac{c\\lambda_{0}}{2\\sqrt{2}\\pi USV\\Delta} \\tag{48}\\]
where \\(c\\) is the speed of light, \\(\\lambda_{0}\\) is the reference wavelength, \\(U\\) is the instrument visibility, \\(S\\) is the signal-to-noise ratio, \\(\\Delta\\) is the optical path difference, and \\(V\\) is the fringe visibility which can be obtained from Equation (9).
The standard deviation of temperature random error \\(\\sigma_{T}\\) can be expressed as:
\\[\\sigma_{T}=\\frac{\\sqrt{2+U^{2}V^{2}}}{2USQV\\Delta^{2}} \\tag{49}\\]
where \\(Q\\) is the molecular constant, for airglow radiation of \\(O_{2}(a^{1}\\Delta_{g})1.27~{}\\upmu\\)m, \\(Q\\)= \\(3.526\\times 10^{-6}\\) cm\\({}^{-2}\\)K\\({}^{-1}\\).
## 4 Computer Simulation and Analysis
### Influence of Beam Splitting Ratio on Instrument Visibility
The light emitted from the Michelson interferometer is divided into four beams by the light-splitting effect of the pyramid prism. When the four beams reach the CCD, the p component and s component of each beam interfere respectively; four interferograms are then formed on the CCD, so the instrument visibility distributions of the four zones are different. This section simulates the change of instrument visibility when the beam splitter takes different transmittances and reflectances.
As shown in Figure 5, the influence of the splitting ratio of the beam splitter on the instrument visibility is simulated for an incident angle of \(i=1.5^{\circ}\). The analysis shows that when the splitting ratio of the beam splitter reaches 1:1, the instrument visibility reaches its maximum value. In engineering, \(U\geq 0.9\) is generally required; for example, the visibility of the WINDII instrument is 0.9 [10]. In this case, the allowed range of the splitting ratio of the beam splitter is \(0.66<\alpha/\beta<1.65\).
Figure 6 shows the distribution of instrument visibility on the CCD when the beam splitting ratio of the beam splitter takes different values. The analysis shows that, for a given splitting ratio, the distribution of the instrument visibility of the four zones has a certain symmetry, which is caused by the symmetry of the light splitting of the pyramid prism. As shown in Figure 3, when a beam of light emitted from the Michelson interferometer enters the pyramid prism at the incident angle \(i_{1}\), the value of the incident angle is different for each side of the pyramid prism, so the refraction angle at each interface is different. When the light exits from the bottom surface of the pyramid prism, it is divided into four beams with different transmittance distributions; therefore, the interference intensities of the four beams are different, and each beam corresponds to a certain point on the CCD. Thus, the parallel light at different angles interferes on the CCD to form four interferograms within the field of view. For each partition, the field angle at the center of the CCD is 0\({}^{\circ}\), and the maximum angle of view at the diagonal is 2.93\({}^{\circ}\). Comparing Figure 6a,b, we find that different beam splitting ratios cause the instrument visibility to change with opposite trends. This is because different beam splitting ratios affect the transmittance of the light emitted from arm 1 and arm 2, respectively, thereby affecting the distribution of instrument visibility. However, regardless of the value of the beam splitting ratio, the change of the field of view has little effect on the instrument visibility; this also shows that the splitting ratio of the beam splitter and the pyramid prism play an important role in the instrument visibility.
After the light passes through the pyramid prism, it is divided into four beams. The transmittance distribution of one beam is simulated, and the result is shown in Figure 8. It can be found that the transmittance of the p light in the transmitted beam is lower than that of the s light, while for the reflected beam the opposite holds. This is caused by the difference in the angular distributions of the transmitted light and the reflected light.
Figure 5: The visibility of the instrument when the incident angle is 1.5\\({}^{\\circ}\\).
Figure 8: Transmittance of different light in one of the four zones. (**a**) \\(T_{P1}\\); (**b**) \\(T_{S1}\\); (**c**) \\(T_{P2}\\); (**d**) \\(T_{S2}\\).
Figure 6: The instrument visibility when the beam splitter takes different transmittances. (**a**) The beam splitter takes \(\alpha:\beta=11:9\); (**b**) The beam splitter takes \(\alpha:\beta=9:11\).

In the actual measurement, although the splitting ratio of the beam splitter affects the instrument visibility, there is also a certain difference between the transmittances of the p light and s light in the beam. Figure 7 shows the transmittance of the p light and s light of the beam splitter in the ozone wind imaging interferometer developed by our research group, where T represents the transmitted light, which passes through arm 1, and R represents the reflected light, which passes through arm 2.
Figure 7: Transmittance of beam splitter to different light.
The distribution of instrument visibility on the CCD was simulated under this condition, and the result is shown in Figure 9. From Figure 8 we know that \(\alpha/\beta<1\). Compared with the result of Figure 6b, when \(\alpha/\beta<1\), the instrument visibility at the center is the lowest, which also verifies the influence of the beam splitting ratio on the instrument visibility distribution. Analyzing Figure 9, it can be found that the instrument visibility gradually decreases from the inside to the outside in concentric circles, while in Figure 6b the distribution of the instrument visibility at the edge of the CCD is not concentric rings. This is because the beam splitting ratio of the instrument is closer to the ideal value, so the distribution of the instrument visibility on the CCD is more uniform.
### The Influence of Compensation Glass Surface Tilt on Instrument Visibility
Because the tilt of the compensation glass surface also affects the instrument visibility, this section simulates its influence. Because the tilt angle is small, the angle unit is arcsec (\(1\,\text{arcsec}=1/3600^{\circ}\)). It can be seen from Equation (44) that \(\gamma_{i}\) and \(\phi_{i}\) have the same influence on the instrument visibility, so \(\gamma_{1}=\gamma_{2}=0\) is taken; that is, only the influence of the tilt of the glass surface around the y-axis on the instrument visibility is considered. The result is shown in Figure 10. It can be found from the analysis that the influence of \(\phi_{2}\) is greater than that of \(\phi_{1}\), which is caused by the difference in refractive index of the two arms. When \(\phi_{1}\) and \(\phi_{2}\) are limited to \(-0.35\,\text{arcsec}\sim 0.35\,\text{arcsec}\), the instrument visibility \(U>0.9\).
Figure 10: The influence of \\(\\phi_{i}\\) on instrument visibility.
Figure 9: The distribution of instrument visibility on CCD.
Figure 11a shows the change of the instrument visibility when \(\phi_{1}=\gamma_{1}=0.1\); that is, when the compensation glass surface of arm 1 has a fixed inclination angle, the influence of the tilt of the compensation glass surface of arm 2 on the instrument visibility is considered. It can be seen that when the compensation glass of arm 1 is tilted, the tilt angle can be compensated by adjusting arm 2 to maximize the instrument visibility. Figure 11b shows the change of the instrument visibility when the two compensation glasses have tilt angles around the x-axis and y-axis, with \(\phi_{2}=\gamma_{1}=0.1\). It can be seen from the figure that for the instrument visibility to reach its maximum, \(\phi_{1}\) and \(\gamma_{2}\) must both take certain deflection angles, rather than \(\phi_{1}=\gamma_{2}=0\) in this case. In summary, the tilt angle has a great influence on the instrument visibility. Therefore, in practical applications, the tilt angle of the compensation glass surface should be controlled as much as possible to increase the instrument visibility.
### The Influence of Mirror Surface Accuracy on Instrument Visibility
Figure 12a shows how the phase difference and the instrument visibility change with the mirror surface error: the interference phase difference \(\delta_{\varphi}\) increases linearly with the root mean square value of the mirror surface error \(\sigma_{W}\). Analyzing Figure 12b, when the instrument visibility \(U>0.9\), \(\sigma_{W}\) should satisfy \(\sigma_{W}<45.87\) nm. In summary, \(\sigma_{W}\) should be reduced as much as possible to ensure that the instrument has high visibility in practical applications.
Figure 11: The influence of different tilt angles on instrument visibility. (**a**) \\(\\phi_{1}=\\gamma_{1}=0.1\\); (**b**) \\(\\phi_{2}=\\gamma_{1}=0.1\\).
Figure 12: (**a**) The variation of phase difference with surface error; (**b**) The variation of instrument visibility with surface error.
### Wind Speed and Temperature Measurement Accuracy
In the altitude range of \(25\sim 110\) km, the atmospheric temperature varies from \(170\) K to \(310\) K. Therefore, this section simulates the influence of the instrument visibility on the standard deviations of the wind speed and temperature random errors. Figure 13 shows the relationship between the wind speed and temperature random deviations and the instrument visibility. The analysis of this figure shows that when the instrument visibility \(U>0.9\), within the detection range, the random deviation of wind speed is within \(1.1\) m/s, and the random deviation of temperature is within \(5.7\) K.
## 5 Conclusions
This paper discussed the principles and system optical path of the ozone wind imaging interferometer used in the detection of mid-low atmospheric wind speed, temperature and ozone concentration, and it focused on the analysis of instrument visibility. We conclude as follows:
1. When the splitting ratio of the beam splitter reaches 1:1, the instrument visibility reaches its maximum value. When the incident angle is \(i=1.5^{\circ}\) and \(U>0.9\), the splitting ratio should satisfy \(0.66<\alpha/\beta<1.65\). The splitting ratio of the beam splitter and the pyramid prism play an important role in the distribution of instrument visibility.
2. The tilt angles \(\gamma_{i}\) and \(\phi_{i}\) of the compensation glass surface have the same impact on the instrument visibility; between \(\phi_{1}\) and \(\phi_{2}\), the influence of \(\phi_{2}\) is greater than that of \(\phi_{1}\). When \(\phi_{1}\) and \(\phi_{2}\) are limited to \(-0.35\,\mathrm{arcsec}\sim 0.35\,\mathrm{arcsec}\), the instrument visibility \(U>0.9\). In practical applications, the tilt angle of the compensation glass surface should be controlled as much as possible to increase the instrument visibility.
3. When considering the mirror surface error, \(\sigma_{W}\) should satisfy \(\sigma_{W}<45.87\) nm if the instrument visibility \(U>0.9\).
4. When the instrument visibility \(U>0.9\), within the detection range, the random deviation of wind speed is within \(1.1\) m/s, and the random deviation of temperature is within \(5.7\) K.
In the actual measurement, the instrument visibility may also be affected by the uneven refractive index of the glass and the slight tilt of other components. In addition, the accuracy of wind speed and temperature will be reduced by various factors such as latitude and altitude. Nevertheless, these calculations still provide good guidance and reference, and they offer an important theoretical basis and experimental guidance for the development of ozone wind imaging interferometers.
Figure 13: The relationship between the wind speed and temperature random deviation and the instrument visibility. (**a**) Wind speed random deviation; (**b**) Temperature random deviation.
**Author Contributions:** Conceptualization, C.Z., X.D. and G.L.; methodology, C.Z., X.D. and T.Y.; software, X.D. and T.Y.; validation, C.Z., X.D. and T.Y.; writing--original draft preparation, X.D.; writing--review and editing, C.Z., X.D., T.Y. and G.L. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the Major International (Regional) Joint Research Project of National Natural Science Foundation of China (Grant No. 42020104008), the Key Program of National Natural Science Foundation of China (Grant No. 41530422), and the National High Technology Research and Development Program of China (863 Program) (Grant No. 2012AA121101).
Not applicable.
Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** Data sharing not applicable.
**Conflicts of Interest:** The authors declare no conflict of interest.
## References
* Zhu et al. (2001) Zhu, X.; Yee, J.-H.; Talaat, E.R. Diagnosis of Dynamics and Energy Balance in the Mesosphere and Lower Thermosphere. _J. Atmos. Sci._**2001**, _58_, 2441-2454. [CrossRef]
* He et al. (2019) He, W.-W.; Wu, K.-J.; Wang, S.-N. Observation technology of wind and temperature by an onboard imaging interferometer with 1.27 \(\upmu\)m airglow. _Opt. Optoelectron. Technol._**2019**, _17_, 72-74.
* Shangguan et al. (2016) Shangguan, M.; Xia, H.; Wang, C.; Qiu, J.; Shentu, G.; Zhang, Q.; Dou, X.; Pan, J.W. All-fiber up conversion high spectral resolution wind lidar using a Fabry-Perot interferometer. _Opt. Express_**2016**, _24_, 19322-19336. [CrossRef]
* Liu and Yi (2014) Liu, F.; Yi, F. Lidar-measured atmospheric N_2 vibrational-rotational Raman spectra and consequent temperature retrieval. _Opt. Express_**2014**, _22_, 27833-27844. [CrossRef]
* Xiao-Hua et al. (2007) Xiao-Hua, J.; Chun-Min, Z.; Bao-Chang, Z. A new method for spectrum reproduction and interferogram processing. _Acta Phys. Sin._**2007**, _56_, 824-829.
* Zhi-Lin et al. (2007) Zhi-Lin, Y.; Chun-Min, Z.; Bao-Chang, Z. Study of SNR of a novel polarization interference imaging spectrometer. _Acta Phys. Sin._**2007**, _56_, 6413-6419.
* Zhi-Hong et al. (2006) Zhi-Hong, P.; Chun-Min, Z.; Bao-Chang, Z.; Ying-Cai, L.; Fu-Quan, W. The transmittance of Savart polariscope in polarization interference imaging spectrometer. _Acta Phys. Sin._**2006**, _55_, 6374-6381.
* Shepherd et al. (1985) Shepherd, G.G.; Gault, W.A.; Miller, D.W.; Pasturczyk, Z.; Johnston, S.F.; Kosteniuk, P.R.; Haslett, J.W.; Kendall, D.J.W.; Wimperis, J.R. WAMDII: Wide-angle Michelson Doppler imaging interferometer for Spacelab. _Appl. Opt._**1985**, _24_, 1571-1584. [CrossRef] [PubMed]
* Chunmin (2000) Chunmin, Z. Interference image spectroscopy for upper atmospheric wind field measurement. _Acta Opt. Sin._**2000**, _20_, 234-239.
* Shepherd et al. (1993) Shepherd, G.G.; Thuillier, G.; Gault, W.A.; Solheim, B.H.; Hersom, C.; Alunni, J.M.; Brun, J.-F.; Brune, S.; Charlot, P.; Cogger, L.L.; et al. WINDII, the wind imaging interferometer on the Upper Atmosphere Research Satellite. _J. Geophys. Res. Space Phys._**1993**, _98_, 10725-10750. [CrossRef]
* Gault et al. (1996) Gault, W.A.; Sargoytchev, S.I.; Shepherd, G.G. Divided-Mirror Scanning Technique for a Small Michelson Interferometer. In _Optical Spectroscopic Techniques and Instrumentation for Atmospheric and Space Research II_; International Society for Optics and Photonics: Bellingham, WA, USA, 1996; Volume 2830, pp. 15-18.
* Shepherd et al. (2012) Shepherd, G.G.; Thuillier, G.; Cho, Y.M.; Dubein, M.L.; Evans, W.F.; Gault, W.A.; Hersom, C.; Kendall, D.J.W.; Lathuillere, C.; Lowe, R.P.; et al. The Wind Imaging Interferometer (WINDII) on the Upper Atmosphere Research Satellite: A 20 year perspective. _Rev. Geophys._**2012**, _50_, RG2007. [CrossRef]
* Ward et al. (2001) Ward, W.E.; Gault, W.A.; Shepherd, G.G.; Rowlands, N. The Waves Michelson Interferometer: A visible/near-IR interferometer for observing middle atmosphere dynamics and constituents. _Proc. SPIE_**2001**, _4540_. [CrossRef]
* He et al. (2007) He, J.; Zhang, C.M.; Zhang, Q.G. Research on theory and application of the interferogram of Voigt profile. _Acta Phys. Sin._**2007**, _27_, 423-426. [CrossRef]
* Crutzen and Jones (1971) Crutzen, P.J.; Jones, I.T.N.; Wayne, R.P. Calculation of [\\(O_{2}(a^{1}\\Delta_{g})\\)] in the atmosphere using new laboratory data. _J. Geophys. Res._**1971**, _76_, 1490-1497. [CrossRef]
* Wu et al. (2018) Wu, K.; Fu, D.; Feng, Y.; Li, J.; Hao, X.; Li, F. Simulation and application of the emission line O19P18 of \(O_{2}(a^{1}\Delta_{g})\) dayglow near 1.27 \(\upmu\)m for wind observations from limb-viewing satellites. _Opt. Express_**2018**, _26_, 16984-16999. [CrossRef]
* Evans et al. (1968) Evans, W.F.J.; Hunten, D.M.; Llewellyn, E.J.; Jones, A.V. Altitude profile of the infrared atmospheric system of oxygen in the dayglow. _J. Geophys. Res._**1968**, _73_, 2885-2896. [CrossRef]
* Mlynczak et al. (2001) Mlynczak, M.G.; Morgan, F.; Yee, J.H.; Espy, P.; Murtagh, D.; Marshall, B.; Schmidlin, F. Simultaneous measurements of the \\(O_{2}(^{1}\\Delta)\\) and \\(O_{2}(^{1}\\Sigma)\\) Airglows and ozone in the daytime mesosphere. _Geophys. Res. Lett._**2001**, _28_, 999-1002. [CrossRef]
* Thomas et al. (1984) Thomas, R.J.; Barth, C.A.; Rusch, D.W.; Sanders, R.W. Solar Mesosphere Explorer Near-Infrared Spectrometer: Measurements of 1.27 \(\upmu\)m radiances and the inference of mesospheric ozone. _J. Geophys. Res. Atmos._**1984**, _89_, 9569-9580. [CrossRef]
* Llewellyn et al. (2004) Llewellyn, E.J.; Lloyd, N.D.; Degenstein, D.A.; Gattinger, R.L.; Petelina, S.V.; Bourassa, A.E.; Wiensz, J.T.; Ivanov, E.V.; McDade, I.C.; Solheim, B.H.; et al. The OSIRIS instrument on the Odin spacecraft. _Can. J. Phys._**2004**, _82_, 411-422. [CrossRef]
* _Mlynczak et al. (2007)_ _Mlynczak, M.G.; Marshall, B.T.; Martin-Torres, F.J.; Russell, J.M., III; Thompson, R.E.; Remsberg, E.E.; Gordley, L.L. Sounding of the Atmosphere using Broad band Emission Radiometry observations of daytime mesospheric O\\({}_{2}\\)(\\({}^{1}\\Delta\\)) 1.27 \\(\\mu\\)m emission and derivation of ozone, atomic oxygen, and solar and chemical energy deposition rates. J. Geophys. Res. Atmos._ **2007**, _112_, D15306. [CrossRef]
* _Rahnama et al. (2006)_ _Rahnama, P.; Rochon, Y.J.; McDade, I.C.; Shepherd, G.G.; Gault, W.A.; Scott, A. Satellite measurement of stratospheric winds and ozone using Doppler Michelson interferometry. Part I: Instrument model and measurement simulation. J. Atmos. Ocean. Technol._ **2006**, _23_, 753-769. [CrossRef]
* _Lin et al. (2010)_ _Lin, Z.; Chun-Min, Z.; Xiao-Hua, J. Passive detection of upper atmospheric wind field based on the Lorentzian line shape profile. Acta Phys. Sin._ **2010**, _59_, 899-906.
* _Rahnama (2003)_ _Rahnama, P. Simulation and Analysis Studies of SWIFT Measurements. Master's Thesis, York University, Toronto, ON, Canada, 2003._
* _Shepherd (2002)_ _Shepherd, G.G. Spectral Imaging of the Atmosphere; Academic Press: Salt Lake City, UT, USA, 2002; p. 82._
* _Piao (2020)_ _Piao, R. Middle Atmospheric Temperature Variation and Near-Infrared Static Wind Imaging Interferometry. Ph.D. Thesis, Xi'an Jiaotong University, Shaanxi, China, 2020._
* _Evans et al. (1988)_ _Evans, W.F.J.; McDade, I.C.; Yuen, J.; Llewellyn, E.J. A rocket measurement of the_ \\(O_{2}\\) _Infrared Atmospheric (0-0) band emission in the dayglow and a determination of the mesospheric ozone and atomic oxygen densities. Can. J. Phys._ **1988**, _66_, 941-946. [CrossRef]
* _Shepherd et al. (2001)_ _Shepherd, G.G.; McDade, I.C.; Gault, W.A.; Rochon, Y.J.; Scott, A.; Rowlands, N.; Buttner, G. The Stratospheric Wind Interferometer for Transport studies (SWIFT). In Proceedings of the 2001 IEEE International Geoscience & Remote Sensing Symposium, Sydney, Australia, 9-13 July 2001._
* _Rochon (2000)_ _Rochon, Y.J. The retrieval of winds, Doppler temperatures, and emission rates for the WINDII experiment. Ph.D. Thesis, York University, Toronto, ON, Canada, 2000._
* _Rahnama et al. (2006)_ _Rahnama, P.; Rochon, Y.J.; McDade, I.C.; Shepherd, G.G.; Gault, W.A.; Scott, A. Satellite Measurement of Stratospheric Winds and Ozone Using Doppler Michelson Interferometry. Part II: Retrieval Method and Expected Performance. J. Atmos. Ocean. Technol._ **2006**, _23_, 770-784. [CrossRef]
* _Katti and Singh (1966)_ _Katti, P.K.; Singh, K. A Note on the Surface Accuracy and Alignment of the End Mirrors in a Michelson Interferometer. Appl. Opt._ **1966**, \\(5\\), 1962-1964. [CrossRef] [PubMed]
* _Ward (1988)_ _Ward, W.E. The Design and Implementation of the Wide-Angle Michelson Interferometer to Observe Thermospheric Winds. Ph.D. Thesis, York University, Toronto, ON, Canada, 1988._
# Deep learning for low frequency extrapolation of multicomponent data in elastic full waveform inversion
Hongyu Sun
Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139,
[email protected], [email protected]
Laurent Demanet
Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139,
[email protected], [email protected]
December, 2020
## 1 Acknowledgments
The authors thank Total SA for support. Laurent Demanet is also supported by AFOSR grant FA9550-17-1-0316. Hongyu Sun acknowledges SEG scholarships (SEG 75th Anniversary Scholarship and Michael Bahorich/SEG Scholarship) for funding. Tensorflow (Abadi et al., 2015) and Keras (Chollet et al., 2015) are used for deep learning. Elastic FWI in this note is implemented using the open source code DENISE ([https://github.com/daniel-koehn/DENISE-Black-Edition](https://github.com/daniel-koehn/DENISE-Black-Edition)). Acoustic training datasets are simulated using Pysit (Hewett & Demanet, 2013).
## 2 Introduction
Full waveform inversion is well-known for its great potential to provide quantitative Earth properties of complex subsurface structures. Acoustic FWI is widely used and has been successfully applied to real seismic data. However, most seismic data have strong elastic effects (Marjanovic et al., 2018). The acoustic approximation is insufficient to estimate correct reflections and introduces additional artifacts to FWI results (Plessix et al., 2013; Stopin et al., 2014). Therefore, it is desirable to develop a robust elastic FWI method for high-resolution Earth model building.
Foundational work has shown the ability of elastic FWI to retrieve realistic properties of the subsurface (Tarantola, 1986; Mora, 1987). However, it has difficulty handling real data sets. Elastic FWI is very sensitive to: accuracy of the starting model; correct estimation of density; proper definition of multi-parameter classes; and noise level (Brossier et al., 2010). The complex wave phenomena in elastic wavefields bring new challenges to FWI.
Among the many factors that affect the success of elastic FWI, the lowest starting frequency is an essential one, given that an accurate starting model is generally unavailable. Compared to acoustic FWI, the nonlinearity of elastic FWI is more severe due to the short propagation wavelengths of S-waves. Therefore, elastic FWI always requires a lower starting frequency than acoustic FWI. Additionally, the parameter cross-talk problem exists in elastic FWI and becomes more pronounced at higher frequencies, so ultra-low frequencies are required for a successful inversion of S-wave velocity and density.
In synthetic studies of elastic FWI, Brossier et al. (2009) invert the overthrust model (Aminzadeh et al., 1997) from 1.7Hz. Brossier et al. (2010) invert the Valhall model (Sirgue et al., 2009) from 2Hz. Both inversion workflows start from Gaussian smoothing of true models. Moreover, Choi et al. (2008) invert the Marmousi2 model (Martin et al., 2006) using a velocity-gradient starting model but a very low frequency (0.16Hz). For a successful inversion of the Marmousi2 density model, Kohn et al. (2012) use 0-2Hz in the first stage of multi-scale FWI. Jeong et al. (2012) invert the same model from 0.2Hz.
Few applications of elastic FWI to real data sets are reported in the literature (Crase et al., 1990; Sears et al., 2010; Marjanovic et al., 2018). Vigh et al. (2014) use 3.5Hz as the starting frequency of elastic FWI given that the initial models are accurate enough. Raknes et al. (2015) apply 3D elastic FWI to update P-wave velocity and obtain S-wave velocity and density using empirical relationships. Borisov et al. (2020) perform elastic FWI involving surface waves in the band of 5-15Hz for a land data set.
New developments in acquisition enhance the recent success of FWI by measuring data with lower frequencies and longer offsets (Mahrooqi et al., 2012; Brenders et al., 2018). However, only acoustic FWI was applied to the land data set with low frequencies down to 1.5 Hz (Plessix et al., 2012). In addition to the expensive acquisition cost for the low-frequency signals, direct use of the field low-frequency data requires dedicated pre-processing steps, including travel-time tomography, for an accurate enough model to initialize FWI. The final inversion results strongly rely on the starting tomography model. Hence, attempting to retrieve reliable low-frequency data offers a sensible pathway to relieve the dependency of elastic FWI on starting models.
Deep learning is an emerging technology in many aspects of exploration geophysics. In seismic inversion, several groups have experimented with directly mapping data to model using deep learning (Araya-Polo et al., 2018; Yang and Ma, 2019; Wu and Lin, 2019; Zhang and Lin, 2020; Kazei et al., 2020). Within Bayesian seismic inversion framework, deep learning has been applied for formulating priors (Herrmann et al., 2019; Mosser et al., 2020; Fang et al., 2020). Other groups use deep learning as a signal processing step to acquire reasonable data for inversion. For instance, Li et al. (2019) use deep learning to remove elastic artifacts for acoustic FWI. Siahkoohi et al. (2019) remove the numerical dispersion of wavefields by transfer learning.
Computationally extrapolating the missing low frequencies from band-limited data is the cheapest way for FWI to mitigate the cycle-skipping problem. Li and Demanet (2015, 2016) separate the shot gather into atomic events and then change the wavelet to extrapolate the low frequencies. Li and Demanet (2017) extend the frequency spectrum based on the redundancy of extended forward modeling. Recently, Sun and Demanet (2018); Ovcharenko et al. (2018); Jin et al. (2018) have utilized CNN to extrapolate the missing low frequencies from band-limited data. They have proposed different CNN architectures to learn the mapping between high and low frequency data from different features in the training datasets. However, only acoustic data are considered in these studies.
Although the mechanism of deep learning is hard to explain, the feasibility of low frequency extrapolation has been discussed in terms of sparsity inversion (Hu et al., 2019) and wavenumber illumination (Ovcharenko et al., 2019). With multiple-trace extrapolation, the low wavenumbers of far-offset data have been proposed as the features in the frequency domain detected by CNN to extrapolate the missing low frequencies (Ovcharenko et al., 2019). In contrast, for trace-by-trace extrapolation (Sun and Demanet, 2020), the features to learn are the structured time series themselves. The feasibility of trace-by-trace frequency extrapolation has been mathematically proved in simple settings in Demanet and Nguyen (2015); Demanet and Townsend (2019), as a by-product of super-resolution.
In this paper, we extend our workflow of extrapolated FWI with deep learning (Sun and Demanet, 2020) into the elastic regime. We separately train the same neural network on two different training datasets, one to predict the low-frequency data of the horizontal components (\(v_{x}\)) and one to predict the low frequencies of the vertical components (\(v_{y}\)). The extrapolated low frequency data are used to initialize elastic FWI from a crude starting model. For the architecture design of the CNN, a large receptive field is achieved by either large convolutional kernels or dilated convolution. Moreover, to investigate the generalization ability of neural networks over different physical models, we compare the extrapolation results of the neural networks trained on elastic data and acoustic data to predict the elastic low-frequency data. We also investigate several hyperparameters of deep learning to understand its bottleneck for low frequency extrapolation, such as mini-batch size, learning rate and the probability of dropout.
The organization of this article is as follows. In Section 3, we first briefly review elastic FWI and its implementation in this paper. Then, we present the architecture of neural networks and training datasets for low frequency extrapolation of elastic data. Section 4 describes the numerical results of low frequency extrapolation, extrapolated elastic FWI and the investigation of hyperparameters. Section 5 discusses the limitations of the method. Section 6 presents conclusions and future directions.
## 3 Method
We first give a brief review of elastic FWI as implemented in this paper. Then we illustrate the feasibility of low frequency extrapolation, and design two deep learning models for this purpose. Afterwards, the training and test datasets are provided to train and verify the performance of the proposed neural networks.
### Review of elastic FWI
Elastic FWI is implemented in the time domain to invert the P-wave velocities (\(\mathbf{v}_{p}\)), S-wave velocities (\(\mathbf{v}_{s}\)) and density (\(\rho\)) simultaneously. The objective function \(E\) is formulated as
\\[E=\\frac{1}{2}\\delta\\mathbf{d}^{T}\\delta\\mathbf{d}=\\frac{1}{2}\\sum_{s}\\sum_{r} \\int[\\mathbf{u}_{cal}-\\mathbf{u}_{obs}]^{2}dt, \\tag{1}\\]
where \\(\\mathbf{d}\\) are the residuals between observed wavefields \\(\\mathbf{u}_{obs}\\) and calculated wavefields \\(\\mathbf{u}_{cal}\\). In 2D, both \\(\\mathbf{u}_{obs}\\) and \\(\\mathbf{u}_{cal}\\) contain the \\(v_{x}\\) and \\(v_{y}\\) components of elastic wavefields. The gradient \\(\\frac{\\delta E}{\\delta\\mathbf{m}}\\) relative to the model parameters \\(\\mathbf{m}\\) is calculated in terms of \\(\\mathbf{v}_{p}\\), \\(\\mathbf{v}_{s}\\) and \\(\\rho\\) using the velocity-stress formulation of the elastic wave equation (Kohn et al., 2012). The starting models \\(\\mathbf{m_{0}}\\) are updated using the L-BFGS method (Nocedal & Wright, 2006).
### Deep learning models for low-frequency extrapolation
We choose a CNN to perform the task of low-frequency extrapolation. In trace-by-trace extrapolation, the output and input are the same seismic recording in the low and high frequency band, respectively. In 2D, the elastic data contain horizontal and vertical components. As a result, we propose to separately train the same neural network twice on two different training datasets: one contains \(v_{x}\) and the other contains \(v_{y}\).
We design two kinds of CNN architectures with a large receptive field for low-frequency extrapolation. A very large receptive field enables each feature in the final output to include a large range of input pixels. Since any single frequency component is related to the entire waveform in the time domain, extrapolation from one frequency band to the other requires a large receptive field to cover the entire input signal. Typically, the receptive field is increased by stacking layers. For example, stacking two convolutional layers (without pooling) with \(3\times 3\) filters results in an effective \(5\times 5\) filter. However, this requires many layers to reach a large enough receptive field and is computationally inefficient. Therefore, we design the CNN architecture with two methods: directly using a large filter, or using a small filter with dilated convolution.
The first CNN architecture (ARCH1) directly employs a large filter on convolutional layers, which is the same as Sun & Demanet (2020) (Figure 1). Recall that ARCH1 is a feed-forward stack of five convolutional blocks. Each block is a combination of a 1D convolutional layer, a PReLU layer and a batch normalization layer. The length of all filters is 200. On each convolutional layer, the channels number 64, 128, 64, 128 and 32, respectively. Although only one trace is plotted in Figure 1, the proposed neural network can easily explore multiple traces of the shot gather for multi-trace extrapolation by increasing the size of the kernels from \\(200\\times 1\\) to \\(200\\times ntr\\), where \\(ntr\\) is the number of the input traces.
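A minimal Keras sketch of ARCH1 follows (TensorFlow and Keras are the tools acknowledged above). The five convolutional blocks are as described in the text; the final one-channel convolution that maps the 32 feature channels back to a single output trace is our own assumption, since the output head is not spelled out here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_arch1(nt, ntr=1):
    """ARCH1: five blocks of Conv1D (length-200 filters) + PReLU + BatchNorm."""
    inp = layers.Input(shape=(nt, ntr))  # one band-limited trace (or ntr traces)
    x = inp
    for channels in [64, 128, 64, 128, 32]:
        x = layers.Conv1D(channels, kernel_size=200, padding="same")(x)
        x = layers.PReLU(shared_axes=[1])(x)
        x = layers.BatchNormalization()(x)
    # Assumed output head: collapse to a single low-frequency trace.
    out = layers.Conv1D(1, kernel_size=1, padding="same")(x)
    return tf.keras.Model(inp, out)

model = build_arch1(nt=300)  # 6 s of recording at 0.02 s sampling
model.compile(optimizer="adam", loss="mse")
```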
The second CNN architecture (ARCH2) uses dilated convolution to increase the receptive field by orders of magnitude. A dilated convolution (convolution with holes) is a convolution where the filter is applied over an area larger than its length by skipping input values with a certain step (dilation). It effectively allows the network to operate on a coarser scale than with a normal convolution. This is similar to pooling or stride, but here the output has the same size as the input. As a special case, a dilated convolution with a dilation of one yields the standard convolution. Stacked dilated convolutions enable networks to have very large receptive fields with just a few layers. In addition to saving computational cost, this method helps to preserve the input resolution throughout the network (Oord et al., 2016). Moreover, we use causal convolution to process time series (Moseley et al., 2018), although this choice does not appear to be essential in our case.
The architecture of ARCH2 (Figure 2) has two dilated convolutional blocks. Each block consists of 11 1D convolutional layers. Each convolutional layer is followed by a PReLU layer and a batch normalization layer. On each convolutional layer, there are 64 causal convolutional filters with a length of 2 (Figure 2b). The dilations of the eleven convolutional layers are \(2^{0}\), \(2^{1}\), \(\ldots\), \(2^{10}\), respectively. The exponential increase in dilation results in exponential growth, with depth, of the receptive field (Yu and Koltun, 2015).
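A corresponding Keras sketch of ARCH2, under the same assumption about the output head as for ARCH1:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_arch2(nt):
    """ARCH2: two stacks of 11 dilated causal Conv1D layers (kernel 2,
    64 channels, dilations 2**0 ... 2**10), each followed by PReLU + BatchNorm."""
    inp = layers.Input(shape=(nt, 1))
    x = inp
    for _ in range(2):                          # two dilated convolutional blocks
        for d in [2 ** k for k in range(11)]:
            x = layers.Conv1D(64, kernel_size=2, padding="causal",
                              dilation_rate=d)(x)
            x = layers.PReLU(shared_axes=[1])(x)
            x = layers.BatchNormalization()(x)
    out = layers.Conv1D(1, kernel_size=1)(x)    # assumed output head
    return tf.keras.Model(inp, out)
```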
With the two proposed architectures, we can compare the specific receptive fields of both ARCH1 and ARCH2. Without pooling layer, the size of the receptive field \\(RF_{l+1}\\) on the \\(l+1\\) layer is
\\[RF_{l+1}=RF_{l}+(k_{l+1}-1)\\times s_{l+1}\\times d_{l+1},\\quad l=0, ,n\\quad, \\tag{2}\\]
\\[RF_{0}=1, \\tag{3}\\]
where \\(k_{l+1}\\) is the kernel size of the \\(l+1\\) convolutional layer. \\(s_{l+1}\\) is the stride size. \\(d_{l+1}\\) is
Figure 1: ARCH1. The first choice of deep learning architecture that directly employs a large filter on each convolutional layer (Sun and Demanet, 2020). The size and number of filters are labeled on the top of each convolutional layer.
Figure 2: (a)ARCH2. The second choice of deep learning architecture with dilated convolution. The filter length is 2 on each convolutional layer but two dilated convolutional blocks are stacked to increase the receptive field exponentially with depth. (b)The dilated causal convolutional block. Each block has 11 convolutional layers. The filter size, number of channel and dilation of each convolutional layer are labeled on the top.
the dilation on the \\(l+1\\) layer if the layer contains a dilated convolution. Otherwise, \\(d_{l+1}\\) equals 1 for regular convolutional layers.
The receptive field is 996 for ARCH1 and 4095 for ARCH2. Since a smaller kernel is used in ARCH2, the number of trainable parameter in ARCH2 (\\(p_{2}\\)=148,453,480) is much less than ARCH1 (\\(p_{1}\\)=294,602,648). Although both neural networks are able to perform low frequency extrapolation, convolution with dilation is more efficient than directly using a large convolutional kernel.
### Training and test datasets
The training and test datasets are simulated on the elastic training and test models. The Marmousi2 elastic model (Figure 3) is referred to as the test model in deep learning. This is also the true model in the subsequent elastic FWI. The training models (Figure 4) are six batches randomly extracted from the Marmousi2 model. Our previous work (Sun & Demanet, 2020) has shown that the random selection of the training models on the Marmousi model provides enough generalization ability for the neural network to extrapolate low frequencies on the full-size Marmousi model. Here in the elastic regime, each model consists of three parameters: \\(\\mathbf{v}_{p}\\), \\(\\mathbf{v}_{s}\\) and \\(\\rho\\). The size of each model is \\(500\\times 174\\) with a grid spacing of 20m, including a water layer on the top of each model with a depth of 440m.
Both training and test datasets are simulated using a 2D time domain stress-velocity P-SV finite-difference (FD) code (Virieux, 1986; Levander, 1988) with an eighth-order spatial FD operator. A Ricker wavelet with a dominant frequency of 10Hz is used as the source signal. The sampling rate and the recording time is 0.02s and 6s, respectively. It is not necessary to collect the training and test datasets using the same acquisition geometry. To collect the test dataset, 50 shots are excited evenly from 800m to 8640m in the water layer at the same depth of 40m. 400 receivers are placed from 800m to 8780m under the water layer with a depth of 460m to record \\(v_{x}\\) and \\(v_{y}\\) of the elastic wavefields. However, for the training model, there are 100 shots excited evenly from 500m to 8420m with a depth of 40m on each training model. 400 receivers are placed from 480m to 8460m.
After the forward modeling, two training datasets are collected, one with a dataset of horizontal components and one with a dataset of vertical components. The 2D elastic data on the test model is also separated into two test datasets to process each component individually. By trace-by-trace extrapolation setup, there are \\(6\\times 100\\times 400=240,000\\) training samples in each training dataset and \\(1\\times 50\\times 400=20,000\\) test samples in each test dataset.
A simple preprocessing step can be used to improve the deep learning performance. Each sample in the training and test datasets is normalized to one by dividing the raw signal by its maximum. Then all the data are scaled with a constant (for instance, 100) to stabilize the training process. The values used to normalize and scale the raw data are recorded to recover the original observed data for elastic FWI. After this process, each sample in the training and test dataset is separated into a low-frequency signal and a high-frequency signal using a smooth window in the frequency domain. Then, each time series in the high-frequency band is fed into the neural network to predict the low-frequency time series.
Figure 3: The true Marmousi2 model: (a)\\(\\mathbf{v}_{p}\\), (b)\\(\\mathbf{v}_{s}\\) and (c)\\(\\rho\\).
## 4 Numerical Examples
The numerical examples section is divided into four parts. In the first part, we train ARCH1 to extrapolate the low-frequency data of bandlimited multi-component recordings simulated on the Marmousi2 model (Figure 3). In the second part, we study the generalization ability of the proposed neural network over different physical models (acoustic or elastic wave equation). Then, we use the extrapolated low-frequencies of multi-component band-limited data to seed the frequency sweep of elastic FWI on the Marmousi2 model. In the last part, we investigate the hyperparameters of deep learning for low-frequency extrapolation using the proposed ARCH1 and ARCH2.
### Low frequency extrapolation of multicomponent data
We first extrapolate the low frequency data below 5Hz on the Marmousi2 model (Figure 3) using 5-25Hz band-limited data. Each sample in the training and test datasets is separated into a 0-5Hz low-frequency signal and a 5-25Hz high-frequency signal using a smooth window in the frequency domain. The time series in the high-frequency band is directly fed into the neural network to predict the 0-5Hz low-frequency time series. To deal with the multicomponent data, the neural network ARCH1 is trained twice: once on the training dataset of \\(v_{x}\\) and once on the training dataset of \\(v_{y}\\). Both training processes use the ADAM method with a mini-batch of 32 samples. We refer readers to Sun and Demanet (2020) for more details about training. Figures 5(a) and 5(b) show the training processes over 40 epochs to predict
Figure 4: The six training models randomly extracted from the Marmousi2 model. The size of each training model is equally \\(500\\times 174\\) with a grid spacing of 20m including a 440m depth water layer. Each training model contains three parameters: \\(\\mathbf{v}_{p}\\), \\(\\mathbf{v}_{s}\\) and \\(\\rho\\).
the low frequencies of \\(v_{x}\\) and \\(v_{y}\\), respectively. The curves of training loss decay over epochs on both the training and test datasets, which indicate that the neural network does not overfit.
Figure 6 shows the extrapolation results of both \\(v_{x}\\) and \\(v_{y}\\) where the source is located at \\(7.04\\)km. Figures 7(a) and 7(b) compare the amplitude and phase spectrum of \\(v_{y}\\) and \\(v_{x}\\) at \\(x=6.1\\)km among the band-limited recording (\\(5.0-25.0\\)Hz), the fullband recording with true and predicted low frequencies (\\(0.1-5.0\\)Hz). Despite minor prediction errors on both amplitude and phase, the neural network ARCH1 can successfully recover the low frequencies of \\(v_{x}\\) and \\(v_{y}\\) recordings with satisfactory accuracy.
### Generalization over physical models
To study the generalization ability of the proposed neural network over different physical models (acoustic wave and elastic wave), we train ARCH1 once on an acoustic training dataset and simultaneously predict the low frequencies of both \\(v_{x}\\) and \\(v_{y}\\) in the same elastic test dataset. The acoustic training dataset is simulated using the acoustic wave equation on only the P-wave velocity models in Figure 4. Figures 8(a) and 8(b) show the amplitude and phase spectrum of \\(v_{y}\\) and \\(v_{x}\\) at \\(x=6.1\\)km after training with the same procedure. Compared with the results in Figure 7, the extrapolation accuracy of the same trace in the test data is much poorer on acoustic training dataset than elastic training dataset. Even the extrapolation of the vertical component is not successful when the neural network is trained on acoustic data. This is an indicator that the neural network has difficulty generalizing to different physical models.
### Extrapolated elastic full waveform inversion
We perform extrapolated elastic FWI using 4-20Hz band-limited data on the Marmousi2 model. The lower band of the band-limited data is 4Hz. Figures 9a, 10a and 11a show the initial models of \\(v_{p}\\), \\(v_{s}\\) and \\(\\rho\\), respectively. Unlike in the previous examples, a free surface boundary condition is applied to the top of the model to simulate the realistic marine
Figure 5: The learning curves of ARCH1 trained to extrapolate the 0-5Hz low frequencies of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) from the 5-25Hz band-limited elastic recordings.
Figure 6: The extrapolation results of ARCH1 for Marmousi2 model: comparison among the (a) band-limited recordings (\\(5.0-25.0\\)Hz), (b) predicted and (c) true low-frequency recordings (\\(0.1-5.0\\)Hz) of \\(v_{y}\\) and (d) band-limited recordings (\\(5.0-25.0\\)Hz), (e) predicted and (f) true low-frequency recordings (\\(0.1-5.0\\)Hz) of \\(v_{x}\\).
Figure 8: Extrapolation results of ARCH1 _trained on acoustic data_: comparison of the amplitude and phase spectrum of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) at \\(x=6.1\\)km among the bandlimited recording (\\(5.0-25.0\\)Hz), the recording (\\(0.1-25.0\\)Hz) with true and predicted low frequencies (\\(0.1-5.0\\)Hz).
Figure 7: Extrapolation results of ARCH1 _trained on elastic data_: comparison of the amplitude and phase spectrum of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) at \\(x=6.1\\)km among the bandlimited recording (\\(5.0-25.0\\)Hz), the recording (\\(0.1-25.0\\)Hz) with true and predicted low frequencies (\\(0.1-5.0\\)Hz).
Figure 9: Comparison among (a) the initial \\(\\mathbf{v}_{p}\\) model, the inverted low-wavenumber velocity models using (b) \\(2.0-4.0Hz\\) extrapolated data and (c) \\(2.0-4.0Hz\\) true data. The inversion results in (b) and (c) are started from the initial model in (a).
Figure 10: Comparison among (a) the initial \\(\\mathbf{v}_{s}\\) model, the inverted low-wavenumber velocity models using (b) \\(2.0-4.0Hz\\) extrapolated data and (c) \\(2.0-4.0Hz\\) true data. The inversion results in (b) and (c) are started from the initial model in (a).
Figure 11: Comparison among (a) the initial \\(\\rho\\) model, the inverted low-wavenumber density models using (b) \\(2.0-4.0Hz\\) extrapolated data and (c) \\(2.0-4.0Hz\\) true data. The inversion results in (b) and (c) are started from the initial model in (a).
Figure 12: Comparison of the inverted \\(\\mathbf{v}_{p}\\) models from elastic FWI using \\(4-20Hz\\) band-limited data. (a) The resulting model starts from the original initial model. (b) The resulting model starts from the inverted low-wavenumber velocity model using \\(2.0-4.0Hz\\) extrapolated data. (c) The resulting model starts from the inverted low-wavenumber velocity model using \\(2.0-4.0Hz\\) true data.
Figure 13: Comparison of the inverted \\(\\mathbf{v}_{s}\\) models from elastic FWI using \\(4-20Hz\\) band-limited data. (a) The resulting model starts from the original initial model. (b) The resulting model starts from the inverted low-wavenumber velocity model using \\(2.0-4.0Hz\\) extrapolated data. (c) The resulting model starts from the inverted low-wavenumber velocity model using \\(2.0-4.0Hz\\) true data.
Figure 14: Comparison of the inverted \\(\\rho\\) models from elastic FWI using \\(4-20Hz\\) band-limited data. (a) The resulting model starts from the original initial model. (b) The resulting model starts from the inverted low-wavenumber density model using \\(2.0-4.0Hz\\) extrapolated data. (c) The resulting model starts from the inverted low-wavenumber density model using \\(2.0-4.0Hz\\) true data.
exploration environment. The free surface condition damages the low frequency data and thus the energy in the low frequency band 0-2Hz is close to zero in the simulated full-band data. This brings a new challenge to the low frequency extrapolation and introduces prediction errors to the extrapolated data. For this reason, we start elastic FWI using 2-4Hz extrapolated data before exploring the band-limited data.
Starting from the crude initial model in Figure 9(a), Figures 9(b) and 9(c) show the resulting P-wave velocity models after 30 iterations using extrapolated and true 2-4Hz low-frequency data, respectively. The inverted S-wave velocity models using extrapolated and true low frequencies are shown in Figures 10(b) and 10(c). Also, Figures 11(b) and 11(c) compare the inverted density models using the extrapolated and true 2-4Hz low frequencies. The inverted low-wavenumber models of \\(v_{p}\\), \\(v_{s}\\) and \\(\\rho\\) using extrapolated data are roughly the same as those using true data. However, the inversion of density model is not successful since 2-4Hz data are relatively high frequencies for the inversion of density model.
Then the inversion is continued with the 4-20Hz band-limited data. We utilize a multi-scale method (Bunks et al., 1995) and sequentially explore the 4-6Hz, 4-10Hz and 4-20Hz band-limited data in the elastic FWI. In each frequency band, the number of iterations is 30, 30 and 20, respectively. Figures 12 and 13 show the resulting \\(v_{p}\\) and \\(v_{s}\\) models started from different low wavenumber models. The inversion results of \\(v_{p}\\) and \\(v_{s}\\) started from 2-4Hz extrapolated data are very close to the results started from 2-4Hz true data. Conversely, elastic FWI directly starting from the crude initial models using the band-limited data shows large errors.
Figure 14 shows the resulting density models using the 4-20Hz band-limited data but started from different models in Figure 11. Since the starting frequency band (2-4Hz) is relatively high for the inversion of density on the crude initial model (Figure 11(a)), the inverted models using 2-4Hz data (Figures 11(b) and (c)) only show high-wavenumber structure of the density model. With band-limited data involved in the inversion, the inverted density models (Figure 14) resemble migration results but show the density perturbation (Mora, 1987). For a successful inversion of the density model, a much lower starting frequency band is required to recover the low-wavenumber structures.
### Investigation of hyperparameters
The performance of deep learning is very sensitive to the hyperparameters of training. However, choosing appropriate hyperparameters requires expertise and extensive trial and error. Here we discuss the influence of several hyperparameters on the training process, including mini-batch size, learning rate, and a layer-specific hyperparameter, i.e., dropout. We also compare the performance of ARCH1 and ARCH2 in terms of training cost and prediction accuracy. In each case, we compare the model performance of the new hyperparameter setting with the results predicted by the neural network ARCH1 in Section 4.3. The neural network is trained using a mini-batch of 32, learning rate of \\(10^{-3}\\). The probability of dropout is 50%.
The first hyperparameter is the mini-batch size. A batch is a small subset of training data randomly selected by the optimizer to calculate the gradient. Choosing a suitable mini-batch size is a trade-off between training speed and test accuracy. Fewer samples in a batch slow down the training process but a large batch increases the instability of the neural network and lead to a poor performance. Figures 15(a) and 15(b) show the learning curvesof ARCH1 when processing the \\(v_{y}\\) and \\(v_{x}\\) components using a batch size of 16, 32, 64 and 128. A mini-batch of 32 gives more reasonable decrease of the training loss among others. Therefore, we choose it as the mini-batch size in our experiments.
The second essential hyperparameter is the learning rate of the optimizer. Figure 16 compares the learning curves when the learning rates equal to \\(10^{-2}\\), \\(10^{-3}\\) and \\(10^{-4}\\), respectively. If the learning rate is small, training is more reliable, but it will take significant time because steps towards the minimum of the loss function are tiny. The model may also miss the important patterns in the training data; Conversely, if the learning rate is high, training may not converge. Weight changes are so large that the optimizer overshoots the minimum and makes the loss worse. We observe that a learning rate of \\(10^{-3}\\) seems to be an optimal learning rate and can quickly and stably find the minimum loss.
Moreover, we study a model-related hyper-parameter, i.e., dropout in ARCH1. Dropout prevents neural networks from overfitting by randomly dropping neurons during training. Each neuron is retained with a fixed probability \\(p\\) independent of other neurons. \\(p\\) can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks (Srivastava et al., 2014). Figure 17 compares the learning curves of ARCH1 with a dropout rate of 0% (no dropout), 20% and 50%, respectively. It seems that for this training dataset, a case without dropout layer does not hurt the performance of ARCH1. All of the three cases give reasonably right learning curves on both training and test datasets.
Finally, we compare the performance of ARCH1 and ARCH2 using the same training dataset and hyperparameter. Each network is trained twice with the training dataset of \\(v_{x}\\) and the training dataset of \\(v_{y}\\). According to the learning curves in Figure 17, the training of ARCH2 is much more stable than that of ARCH1. ARCH2 also requires less training time, due to the less trainable parameters compared with ARCH1. Figure 18 shows the extrapolated elastic FWI results started from the 2-4Hz extrapolated low frequency data using ARCH2. According to the comparison of the inverted models started from the extrapolated data using ARCH1 and ARCH2, both neural networks are able to provide sufficient accuracy for the inversion of \\(v_{p}\\) and \\(v_{s}\\).
Figure 16: The learning curves of ARCH1 trained using different learning rates to extrapolate the 0-4Hz low frequencies of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) from the 4-25Hz band-limited elastic recordings.
Figure 15: The learning curves of ARCH1 trained using different batch sizes to extrapolate the 0-4Hz low frequencies of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) from the 4-25Hz band-limited elastic recordings.
## 5 Discussions and Limitations
Recovering the density using FWI is very challenging, independently of the bandwidth extension question, for the following reasons. (1) Cross-talk happens using short-offset data since P-wave velocity and density have the same radiation patterns at short apertures. (2) The variations in density are smaller than those in velocities. (3) Inversion of density requires ultra-low frequencies. Although elastic FWI does not always allow to correctly estimate density, it stands a better chance of properly reconstructing velocities, with either extrapolated or true low frequencies.
In extrapolated FWI, the choice of starting frequency is a trade-off between the accuracy of extrapolated low frequency data and the lowest frequency to mitigate the cycle-skipping problem. We start elastic FWI with 2-4Hz extrapolated low frequency data due to the insufficient extrapolation accuracy in the near-zero frequency range. The accuracy of the 2-4Hz extrapolated low frequency data is sufficient for elastic FWI of P-wave and S-wave velocities when starting from 4Hz band-limited data. However, the inversion of density model still suffers from the cycle-skipping problem and lack of the low-wavenumber structure.
The numerical example shows that the neural network cannot meaningfully generalize from the acoustic training dataset to the elastic test dataset. In addition to the wave propagation driven by different physics, another factor that makes the generalization fail could be numerical modeling. The synthetic elastic training and test datasets are simulated by solving the stress-velocity formulation of the wave equation using standard staggered grid with an eighth order FD operator. However, the acoustic training dataset is simulated by
Figure 17: The learning curves of ARCH1 and ARCH2 trained to extrapolate the 0-4Hz low frequencies of (a) \\(v_{y}\\) and (b) \\(v_{x}\\) from the 4-25Hz band-limited elastic recordings.
Figure 18: Comparison of the inverted (a) \\(\\mathbf{v}_{p}\\), (b) \\(\\mathbf{v}_{s}\\) and (c) \\(\\rho\\) models from extrapolated elastic FWI. The resulting models start from the inverted low-wavenumber models using \\(2.0-4.0Hz\\) data extrapolated using ARCH2.
solving the stress-displacement formulation using a sixth order FD operator.
The source signal is assumed to be known for extrapolated elastic FWI in this paper. However, for field data, the source signal may vary shot by shot. One solution could be to retrieve the source wavelet of the field dataset firstly, and then artificially boost the low-frequency energy after denoising. The new source signal can be used to synthesize the training dataset for low-frequency extrapolation. It can also be the source wavelet in the following elastic FWI using the extrapolated low-frequency data. In this way, the uncertainty of the source can be controlled to some extent.
We do not provide the numerical results of other factors that affect the performance of the deep learning models. For example, regularization of training loss, number of iterations, parameters of neural network, number of training samples and even inverse crime. Deep learning models contain many hyper-parameters and finding the best configuration for these parameters in a high dimensional space is challenging.
Finally, neural networks are trained using a stochastic learning algorithm. This means that the same model trained on the same dataset may result in a different performance. The specific results may vary, but the general trend should be the same, as reported in Section 4.4.
## 6 Conclusions
To relieve the dependency of elastic FWI on starting models, low-frequency extrapolation of multi-component seismic recordings is implemented to computationally recover the missing low frequencies from band-limited elastic data. The deep learning model is designed with a large receptive field in two different ways. One directly uses a large filter on each convolutional layer, the other utilizes dilated convolution to increase the receptive field exponentially with depth. By training the neural network twice, once with a dataset of horizontal components and once with a dataset of vertical components, we can extrapolate the low frequencies of multi-component band-limited recordings separately. The extrapolated 0-5Hz low frequencies match well with the true low-frequency data on the Marmousi2 model. Elastic FWI using 2-4Hz extrapolated data shows similar results to the true low frequencies. The accuracy of the extrapolated low frequencies is enough to provide low-wavenumber starting models for elastic FWI of P-wave and S-wave velocities on data band-limited above 4Hz.
The generalization ability of the neural network over different physical models is studied in this paper. The neural network trained on purely acoustic data shows larger prediction error on elastic test dataset compared to the neural network trained on elastic data. Therefore, collecting more realistic elastic training dataset will help to process the field data with strong elastic effects.
## References
* Abadi et al. (2015) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mane, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viegas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., & Zheng, X., 2015. TensorFlow: Large-scale machine learning on heterogeneous systems, Software available from tensorflow.org.
* Aminzadeh et al. (1997) Aminzadeh, F., Jean, B., & Kunz, T., 1997. SEG/EAGE 3-D Salt and Overthrust Models. SEG/EAGE 3-D Modeling Series, No. 1: Distribution CD of Salt and Overthrust models, SEG book series.
* Araya-Polo et al. (2018) Araya-Polo, M., Jennings, J., Adler, A., & Dahlke, T., 2018. Deep-learning tomography, _The Leading Edge_, **37**(1), 58-66.
* Borisov et al. (2020) Borisov, D., Gao, F., Williamson, P., & Tromp, J., 2020. Application of 2D full-waveform inversion on exploration land data, _Geophysics_, **85**(2), R75-R86.
* Brenders et al. (2018) Brenders, A., Dellinger, J., Kanu, C., Li, Q., & Michell, S., 2018. The wolfspar(r) field trial: Results from a low-frequency seismic survey designed for FWI, in _in 88th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 1083-1087.
* Brossier et al. (2009) Brossier, R., Operto, S., & Virieux, J., 2009. Seismic imaging of complex onshore structures by 2D elastic frequency-domain full-waveform inversion, _Geophysics_, **74**(6), WCC105-WCC118.
* Brossier et al. (2010) Brossier, R., Operto, S., & Virieux, J., 2010. Which data residual norm for robust elastic frequency-domain full waveform inversion?, _Geophysics_, **75**(3), R37-R46.
* Bunks et al. (1995) Bunks, C., Saleck, F. M., Zaleski, S., & Chavent, G., 1995. Multiscale seismic waveform inversion, _Geophysics_, **60**(5), 1457-1473.
* Choi et al. (2008) Choi, Y., Min, D.-J., & Shin, C., 2008. Frequency-domain elastic full waveform inversion using the new pseudo-hessian matrix: Experience of elastic Marmousi-2 synthetic data, _Bulletin of the Seismological Society of America_, **98**(5), 2402-2415.
* Chollet et al. (2015) Chollet, F. et al., 2015. Keras.
* Crase et al. (1990) Crase, E., Pica, A., Noble, M., McDonald, J., & Tarantola, A., 1990. Robust elastic nonlinear waveform inversion: Application to real data, _Geophysics_, **55**(5), 527-538.
* Demanet & Nguyen (2015) Demanet, L. & Nguyen, N., 2015. The recoverability limit for superresolution via sparsity, _arXiv preprint arXiv:1502.01385_.
* Demanet & Townsend (2019) Demanet, L. & Townsend, A., 2019. Stable extrapolation of analytic functions, _Foundations of Computational Mathematics_, **19**(2), 297-331.
* Demanet et al. (2016)Fang, Z., Fang, H., & Demanet, L., 2020. Deep generator priors for bayesian seismic inversion, _Advances in Geophysics_, **61**, 179-216.
* Herrmann et al. (2019) Herrmann, F. J., Siahkoohi, A., & Rizzuti, G., 2019. Learned imaging with constraints and uncertainty quantification, _arXiv preprint arXiv:1909.06473_.
* Hewett & Demanet (2013) Hewett, R. & Demanet, L., 2013. the psit team, 2013, _PySIT: Python seismic imaging toolbox v0.5_.
* Hu et al. (2019) Hu, W., Jin, Y., Wu, X., & Chen, J., 2019. Progressive transfer learning for low frequency data prediction in full waveform inversion, _arXiv preprint arXiv:1912.09944_.
* Jeong et al. (2012) Jeong, W., Lee, H.-Y., & Min, D.-J., 2012. Full waveform inversion strategy for density in the frequency domain, _Geophysical Journal International_, **188**(3), 1221-1242.
* Jin et al. (2018) Jin, Y., Hu, W., Wu, X., Chen, J., et al., 2018. Learn low wavenumber information in FWI via deep inception based convolutional networks, in _in 88th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 2091-2095.
* Kazei et al. (2020) Kazei, V., Ovcharenko, O., Plotnitskii, P., Peter, D., Alkhalifah, T., Silvestrov, I., Bakulin, A., & Zwartjes, P., 2020. Elastic near-surface model estimation from full waveforms by deep learning, in _in 90th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 3872-3876.
* Kohn et al. (2012) Kohn, D., De Nil, D., Kurzmann, A., Przebindowska, A., & Bohlen, T., 2012. On the influence of model parametrization in elastic full waveform tomography, _Geophysical Journal International_, **191**(1), 325-345.
* Levander (1988) Levander, A. R., 1988. Fourth-order finite-difference P-SV seismograms, _Geophysics_, **53**(11), 1425-1436.
* Li et al. (2019) Li, D., Gao, F., & Williamson, P., 2019. A deep learning approach for acoustic FWI with elastic data, in _in 89th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 2303-2307.
* Li & Demanet (2017) Li, Y. & Demanet, L., 2017. Extrapolated full-waveform inversion: An image-space approach, in _in 87th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 1682-1686.
* Li & Demanet (2015) Li, Y. E. & Demanet, L., 2015. Phase and amplitude tracking for seismic event separation, _Geophysics_, **80**(6), WD59-WD72.
* Li & Demanet (2016) Li, Y. E. & Demanet, L., 2016. Full-waveform inversion with extrapolated low-frequency data, _Geophysics_, **81**(6), R339-R348.
* Mahrooqi et al. (2012) Mahrooqi, S., Rawahi, S., Yarubi, S., Abri, S., Yahyai, A., Jahdhami, M., Hunt, K., & Shorter, J., 2012. Land seismic low frequencies: acquisition, processing and full wave inversion of 1.5-86 Hz, in _in 82nd Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 1-5.
* Mahrooqi et al. (2013)Marjanovic, M., Plessix, R.-E., Stopin, A., & Singh, S. C., 2018. Elastic versus acoustic 3-D full waveform inversion at the east pacific rise 9\\({}^{\\circ}\\) 50' N, _Geophysical Journal International_, **216**(3), 1497-1506.
* Martin et al. (2006) Martin, G. S., Wiley, R., & Marfurt, K. J., 2006. Marmousi2: An elastic upgrade for marmousi, _The Leading Edge_, **25**(2), 156-166.
* Mora (1987) Mora, P., 1987. Nonlinear two-dimensional elastic inversion of multioffset seismic data, _Geophysics_, **52**(9), 1211-1228.
* Moseley et al. (2018) Moseley, B., Markham, A., & Nissen-Meyer, T., 2018. Fast approximate simulation of seismic waves with deep learning, _arXiv preprint arXiv:1807.06873_.
* Mosser et al. (2020) Mosser, L., Dubrule, O., & Blunt, M. J., 2020. Stochastic seismic waveform inversion using generative adversarial networks as a geological prior, _Mathematical Geosciences_, **52**(1), 53-79.
* Nocedal & Wright (2006) Nocedal, J. & Wright, S., 2006. _Numerical optimization_, Springer Science & Business Media.
* Oord et al. (2016) Oord, A. v. d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., & Kavukcuoglu, K., 2016. Wavenet: A generative model for raw audio, _arXiv preprint arXiv:1609.03499_.
* Ovcharenko et al. (2018) Ovcharenko, O., Kazei, V., Peter, D., Zhang, X., & Alkhalifah, T., 2018. Low-frequency data extrapolation using a feed-forward ANN, in _80th EAGE Conference and Exhibition 2018_, European Association of Geoscientists & Engineers, 1-5.
* Ovcharenko et al. (2019) Ovcharenko, O., Kazei, V., Kalita, M., Peter, D., & Alkhalifah, T., 2019. Deep learning for low-frequency extrapolation from multioffset seismic data, _Geophysics_, **84**(6), R989-R1001.
* Plessix et al. (2013) Plessix, R., Milcik, P., Rynja, H., Stopin, A., Matson, K., & Abri, S., 2013. Multiparameter full-waveform inversion: Marine and land examples, _The Leading Edge_, **32**(9), 1030-1038.
* Plessix et al. (2012) Plessix, R.-E., Baeten, G., de Maag, J. W., ten Kroode, F., & Rujie, Z., 2012. Full waveform inversion and distance separated simultaneous sweeping: a study with a land seismic data set, _Geophysical Prospecting_, **60**(4), 733-747.
* Raknes et al. (2015) Raknes, E. B., Arntsen, B., & Weibull, W., 2015. Three-dimensional elastic full waveform inversion using seismic data from the sleipner area, _Geophysical Journal International_, **202**(3), 1877-1894.
* Sears et al. (2010) Sears, T. J., Barton, P. J., & Singh, S. C., 2010. Elastic full waveform inversion of multi-component ocean-bottom cable seismic data: Application to Alba Field, UK North Sea, _Geophysics_, **75**(6), R109-R119.
* Siahkoohi et al. (2019) Siahkoohi, A., Louboutin, M., & Herrmann, F. J., 2019. The importance of transfer learning in seismic modeling and imaging, _Geophysics_, **84**(6), A47-A52.
* Siahkoohi et al. (2019)Sirgue, L., Barkved, O., Van Gestel, J., Askim, O., & Kommedal, J., 2009. 3D waveform inversion on Valhall wide-azimuth OBC, in _71st EAGE Conference and Exhibition incorporating SPE EUROPEC 2009_, European Association of Geoscientists & Engineers, pp. cp-127.
* Srivastava et al. (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting, _The journal of machine learning research_, **15**(1), 1929-1958.
* Stopin et al. (2014) Stopin, A., Plessix, R.-E., & Al Abri, S., 2014. Multiparameter waveform inversion of a large wide-azimuth low-frequency land data set in Oman, _Geophysics_, **79**(3), WA69-WA77.
* Sun & Demanet (2018) Sun, H. & Demanet, L., 2018. Low frequency extrapolation with deep learning, in _in 88th Annual SEG meeting_, SEG Technical Program Expanded Abstracts, 2011-2015.
* Sun & Demanet (2020) Sun, H. & Demanet, L., 2020. Extrapolated full-waveform inversion with deep learning, _Geophysics_, **85**(3), R275-R288.
* Tarantola (1986) Tarantola, A., 1986. A strategy for nonlinear elastic inversion of seismic reflection data, _Geophysics_, **51**(10), 1893-1903.
* Vigh et al. (2014) Vigh, D., Jiao, K., Watts, D., & Sun, D., 2014. Elastic full-waveform inversion application using multicomponent measurements of seismic data collection, _Geophysics_, **79**(2), R63-R77.
* Virieux (1986) Virieux, J., 1986. P-SV wave propagation in heterogeneous media; velocity-stress finite-difference method, _Geophysics_, **51**(4), 889-901.
* Wu & Lin (2019) Wu, Y. & Lin, Y., 2019. InversionNet: An efficient and accurate data-driven full waveform inversion, _IEEE Transactions on Computational Imaging_, **6**, 419-433.
* Yang & Ma (2019) Yang, F. & Ma, J., 2019. Deep-learning inversion: A next-generation seismic velocity model building method, _Geophysics_, **84**(4), R583-R599.
* Yu & Koltun (2015) Yu, F. & Koltun, V., 2015. Multi-scale context aggregation by dilated convolutions, _arXiv preprint arXiv:1511.07122_.
* Zhang & Lin (2020) Zhang, Z. & Lin, Y., 2020. Data-driven seismic waveform inversion: A study on the robustness and generalization, _IEEE Transactions on Geoscience and Remote Sensing_, **58**(10), 6900-6913 | Full waveform inversion (FWI) strongly depends on an accurate starting model to succeed. This is particularly true in the elastic regime: The cycle-skipping phenomenon is more severe in elastic FWI compared to acoustic FWI, due to the short S-wave wavelength. In this paper, we extend our work on extrapolated FWI (EFWI) by proposing to synthesize the low frequencies of multi-component elastic seismic records, and use those \"artificial\" low frequencies to seed the frequency sweep of elastic FWI. Our solution involves deep learning: we separately train the same convolutional neural network (CNN) on two training datasets, one with vertical components and one with horizontal components of particle velocities, to extrapolate the low frequencies of elastic data. The architecture of this CNN is designed with a large receptive field, by either large convolutional kernels or dilated convolution. Numerical examples on the Marmousi2 model show that the 2-4Hz low frequency data extrapolated from band-limited data above 4Hz provide good starting models for elastic FWI of P-wave and S-wave velocities. Additionally, we study the generalization ability of the proposed neural network over different physical models. For elastic test data, collecting the training dataset by elastic simulation shows better extrapolation accuracy than acoustic simulation, i.e., a smaller generalization gap. | Condense the content of the following passage. | 271 |
Tisheng Zhang, Linfu Wei, Hailiang Tang, Liqiang Wang, Man Yuan, and Xiaoji Niu
This research is funded by the National Natural Science Foundation of China (No.42374034, No.41974024) and the National Key Research and Development Program of China (No. 2020YFB0505803). (_Corresponding authors: Hailiang Tang; Xiaqi Niu_.)
Tisheng Zhang, Linfu Wei, Hailiang Tang, Liqiang Wang, Man Yuan and Xiaoji Niu are with the GNSS Research Center, Wuhan University, Wuhan 430079, China. (e-mail: [email protected])
## I Introduction
Continuous, reliable, and accurate positioning in complex environments is crucial for autonomous vehicles and mobile robots. While LiDAR-inertial odometry (LIO) based on line and plane features has demonstrated excellent performance in structured environments, it encounters significant challenges in scenes abundant with unstructured features, such as tree-rich campuses and parks. Fig. 1 illustrates some typical campus and park scenes. In these scenarios, roads are surrounded by trees, resulting in a high proportion of unstructured point clouds (mainly tree-leaf point clouds) in the LiDAR scans. These unstructured point clouds may significantly degrade traditional positioning methods that rely on geometric features: plane feature points extracted from unstructured point clouds such as tree leaves lack sufficient accuracy, leading to reduced positioning accuracy.
Normal distribution transform (NDT)-based methods [1, 2] have been presented to address these problems. These methods rely on statistical features of point clouds, such as the mean and covariance, rather than geometric features. They have demonstrated commendable positioning accuracy in unstructured environments. However, the matching accuracy and efficiency of NDT-based methods are intrinsically linked to the size of the grid into which they are divided. A smaller grid size yields higher matching accuracy but compromises matching efficiency, and vice versa. Adaptive voxel mapping [3] is a similar method. It divides point clouds into voxel grids, each containing an octree. The octree nodes contain the distribution information of the point clouds in the node. If the point clouds in the node do not conform to a planar distribution, the node is divided into eight sub-nodes, and the distribution of the point clouds is further determined in the sub-nodes, achieving adaptive planar fitting.
Additional methods preprocess point clouds to mitigate the impact of unstructured point clouds. For instance, LeGO-LOAM [4] segregates point clouds into ground and non-ground points, further segmenting the non-ground points and filtering out small non-ground point cloud clusters, which effectively reduces the influence of unstructured point clouds. The segmentation method employed is based on traditional image processing methods [5] for range images. Recently, many networks have been developed for semantic segmentation of point clouds [6]. These models can be categorized into range image-based models [7, 8], voxel-based models [9, 10], and point-based models [11, 12], according to their input representation. As research progresses, point cloud semantic segmentation accuracy has improved, effectively distinguishing between different types of point clouds, such as ground, buildings, tree trunks, tree leaves, and vehicles. This semantic information can be used to enhance positioning. Several studies have applied semantic segmentation to LiDAR positioning. SuMa++ [13] uses semantic segmentation results to weight feature points, reducing the impact of dynamic objects. PSF-LO [14] performs geometric modeling on several types of semantic segmentation
Fig. 1: Typical scenes abundant with unstructured features.
results, modeling roads, buildings, and traffic signs as planar features and poles as line features. SLOAM [15] is designed for tree-rich scenes, modeling tree trunks as cylindrical features, but it does not consider the curvature of tree trunks and is therefore not suitable for environments with curved trees. SALOAM [16] uses semantic information for front-end odometry and loop detection, reducing incorrect matches using semantic label constraints. However, the research mentioned above, which utilizes semantic information to enhance positioning, is mainly designed for spinning LiDAR. It involves projecting the point clouds into a range image for semantic segmentation, which is unsuitable for solid-state LiDAR.
Recently, low-cost solid-state LiDARs have been widely used in autonomous robots [17]. The solid-state LiDAR has an irregular scanning pattern, completely different from the spinning LiDAR. Consequently, as the scanning time increases, the point-cloud coverage gradually increases, which is known as the integral property of solid-state LiDAR. Point clouds scanned by solid-state LiDAR are not arranged in a regular array and cannot be directly converted into a range image for semantic segmentation. Hence, a point-based semantic segmentation model would be more appropriate than a range image-based model. However, the point cloud obtained in a single frame (generally 0.1 seconds) of solid-state LiDAR is relatively sparse and has low point-cloud coverage [18]. This sparsity often results in missed point clouds from many objects, decreasing semantic segmentation accuracy. While the point-cloud coverage can be increased by extending the scanning cycle of a single-frame point cloud when the carrier is stationary, motion distortion occurs in the point cloud when the carrier is in motion. The longer the scanning cycle, the more notable the effect of motion distortion, which is detrimental to semantic segmentation. With the development of multi-source fusion technology, the IMU has become standard equipment for LiDAR positioning and has been used to compensate for motion distortion [19]. Furthermore, IMU information is tightly coupled with LiDAR information to improve positioning accuracy and robustness [20, 21, 22]. Therefore, IMU information can be used to compensate for motion distortion, improving the point-cloud coverage and, consequently, the semantic segmentation accuracy.
We propose a semantics-enhanced solid-state-LiDAR-inertial odometry (SE-LIO) for scenes abundant with unstructured features. The proposed method leverages an inertial navigation system (INS) pose to merge and compensate for multiple LiDAR frames. A deep-learning model is employed for semantic segmentation. The segmentation results are then utilized to remove unstructured point clouds and incorporate cylindrical features into state estimation, enhancing positioning accuracy. The primary contributions are as follows:
\\(\\bullet\\) To address the low semantic segmentation accuracy caused by sparse point clouds of the solid-state LiDAR, we design an INS-enhanced semantic segmentation method, which leverages the INS pose to merge and compensate for multiple LiDAR frames, thereby improving the point-cloud coverage and semantic segmentation accuracy.
\\(\\bullet\\) To fully leverage pole-like semantic information, we propose an adaptive piecewise cylinder fitting method, effectively accommodating environments with curved trees, thereby enhancing the system's environmental adaptability and positioning accuracy.
\\(\\bullet\\) Real-world experiments were conducted in complex campus and park environments to verify the accuracy and robustness of the proposed method. Several alation experiments were carried out to fully evaluate the impacts of the factors that may influence the accuracy of the proposed SE-LIO.
The remainder of this paper is organized as follows. We give an overview of the system pipeline in Section II. The proposed method is presented in Section III. The experiments and results for quantitative evaluation are discussed in Section IV. Finally, we conclude the paper.
## II System Overview
The system workflow is illustrated in Fig. 2. The point clouds and IMU data are first accumulated until a certain threshold is reached. The current pose is then propagated forward using the IMU mechanization, and the point clouds undergo motion compensation. Following this, the motion-compensated point clouds are subjected to semantic segmentation, dividing them into different types: ground, pole-like, building, tree leaves, and dynamic objects. The unstructured point clouds, primarily tree leaves and dynamic objects, are removed to mitigate their impacts on positioning accuracy. Subsequently, the pole-like semantic information is leveraged to enhance positioning accuracy, including adaptive piecewise cylinder fitting of pole-like point clouds and data association. Finally, the iterated error-state Kalman filter (IESKF) is employed for state estimation. Cylindrical features and plane features are used to construct point-to-cylinder and point-to-plane constraints. These constraints are tightly coupled with the prior constraints provided by INS to obtain the maximum a posteriori estimation.
## III Methodology
This section introduces the methodology of SE-LIO. It begins with the point cloud preprocessing phase, followed by the
Fig. 2: System overview of the proposed SE-LIO.
semantic enhancement phase for pole-like point clouds, focusing on adaptive piecewise cylinder fitting and data association methods. Finally, it presents the state estimation algorithm.
### _Point-cloud Preprocessing_
The adopted LiDAR is a solid-state non-repetitive scanning LiDAR. Unlike traditional spinning LiDAR, the solid-state LiDAR has an irregular scanning pattern, and the point clouds of a single frame are relatively sparse. Consequently, range image-based semantic segmentation methods are unsuitable. Instead, we employ a point-based semantic segmentation method, RandLA-Net [12]. Given the relative sparsity of the point cloud from a single frame, we leverage the INS pose to merge and compensate for multiple LiDAR frames, thereby improving the point-cloud coverage and semantic segmentation accuracy.
Specifically, a fixed-length buffer is established to cache point clouds and IMU measurements. When the number of point clouds in the buffer reaches a predetermined threshold, the point clouds are projected to the end of the last frame in the buffer using the INS pose, thereby generating the motion-compensated point clouds.
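As an illustration of this step, the following minimal Python/NumPy sketch projects every buffered point into the sensor frame at the end of the last scan. The pose-query callable `ins_pose_at` is a hypothetical interface standing in for interpolation of the IMU mechanization results.

```python
import numpy as np

def compensate_to_anchor(points, stamps, ins_pose_at, anchor_time):
    """Project buffered points into the sensor frame at anchor_time.

    points:      (N, 3) points, each in the sensor frame at its own scan time.
    stamps:      (N,) per-point timestamps.
    ins_pose_at: callable t -> (R, t), world pose of the sensor at time t
                 (hypothetical; obtained by interpolating the INS trajectory).
    """
    R_a, t_a = ins_pose_at(anchor_time)    # pose at the end of the last frame
    out = np.empty_like(points)
    for i, (p, ts) in enumerate(zip(points, stamps)):
        R_i, t_i = ins_pose_at(ts)         # pose when this point was measured
        p_w = R_i @ p + t_i                # lift the point into the world frame
        out[i] = R_a.T @ (p_w - t_a)       # re-express it in the anchor frame
    return out
```

Merging several such compensated frames densifies the point cloud before it is passed to the segmentation network.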
The motion-compensated point clouds are fed into the RandLA-Net [12] model for semantic segmentation. However, the pre-trained RandLA-Net is only suitable for spinning LiDAR, not solid-state LiDAR. Therefore, we performed transfer learning to adapt RandLA-Net to solid-state LiDAR. The segmentation result is depicted in Fig. 3(a), where the purple, green, and brown points represent ground, tree leaves, and pole-like point clouds, respectively. Owing to the substantial influence of unstructured point clouds on positioning accuracy, the tree-leaf and dynamic-object point clouds are removed, as depicted in Fig. 3(b).
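Once per-point labels are available, removing the unstructured classes reduces to a masking operation. The label IDs in the sketch below are placeholders; the actual values depend on the label map used during transfer learning.

```python
import numpy as np

# Hypothetical label IDs; the real values depend on the training label map.
GROUND, BUILDING, POLE, LEAVES, DYNAMIC = 0, 1, 2, 3, 4

def split_by_semantics(points, labels):
    """Drop unstructured classes and isolate pole-like points for cylinder fitting."""
    keep = ~np.isin(labels, [LEAVES, DYNAMIC])
    return points[keep], points[labels == POLE]
```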
### _Semantic Enhancement for Pole-like Point Clouds_
After removing unstructured point clouds, the remaining point clouds predominantly consist of ground, building, and pole-like point clouds, such as tree trunks. We employ the cylinder model to fit these pole-like point clouds. Considering the curvature of pole-like objects in the environment, a piecewise fitting method is utilized to adaptively fit these curved pole-like point clouds. The specific method is described in detail below. First, the method for fitting a single cylinder is introduced, followed by the segmented fitting method for fitting a single pole-like object. Subsequently, the method for updating the cylinder map formed by multiple pole-like objects is introduced. Finally, the method for data association is introduced.
#### III-B1 Single Cylinder Fitting
A cylinder can typically be represented by a minimum of five parameters, with four parameters delineating the axis and one parameter indicating the radius. However, this minimal parameter representation lacks intuitiveness. Therefore, we select a seven-parameter representation for the cylinder model, as shown in the following equation.
\\[\\mathbf{c}=(\\mathbf{u}^{T},\\mathbf{q}^{T},r)^{T}, \\tag{1}\\]
where \\(\\mathbf{u}\\) represents the unit vector in the direction of the axis, \\(\\mathbf{q}\\) represents a point on the axis, and \\(r\\) represents the radius of the cylinder. Given a cluster of pole-like point clouds obtained by semantic segmentation, let \\(P=\\{\\mathbf{p}\\}\\) represent this cluster. The procedure of fitting a cluster of pole-like point clouds to a cylinder can be summarized in the following steps:
_Step 1:_ Estimate the axis direction \\(\\mathbf{u}\\). It is achieved by solving for the maximum eigenvalue of the point cloud covariance. The corresponding eigenvector is the estimated axis vector \\(\\mathbf{u}\\).
_Step 2:_ Construct the rotation matrix \\(\\mathbf{R}\\) using the axis \\(\\mathbf{u}\\) to transform the point cloud \\(P\\), _i.e._, \\(\\mathbf{p}^{\\prime}=\\mathbf{R}\\mathbf{p}\\), such that the distribution of the transformed point cloud in the z-axis is maximized. The point is then projected onto the x-O-y plane, converting the three-dimensional cylinder fitting problem into a two-dimensional circle fitting problem.
_Step 3:_ Employ the random sample consensus (RANSAC) algorithm [23] to calculate the circle parameters. Within RANSAC, the least squares method fits the circle to minimize the sum of the squared distances between the given points and the circle. The circle equation is expressed as: \((x-x_{0})^{2}+(y-y_{0})^{2}=r^{2}\), where \(x_{0}\) and \(y_{0}\) represent the two-dimensional coordinates of the circle center, and \(r\) represents the radius.
_Step 4:_ Convert the circle parameters to cylinder parameters. The rotation matrix \\(\\mathbf{R}\\) is used to convert the circle center coordinates \\((x_{0},y_{0})\\) to the point \\(\\mathbf{q}\\) on the cylinder axis, and the radius \\(r\\) is used as the cylinder radius. Finally, the parameters \\(\\mathbf{c}=(\\mathbf{u}^{T},\\mathbf{q}^{T},r)^{T}\\) of the cylinder are obtained.
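Steps 1-4 can be condensed into the sketch below. For brevity, the RANSAC wrapper of Step 3 is replaced here by a plain algebraic least-squares circle fit, so this is an illustration of the procedure rather than the exact implementation.

```python
import numpy as np

def axis_rotation(u):
    """Rotation matrix R whose third row is u, so that R @ p has z along the axis."""
    a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b1 = np.cross(u, a)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(u, b1)
    return np.stack([b1, b2, u])

def fit_circle(xy):
    """Algebraic least-squares circle fit: x^2 + y^2 = 2ax + 2by + c."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    (a, b, c), *_ = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)
    return np.array([a, b]), np.sqrt(c + a * a + b * b)

def fit_cylinder(P):
    """Fit c = (u, q, r) to one pole-like cluster P of shape (N, 3), per Steps 1-4."""
    centered = P - P.mean(axis=0)
    w, V = np.linalg.eigh(centered.T @ centered)   # Step 1: principal direction
    u = V[:, -1]                                   # eigenvector of the largest eigenvalue
    R = axis_rotation(u)                           # Step 2: align the axis with z
    Pr = P @ R.T
    center2d, r = fit_circle(Pr[:, :2])            # Step 3 (RANSAC omitted here)
    q = R.T @ np.array([center2d[0], center2d[1], Pr[:, 2].mean()])  # Step 4
    return u, q, r
```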
#### III-B2 Adaptive Piecewise Cylinder Fitting for Pole-like Point Clouds
Given the diverse curvature of pole-like objects, a single-cylinder model may not provide an optimal fit. Consequently, we propose an adaptive piecewise fitting approach for pole-like objects, as shown in Algorithm 1. A binary tree structure manages the cylinders associated with the same pole-like object, facilitating piecewise fitting.
In Algorithm 1, the pole-like point cloud is first fitted to a cylinder. If the fitting residual is less than the threshold, the pole-like object is deemed to have been fitted. Otherwise, the pole-like point cloud is divided into upper and lower parts, each fitted to a cylinder. This process is repeated until the fitting residual is less than the threshold or the maximum depth of the binary tree is reached.
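The recursion of Algorithm 1 can be sketched as follows, reusing `fit_cylinder` from the sketch above. The residual threshold, minimum cluster size, and maximum depth are illustrative values, not the paper's tuned parameters.

```python
import numpy as np

def fit_piecewise(P, max_depth=3, res_thresh=0.03, min_pts=20):
    """Recursively split a pole-like cluster along its axis until each segment
    fits a cylinder; returns a list of (u, q, r) segments."""
    u, q, r = fit_cylinder(P)
    # Mean absolute point-to-surface distance as the fitting residual.
    res = np.abs(np.linalg.norm(np.cross(u, P - q), axis=1) - r).mean()
    if res < res_thresh or max_depth == 0 or len(P) < min_pts:
        return [(u, q, r)]
    s = (P - q) @ u                     # projections of the points onto the axis
    lower, upper = P[s < np.median(s)], P[s >= np.median(s)]
    if min(len(lower), len(upper)) < min_pts:
        return [(u, q, r)]
    return (fit_piecewise(lower, max_depth - 1, res_thresh, min_pts)
            + fit_piecewise(upper, max_depth - 1, res_thresh, min_pts))
```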
#### III-B3 Cylinder Map Update
Given the limited number of pole-like objects in a scene, the efficiency of the algorithm is not significantly impacted even if a linear search is employed for matching. Therefore, we use a linear array to store all trees. For a new frame of pole-like point
Fig. 3: Semantic segmentation results. The purple, green, and brown points represent ground, tree leaves, and pole-like point clouds, respectively.
clouds, a coarse matching method is employed to match it with the pole-like objects in the map. If the match is successful, the pole-like object in the map is updated; otherwise, the point does not belong to any tree in the map. We employ the density-based spatial clustering of applications with noise (DBSCAN) [24] algorithm to cluster these new points. The map-update algorithm is shown in Algorithm 2. It was employed to fit cylinders and update the cylinder map using the campus dataset, yielding the results shown in Fig. 4. As shown in Fig. 4, the proposed method can fit cylinders to straight and curved pole-like objects. For curved pole-like objects, the proposed method can adaptively fit them in segments, rather than using a single-cylinder model.
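For illustration, the clustering of unmatched points into candidate new trees can be done with an off-the-shelf DBSCAN; the `eps` and `min_samples` values below are placeholders rather than the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_new_trees(unmatched_points, eps=0.5, min_samples=10):
    """Cluster unmatched pole-like points in the horizontal plane; each cluster
    becomes a candidate new tree (label -1 marks noise)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(unmatched_points[:, :2])
    return [unmatched_points[labels == k] for k in set(labels) if k != -1]
```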
#### III-B4 Point-to-Cylinder Data Association
The matching process employs a coarse-to-fine strategy, as shown in Algorithm 3. The nearest tree to the current point is identified via a linear nearest-neighbor search. If the distance between the current point and the nearest tree is less than the predetermined threshold, the point is considered part of that tree; otherwise, the point is deemed independent of any tree. After confirming the tree to which the point belongs, the specific cylinder segment on that tree is determined using a binary search; a sketch of this procedure is given below.
```
Input: Point cloud P, cylinder map M.
Output: Updated cylinder map M.
procedure UPDATE_MAP(P, M)
    if not initialized then
        add P to buffer
        return if buffer is not full
        initialized ← true
    end
    add P to buffer
    for each p in P do
        if p does not belong to any tree then
            if enough nearest points around p then
                create new tree from cluster and add to M
            end
        else
            mark p as a point to update
        end
    end
    UPDATE_TREES_IN_MAP(M)
    DELETE_OLD_POINTS_IN_BUFFER()
end
```
**Algorithm 2** Cylinder Map Update
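The coarse-to-fine association of Algorithm 3 can be sketched as follows, under an assumed tree record layout (a 2-D root position, sorted segment boundaries, and the per-segment cylinders). The field names and the distance threshold are hypothetical.

```python
import bisect
import numpy as np

def associate(p, trees, dist_thresh=1.0):
    """Coarse: linear search for the nearest tree; fine: binary search for the
    cylinder segment covering the point's height. Returns (u, q, r) or None."""
    best = min(trees, key=lambda t: np.linalg.norm(p[:2] - t["root_xy"]))
    if np.linalg.norm(p[:2] - best["root_xy"]) > dist_thresh:
        return None                       # the point does not belong to any tree
    k = bisect.bisect_right(best["z_bounds"], p[2]) - 1
    k = min(max(k, 0), len(best["cylinders"]) - 1)
    return best["cylinders"][k]
```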
### _State Estimation_
We employ the IESKF framework to fuse point-to-plane and point-to-cylinder observations. The state vector is defined as
\\[\\mathbf{x}=[(\\mathbf{R}_{b}^{w})^{T}\\quad(\\mathbf{p}^{w})^{T}\\quad(\\mathbf{v}^{w})^{T} \\quad\\mathbf{b}_{a}^{T}\\quad\\mathbf{b}_{g}^{T}]^{T}\\in SO(3)\\times\\mathbb{R}^{12}, \\tag{2}\\]
where \\(\\mathbf{R}_{b}^{w}\\) represents the rotation matrix of the IMU frame (\\(b\\)-frame) relative to the world frame (\\(w\\)-frame), \\(\\mathbf{p}^{w}\\) and \\(\\mathbf{v}^{w}\\) represents the translation and velocity vector of the \\(b\\)-frame relative to the \\(w\\)-frame, respectively. \\(\\mathbf{b}_{a}\\) and \\(\\mathbf{b}_{g}\\) represents the accelerometer and gyroscope biases, respectively. The estimated state and error state are defined as
\\[\\hat{\\mathbf{x}}=[(\\mathbf{\\hat{R}}_{b}^{w})^{T}\\quad(\\mathbf{\\hat{p}}^{w})^{T}\\quad (\\mathbf{\\hat{v}}^{w})^{T}\\quad\\mathbf{\\hat{b}}_{a}^{T}\\quad\\mathbf{\\hat{b}}_{g}^ {T}]^{T}\\in SO(3)\\times\\mathbb{R}^{12}\\]
\\[\\delta\\mathbf{x}=\\mathbf{x}\\boxplus\\hat{\\mathbf{x}}=[\\delta\\mathbf{p}^{T}\\quad\\delta\\mathbf{p}^{T} \\quad\\delta\\mathbf{v}^{T}\\quad\\delta\\mathbf{b}_{a}^{T}\\quad\\delta\\mathbf{b}_{g}^{T}]^{T} \\in\\mathbb{R}^{15}. \\tag{3}\\]
#### III-C1 State Transition Model

Before the observation update, the error covariance matrix is propagated using the motion model. The employed motion model is similar to that of FAST-LIO [21], except that we use a first-order Gauss-Markov model [25] to model the IMU biases, whereas FAST-LIO uses a random walk model. The discrete IMU bias model is defined as
\\[\\delta\\mathbf{b}_{a,k+1} =\\left(1-\\frac{1}{T_{a}}\\Delta t\\right)\\delta\\mathbf{b}_{a,k}+\\mathbf{n} _{a} \\tag{4}\\] \\[\\delta\\mathbf{b}_{g,k+1} =\\left(1-\\frac{1}{T_{g}}\\Delta t\\right)\\delta\\mathbf{b}_{g,k}+\\mathbf{n} _{g},\\]
where \\(T_{a}\\) and \\(T_{g}\\) represent the correlation time of the accelerometer bias and gyroscope bias, respectively, and \\(\\mathbf{n}_{a}\\) and \\(\\mathbf{n}_{g}\\) represent the Gaussian white noise of the accelerometer bias and gyroscope bias, respectively.
The other error-state transition equations of \(\delta\mathbf{x}=\mathbf{x}\boxminus\widehat{\mathbf{x}}\) are as follows
\\[\\delta\\mathbf{\\theta}_{k+1} =\\mathrm{Exp}(-(\\mathbf{\\omega}^{b}-\\mathbf{b}_{g})\\Delta t)\\delta\\mathbf{ \\theta}_{k}-\\delta\\mathbf{b}_{g}\\Delta t-\\mathbf{n}_{\\theta} \\tag{5}\\] \\[\\delta\\mathbf{p}_{k+1} =\\delta\\mathbf{p}_{k}+\\delta\\mathbf{v}\\Delta t\\] \\[\\delta\\mathbf{v}_{k+1} =\\delta\\mathbf{v}_{k}-\\widehat{\\mathbf{\\mathrm{R}}}_{b}^{w}\\Delta t \\delta\\mathbf{b}_{a}-\\widehat{\\mathbf{\\mathrm{R}}}_{b}^{w}((\\mathbf{f}^{b}-\\mathbf{b}_{a}) \\times)\\Delta t\\delta\\mathbf{\\theta}_{k}\\] \\[-\\mathbf{n}_{v}\\]
where \\(\\mathrm{Exp}\\) represents the exponential mapping from Lie algebra to Lie group, \\(\\mathbf{n}_{\\theta}\\) and \\(\\mathbf{n}_{v}\\) represent the Gaussian white noise of the attitude and velocity, respectively. (\\(\\times\\)) represents the conversion of a vector to a skew-symmetric matrix.
#### III-C2 Point-to-Cylinder Measurement Model
The point observed by the LiDAR is represented as \(\mathbf{p}_{i}^{l}\), where \(i\) represents the \(i\)-th point, and \(l\) represents the LiDAR frame (\(l\)-frame). Assuming the LiDAR-IMU extrinsic parameters have been calibrated, the \(i\)-th point \(\mathbf{p}_{i}^{b}\) in the \(b\)-frame can be obtained by transforming the point \(\mathbf{p}_{i}^{l}\), _i.e._, \(\mathbf{p}_{i}^{b}=\mathbf{\mathrm{R}}_{l}^{b}\mathbf{p}_{i}^{l}+\mathbf{p}_{l}^{b}\). Here, \(\mathbf{p}_{l}^{b}\) represents the translation vector of the \(l\)-frame relative to the \(b\)-frame, and \(\mathbf{\mathrm{R}}_{l}^{b}\) represents the rotation matrix of the \(l\)-frame relative to the \(b\)-frame.
Given the point cloud set \\(P_{c}=\\{\\mathbf{p}_{i}^{b}\\}\\) of the pole-like object obtained from semantic segmentation, where \\(i\\) represents the \\(i\\)-th point, the cylinder \\(\\mathbf{c}=(\\mathbf{u}^{T},\\mathbf{q}^{T},r)^{T}\\) to which the point \\(\\mathbf{p}_{i}^{b}\\) belongs can be determined using the data association method described in Section III-B. The distance from the point to the cylinder surface is defined as
\\[d_{c,\\mathbf{p}_{i}^{b}}=||(\\mathbf{u}\\times)(\\mathbf{\\mathrm{R}}_{b}^{w}\\mathbf{p}_{i}^{b}+ \\mathbf{p}^{w}-\\mathbf{q})||_{2}. \\tag{6}\\]
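Eq. (6) translates directly into code. The sketch below assumes \(\mathbf{u}\) is (or is normalized to) a unit axis direction and \(\mathbf{q}\) is a point on the cylinder axis.

```python
import numpy as np

def point_to_cylinder_residual(p_b, R_wb, p_w, u, q):
    """d of Eq. (6): || (u x)(R_wb p_b + p_w - q) ||_2."""
    u = u / np.linalg.norm(u)
    x_w = R_wb @ p_b + p_w          # LiDAR point expressed in the w-frame
    return np.linalg.norm(np.cross(u, x_w - q))
```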
The residual of \\(\\mathbf{p}_{i}^{b}\\) to the cylinder can be written as
\\[h_{c}(\\mathbf{x}_{k},\\mathbf{p}_{i}^{b}) =d_{c,\\mathbf{p}_{i}^{b}} \\tag{7}\\] \\[=h_{c}(\\widehat{\\mathbf{x}}_{k}\\boxplus\\delta\\mathbf{x}_{k},\\mathbf{p}_{i}^{b} )+\\mathbf{n}_{i}\\] \\[\\approx h_{c}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})+\\mathbf{\\mathrm{H}} _{c,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})\\delta\\mathbf{x}_{k}+\\mathbf{n}_{i}\\]
where \\(\\mathbf{n}_{i}\\) represents the Gaussian white noise, and \\(\\mathbf{\\mathrm{H}}_{c,i}\\) represents the Jacobian matrix of the point-to-cylinder observation equation, which is defined as
\\[\\mathbf{\\mathrm{H}}_{c,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b}) =\\frac{\\partial h_{c}(\\widehat{\\mathbf{x}}_{k}\\boxplus\\delta\\mathbf{x}_{k },\\mathbf{p}_{i}^{b})}{\\partial\\delta\\mathbf{x}_{k}}|_{\\delta\\mathbf{x}_{k}=0}, \\tag{8}\\] \\[=[\\mathbf{\\mathrm{H}}_{\\delta\\mathbf{\\theta}}\\quad\\mathbf{\\mathrm{H}}_{\\delta \\mathbf{p}}\\quad\\mathbf{0}_{1\\times 9}]\\]
where \\(\\mathbf{\\mathrm{H}}_{\\delta\\mathbf{\\theta}}\\) and \\(\\mathbf{\\mathrm{H}}_{\\delta\\mathbf{p}}\\) represent the Jacobian matrices of the residual w.r.t the attitude error vector and position error vector, respectively, as follows
\\[\\mathbf{\\mathrm{H}}_{\\delta\\mathbf{\\theta}} =-\\frac{((\\mathbf{u}\\times)(\\mathbf{\\mathrm{R}}_{b}^{w}\\mathbf{p}_{i}^{b}+ \\mathbf{p}^{w}-\\mathbf{q}))^{T}}{||(\\mathbf{u}\\times)(\\mathbf{\\mathrm{R}}_{b}^{w}\\mathbf{p}_{i}^{b}+ \\mathbf{p}^{w}-\\mathbf{q})||_{2}}(\\mathbf{u}\\times)\\mathbf{\\mathrm{R}}_{b}^{w}(\\mathbf{p}_{i}^{b}\\times) \\tag{9}\\] \\[\\mathbf{\\mathrm{H}}_{\\delta\\mathbf{p}} =\\frac{((\\mathbf{u}\\times)(\\mathbf{\\mathrm{R}}_{b}^{w}\\mathbf{p}_{i}^{b}+ \\mathbf{p}^{w}-\\mathbf{q}))^{T}}{||(\\mathbf{u}\\times)(\\mathbf{\\mathrm{R}}_{b}^{w}\\mathbf{p}_{i}^{b}+ \\mathbf{p}^{w}-\\mathbf{q})||_{2}}(\\mathbf{u}\\times)\\]
#### III-C3 Point-to-Plane Measurement Model
The residual of the point \\(\\mathbf{p}_{i}^{b}\\) to the plane can be written as
\\[h_{\\pi}(\\mathbf{x}_{k},\\mathbf{p}_{i}^{b}) =h_{\\pi}(\\widehat{\\mathbf{x}}_{k}\\boxplus\\delta\\mathbf{x}_{k},\\mathbf{p}_{i}^ {b})+\\mathbf{n}_{i} \\tag{10}\\] \\[\\approx h_{\\pi}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})+\\mathbf{\\mathrm{H}} _{\\pi,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})\\delta\\mathbf{x}_{k}+\\mathbf{n}_{i},\\]
where \\(\\mathbf{n}_{i}\\) represents the Gaussian white noise, and \\(\\mathbf{\\mathrm{H}}_{\\pi,i}\\) represents the Jacobian matrix of the point-to-plane observation equation, which is defined as
\\[\\mathbf{\\mathrm{H}}_{\\pi,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b}) =\\frac{\\partial h_{\\pi}(\\widehat{\\mathbf{x}}_{k}\\boxplus\\delta\\mathbf{x}_ {k},\\mathbf{p}_{i}^{b})}{\\partial\\delta\\mathbf{x}_{k}}|_{\\delta\\mathbf{x}_{k}=0}\\quad, \\tag{11}\\] \\[=[-\\mathbf{u}^{T}\\mathbf{\\mathrm{R}}_{b}^{w}(\\mathbf{p}_{i}^{b}\\times)\\quad\\bm {u}^{T}\\quad\\mathbf{0}_{1\\times 9}]\\]
where \\(\\mathbf{u}\\) represents the normal vector of the plane.
Finally, the objective function of the optimization problem can be obtained by combining the prior information and the point-to-plane and point-to-cylinder residuals:
\\[\\min_{\\delta\\mathbf{x}_{k}} (||\\mathbf{x}_{k}\\boxplus\\widehat{\\mathbf{x}}_{k}||_{\\mathbf{\\mathrm{P}}_{k}}^ {2} \\tag{12}\\] \\[+\\sum_{i=1}^{N_{\\pi}}||h_{\\pi}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b} )+\\mathbf{\\mathrm{H}}_{\\pi,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})\\delta\\mathbf{x}_{k}||_{ \\mathbf{\\mathrm{\\Sigma}}_{\\pi}}^{2},\\] \\[+\\sum_{i=1}^{N_{\\pi}}||h_{c}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})+ \\mathbf{\\mathrm{H}}_{c,i}(\\widehat{\\mathbf{x}}_{k},\\mathbf{p}_{i}^{b})\\delta\\mathbf{x}_{k}||_{ \\mathbf{\\mathrm{\\Sigma}}_{c}}^{2})\\]
where \\(N_{\\pi}\\) and \\(N_{c}\\) represent the number of point-to-plane and point-to-cylinder observations, respectively, and \\(\\widehat{\\mathbf{\\mathrm{P}}}_{k}\\), \\(\\mathbf{\\mathrm{\\Sigma}}_{\\pi}\\) and \\(\\mathbf{\\mathrm{\\Sigma}}_{c}\\) represent the observation noise covariance matrices of the prior information, the point-to-plane, and point-to-cylinder observations, respectively. We use the residual covariance matrix of the cylinder fitting as \\(\\mathbf{\\mathrm{\\Sigma}}_{c}\\).
## IV Experiments and Results
### _Implementation and Evaluation Setup_
The proposed SE-LIO is implemented in C++ based on the robot operating system (ROS). Field tests are conducted using a low-speed wheeled robot with an average speed of around 1.5 m/s. The system uses a solid-state LiDAR with a frame rate of 10 Hz (Livox Mid-70), an industrial-grade MEMS IMU (ADI ADIS16465, with a gyroscope bias instability of 2°/hr and a frame rate of 200 Hz), and a dual-antenna GNSS receiver with a frame rate of 1 Hz (NovAtel OEM-718D). The GNSS real-time kinematic (RTK) technique is adopted to achieve high-accuracy positioning. All sensors are
Fig. 5: Test environments for quantitative evaluation. _S1_, _S2_, and _S3_ represent the degraded scenes in Experiment 2, and _S4_ represents the degraded scene in Experiment 4.
synchronized through hardware triggers to the GNSS time. The ground-truth system is a high-accuracy GNSS/INS integrated navigation system using the GNSS-RTK and a navigation-grade IMU. The ground truth (with an accuracy of 0.02 m for position and 0.01° for attitude) is generated by post-processing software.
The deep learning model utilized for segmentation requires retraining for different types of LiDAR. Currently, there is no suitable public dataset containing solid-state LiDAR and tree-rich test scenes; therefore, the system was not tested on public datasets. To quantitatively evaluate the accuracy of the proposed system, we conducted a series of field tests in different environments. The test environments include campus and park areas, as shown in Fig. 5. The experiments, numbered 1 through 4, were conducted at Wuhan University, where the environment featured numerous trees and buildings. The last experiment, numbered 5, was conducted in Donghu, Wuhan, a large lake with artificial roads and trees on both sides. All the test environments contain many dynamic objects. _S1_-_S3_ in Fig. 5 represent the degraded scenes of Experiment 2, and _S4_ represents the degraded scene of Experiment 4.
To underscore the merits of the proposed method, we conducted a comparative analysis with an LIO system that utilizes only plane features. The proposed method is referred to as SE-LIO, and the baseline method is referred to as Ori-LIO. The performance of Ori-LIO is comparable to that of FAST-LIO, but it has been further optimized, as mentioned in Section III-C; therefore, we use Ori-LIO as the baseline algorithm. To verify the effectiveness of the cylindrical features, we also conducted ablation experiments, _i.e._, we tested a variant that only removes the unstructured point clouds and compared it with the proposed SE-LIO. This variant is referred to as SE-LIO-RU, where RU stands for removing unstructured point clouds.
The positioning performance was evaluated based on absolute and relative pose errors. It is important to note that the results are deterministic in each run. The system was run in real-time on a desktop PC (Intel Core i7-11700 CPU @ 2.50 GHz, 32 GB RAM, and an NVIDIA GTX 1650 GPU) under the ROS framework. It should be noted that different data used the same parameters for the same tested method.
### _The Impact of Point-cloud Integration on Semantic Segmentation Accuracy_
We used multi-frame merging to achieve point-cloud integration. The LiDAR point cloud was motion-compensated using the INS pose during multi-frame merging. The duration of a single frame of point cloud was 0.1 s, and the number of merged frames ranged from 1 to 6, corresponding to a merging time of 0.1 to 0.6 s. The test results are shown in TABLE I. Here, f represents frame(s), _i.e._, the number of frames.
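A simplified sketch of the merging step is shown below; it compensates each frame with a single INS pose, whereas the actual implementation compensates motion within each frame point-wise.

```python
import numpy as np

def merge_frames(frames, poses):
    """Merge motion-compensated LiDAR frames into one world-frame cloud.
    frames: list of (N_i, 3) arrays of body-frame points
    poses:  list of (R_wb, p_w) INS poses, one per frame."""
    return np.vstack([pts @ R.T + t for pts, (R, t) in zip(frames, poses)])
```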
As shown in TABLE I, as the number of merged frames increases, the semantic segmentation accuracy also increases. The results indicate that improving the point-cloud coverage is helpful for semantic segmentation. However, the semantic segmentation accuracy is almost unchanged once the number of merged frames exceeds 3. Increasing the number of merged frames reduces the frequency of semantic segmentation, but it also increases the error of INS motion compensation. Therefore, to balance the frequency of semantic segmentation and the accuracy of INS motion compensation, we merged 5 frames of point clouds for semantic segmentation in the subsequent tests.
### _Evaluation of the Positioning Accuracy_
The proposed method employs an adaptive piecewise fitting cylinder model, necessitating the configuration of the max fitting depth. This section sets the max fitting depth to 3, with a comprehensive explanation to follow in Section IV-D. The absolute rotation error (ARE) and absolute translation error (ATE) are shown in TABLE II, with the superior result among the three emphasized in bold, and E1-E5 represent Experiment 1-Experiment 5, respectively.
As depicted in TABLE II, SE-LIO-RU exhibits a substantial enhancement over Ori-LIO in most tests, and SE-LIO exhibits superior performance to SE-LIO-RU in both attitude and position accuracy. Specifically, in Experiment 5, the test scene is situated by a lake. During the majority of Experiment 5, the point cloud in the horizontal direction is only unstructured point clouds, such as trees and dynamic objects, and lacks structured point clouds, such as buildings. Therefore, the absolute positioning accuracy is greatly improved after removing unstructured point clouds. Furthermore, the proposed method utilizes pole-like objects like tree trunks for positioning, thereby strengthening the horizontal constraint and enhancing the absolute positioning accuracy compared to SE-LIO-RU.
Fig. 6 shows the trajectories estimated by the three methods in Experiment 5. SE-LIO exhibits the least drift, aligning more closely with the ground truth. In contrast, both SE-LIO-RU and Ori-LIO demonstrate larger drift. The relative translation error (RTE) and relative rotation error (RRE) were also evaluated to provide insight into short-term accuracy. The results are presented in TABLE III, with the superior result among the three emphasized in bold. As exhibited in TABLE III, the proposed SE-LIO outperforms the other two methods in most tests. The short-term accuracy of the proposed SE-LIO is improved, reflecting the system's superior robustness.
Notably, in Scene _S2_, the positioning accuracy of SE-LIO is comparable to that of Ori-LIO and SE-LIO-RU, indicating that the proposed method does not compromise accuracy in scenes devoid of pole-like object features.
In Scene _S4_, the positioning accuracy of SE-LIO-RU is notably the least optimal. It can be attributed to the fact that in Scene _S4_, the LiDAR scans fewer plane features, such as buildings. Ori-LIO makes use of unstructured point clouds while SE-LIO-RU removes them. Despite the unreliability of the plane features extracted from unstructured point clouds, they provide horizontal constraints, resulting in better positioning accuracy than SE-LIO-RU. However, SE-LIO, which is based on SE-LIO-RU, utilizes cylindrical features that provide more reliable horizontal constraints. Consequently, SE-LIO demonstrates superior positioning accuracy compared to both Ori-LIO and SE-LIO-RU.
### _Runtime Analysis_
We further tested the runtime of the proposed method. The test environment comprised an Intel Core i7-11700 CPU @ 2.50 GHz, 32 GB RAM, and an NVIDIA GTX 1650 GPU. In Section IV-B, we merged 5 frames of point clouds (0.5 s) for semantic segmentation and LIO positioning. For a 0.5 s point cloud, the average runtime for semantic segmentation was 392.8 ms, while the LIO fusion took 33.5 ms. Semantic segmentation accounted for approximately 90% of the total runtime, indicating potential for optimization in future work.
## V Conclusion
In this study, we propose a semantic-enhanced solid-state LIO method for environments abundant with trees, such as campuses and parks. The method integrates and compensates multiple LiDAR frames using the INS pose to address the problem of low semantic segmentation accuracy due to sparse point clouds. Semantic information is then employed to enhance the positioning performance, including removing unstructured point clouds and constructing point-to-cylinder constraints using the cylindrical features of pole-like objects. An adaptive piecewise fitting method is proposed in which a pole-like object is segmented into multiple cylinders, yielding more accurate cylindrical features; hence, the positioning accuracy can be improved in scenes with curved tree trunks. Experimental results demonstrate that SE-LIO outperforms the baseline method in terms of positioning accuracy and robustness.
## References
* [1] P. Biber and W. Strasser, "The normal distributions transform: A new approach to laser scan matching," in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, Nevada, USA, 2003, pp. 2743-2748.
* [2] M. Magnusson, A. Lilienthal, and T. Duckett, "Scan registration for autonomous mining vehicles using 3D-NDT," J. Field Robot., vol. 24, no. 10, pp. 803-827, 2007.
Impacts of Socially Responsible Corporate Activities on Korean Consumers' Corporate Evaluations in the Agrifood Industry
Dongmin Lee 1, Junghoon Moon 1, Young Chan Choe 1 and Jaeseok Jeong 2

1 Program in Regional Information, Department of Agricultural Economics and Rural Development, Seoul National University, Seoul 08826, Korea; [email protected] (D.L.); [email protected] (Y.C.C.)

2 Graduate School of Pan-Pacific International Studies, Kyung Hee University Global Campus, Yongin 17104, Korea; [email protected]

* Correspondence: [email protected]; Tel.: +82-2-880-4722
## 1 Introduction
Corporate social responsibility (CSR) has been discussed in the context of both practical and the academic fields since at least the 1950s [1]. Given that traditional corporate philanthropy is founded on altruism [2], the fundamental notion of philanthropy concerns the contributions a firm makes without any expectation of receiving benefits in return [3]. As corporate involvement in social problems has evolved from a voluntary activity to a mandatory one [4], corporate executives have faced increasing demand for higher levels of CSR. "The more companies donate, the more is expected of them" [5] (p. 5). This dilemma has led to an increasing number of companies adopting strategic approaches to social activities [5; 6]. Both academia and practical fields address several concepts that link business value and societal problems, such as Prahalad and Hammond's Bottom-of-the-Pyramid theory [7] and Porter and Kramer's concept of creating shared value (CSV) [8].
The stream of research on strategic approaches in the CSR field is particularly relevant in the agrifood industry. Several multinational companies (e.g., Nestle (Vevey, Switzerland), Unilever (London, UK)) have applied the CSV concept to their business models. For instance, Nestle internalizes CSV activities in its daily business processes and evaluates its CSV performance annually. Why does the agrifood sector face such strong pressure to take up activities related to corporate social responsibility? According to Hartmann (2010), this pressure is due to the agrifood sector's unique characteristics. First, the food sector depends heavily on natural, human, and physical resources, meaning that it has a significant impact on the environment (Hartmann, 2010). Second, consumers are sensitive about the food that they eat. With respect to societal concerns about food (e.g., danger of power abuse in procurement processes, interference in animal welfare, etc.), many consumers have strict requirements for the complete value chain of the agrifood industry (Hartmann, 2010). Nevertheless, though many food companies engage in socially responsible activities, little research has addressed the effects of these activities on the food sector (Hartmann, 2010).
The consumer perspective is critical when initiating socially responsible corporate activities. Many organizations initiate socially responsible activities under the assumption that consumers will reward them for their support of social programs (Hartmann, 2010). Consumers are particularly responsive to corporate socially responsible activities (Hartmann, 2010) and perceive these activities as essential components of consumer-company communication (Hartmann, 2010). When evaluating consumers' responses to socially responsible corporate activities, prior knowledge should be considered. An individual's initial attitude serves as a frame of reference for evaluating new information (Hartmann, 2010). That is, consumers evaluate corporate activities in the context of their existing knowledge (Hartmann, 2010). This study attempts to verify the impact of socially responsible corporate activities within the agrifood sector on consumers based on different prior corporate stereotypes.
In addition to understanding consumers' prior corporate stereotypes, it is important to understand which types of activities are more or less likely to be effective or efficient, particularly as larger budgets are increasingly invested into socially responsible corporate activities (Hartmann, 2010). This study compares two different types of socially responsible corporate activities: (1) CSV, a recent paradigm in which the interests of private companies and societal problems are fully connected (Hartmann, 2010); and (2) philanthropic giving, a reactive strategy to counter stakeholder demands, threats of government intrusion, and escalating public expectations (Hartmann, 2010; Hartmann, 2010). CSV, as a newly suggested paradigm, still receives significant criticism (Hartmann, 2010). However, since this study aims to address the effect of corporate activities on the consumer response, we assert that both CSV and philanthropic giving are corporate activities that produce social value from the perspective of consumers.
The remainder of this article is organized as follows: First, a theoretical background of socially responsible corporate activities and stereotype content models is described, and hypotheses are formulated. Next, the methodology is described, and results and a discussion are presented. Finally, we draw conclusions and suggest theoretical and practical implications.
## 2 Theoretical Background
### Socially Responsible Corporate Activities
CSR is the voluntary assumption by companies of responsibilities beyond purely economic and legal responsibilities (Hartmann, 2010). According to Carroll (2010), CSR covers four dimensions: economic, legal, ethical, and philanthropic. Firms use many types of CSR to cover several dimensions of societal problems. Prior studies (Hartmann, 2010; Hartmann, 2010; Hartmann, 2010) have addressed philanthropy, sponsorship, cause-related marketing (CRM), and corporate giving in particular.
Corporate philanthropy was initially founded on altruism (Hartmann, 2010). It has been defined as making a contribution of cash or kind to a worthy cause without any expectation of receiving a benefit in return (Hartmann, 2010; Hartmann, 2010). However, \"much of what is labeled as corporate philanthropy does seek to generate and exploit an association between the giving company and recipient organization\" (Hartmann, 2010) (p. 1364). A distinction should be made between pure forms of corporate philanthropy and \"pseudo-altruism\" (Hartmann, 2010), which aims to benefit from \"giving\" activities, such as CRM or sponsorship. True philanthropy seeks only to create social value and does not impact consumer behaviors or sales, given the lack of expectation of firm-related benefits (Hartmann, 2010). That is to say, in the context of true philanthropy, no business value is expected. Compared to \"true philanthropy\", sponsorship, corporate giving, and CRM are activities that may involve commercial motivations and strategic factors [2]. Sponsorships and CRM are considered to be conditional and contaminated prosocial activities in terms of pure corporate philanthropy [26].
Sponsorship is a form of supporting society, but it is recognized as a commercial activity due to the right of the sponsoring firm to promote an association with the recipient in return for support [2]. That is, sponsorship is more of \"an investment, in cash or in-kind, in an activity, in return for access to the exploitable commercial potential associated with that property\" [27] (p. 36). Sponsorship typically involves the following three agents: a sponsor, property, and consumers [28]. The property receives a payment, and the sponsor obtains the right to associate with the property [29]. Sponsorship creates social value, but it also indirectly impacts business value [2].
Like sponsorship, CRM is commercially motivated and, thus, creates both social and business value. In CRM, a corporation promises to donate to a social cause in order to influence consumers' purchasing behaviors [30; 31]. Thus, CRM directly creates business value by increasing sales [2]. A firm may create business value through CRM because CRM involves consumers purchasing the sponsoring company's product [32]. Moreover, previous studies concerning the effects of CRM on consumer perceptions of the sponsoring brand or firm show that CRM is expected to indirectly create business value, by improving corporate reputation or image [33; 34].
Corporate giving refers to making monetary gifts or giving goods and/or services through an established corporate foundation [18]. Corporate giving is usually considered to be a reactive strategy to counter stakeholder demands, threats of government intrusion, and escalating public expectations [18; 19]. Thus, like sponsorship, corporate giving seeks mainly to improve corporate image [18] in order to indirectly create business value. Campbell et al. [18] defined corporate giving as all giving activities, including CRM. However, to prevent confusion, the present study limits the definition of corporate giving to only those giving activities of firms that do not require consumer involvement (unlike CRM), and uses the new term, "philanthropic giving".
In academia and the practical field, in addition to the CSR activities addressed above, several further concepts, such as CSV and the Bottom-of-the-Pyramid (BoP) theory, \"link\" business value with societal problems. CSV can be defined as the \"policies and operating practices that enhance the competitiveness of a company while simultaneously advancing the economic and social conditions in the communities in which it operates\" [8] (p. 66). Business value creation (cf., profit) has long been a major concern in the business field, while societal issues have been treated as peripheral matters [8]. Assuming that corporations have an advantage over individuals or governments in solving social problems [6], shared value creation identifies and expands the link between business value creation and the pursuit of social causes. In a similar vein, the BoP theory [7] concerns business models that target markets at the bottom of the economic pyramid. Prahalad and Hammond [7] argue that these BoP models are new sources of growth for multinational companies. It is known that the creation of shared value is more influential when companies expand their business to developing countries and target low-income markets [17].
This study compares the effects of CSV with philanthropic giving on consumer response. We study CSV, in particular, because it covers larger targets and markets, including the BoP market. CSV involves creating business value by adding value to society by solving its needs and challenges. That is, CSV recognizes societal needs, not just conventional economic needs, and defines relevant markets [8]. As such, it is similar to other types of socially responsible activities, especially CRM. Porter and Kramer [8] distinguish CSV from prior CSR activities by explaining that CSV \"expands the total pool of economic and social value\" [8] (p. 65), rather than redistributing the value already created by firms.
However, in academia and practical areas, Porter and Kramer's [8] CSV concept continues to spark controversial debate [20]. Crane et al. [20] criticize CSV for being (1) an unoriginal concept; (2) too optimistic with respect to expectations of reaching social and economic goals concurrently with business compliance; (3) a naive approach in terms of business compliance; and (4) based on a shallow assumption of the role of corporations in society.
Nevertheless, the present study compares CSV activity with philanthropic giving, as CSV inherits the contexts of other CSR types in terms of creating \"social value\". To prevent confusion concerning the terms CSR, CSV, and philanthropic giving, the term of \"socially responsible corporate activity\" is used to encompass any form of company involvement in solving societal problems, including CSV and philanthropic giving.
The reasons for addressing philanthropic giving, in particular, are as follows. First, examples of true philanthropy are rarely found in reality. Much of what is labeled as corporate philanthropy seeks to generate and exploit associations with causes [2]. Second, because sponsorship and CRM are considered to be more conditional activities [15; 26], these activities might arouse public suspicion about companies' hidden motives [15]. Corporate philanthropic giving, however, is considered to be the most effective prosocial activity to minimize public suspicion [26].
### Stereotype Content Model
Consumers evaluate corporate social activities in the context of their existing knowledge [16]. Moreover, given that companies are increasingly investing larger budgets into socially responsible activities, it is important to know which types of companies are more or less likely to be effective and efficient [16]. The present study examines the effects of consumers' prior corporate stereotypes on the evaluation of socially responsible activities.
According to the stereotype content model, a stereotype is measured by the warmth and competence of a target [35; 36]. The literature on social psychology and organizational behaviors has addressed the tendency to differentiate among others on the basis of perceived warmth and competence [37]. When evaluating a target, perceivers judge (1) whether the target intends to help or harm (i.e., warmth) and (2) whether the target can carry out its intent (i.e., competence) [38]. These two fundamental dimensions combine and generate distinct emotions, including admiration, contempt, envy, and pity [35; 36]. Although definitions vary, warmth relates to perceptions of generosity, kindness, sincerity, friendliness, and tolerance, while competence relates to perceptions of confidence, effectiveness, efficiency, independence, capability, intelligence, and competitiveness [35; 36; 37; 39].
A stereotype is defined as \"a shorthand, blanket judgment containing evaluative components\" [37] (p. 225). For instance, rich people are often perceived as being highly competent but not very warm, and the elderly are often perceived as being not very competent but very warm [35]. Warmth and competence have consistently emerged as two central dimensions of perceptions regarding specific individuals and groups [35; 36; 40; 41; 42], and they are often used when evaluating others as potential leaders [43], romantic partners [44], or employees [45].
Recent studies have used the stereotype content model to examine social perception as it applies to a variety of social targets. Some studies have expanded the use of the two aforementioned dimensions to measure stereotypes about corporate brands [37; 46; 47]. Kervyn et al. [47] used two dimensions--namely, warmth and competence--to examine how consumers' perceptions of brands are similar to their perceptions of people. In particular, Aaker et al. [37] investigated whether the warmth and competence consumers feel toward corporate brands influence their marketplace decisions, such as their willingness to buy. Consumers perceive nonprofits as being warmer but less competent than for-profit organizations, and, given this perception of low competence, have lower intentions to buy products made by nonprofits [37].
The present study examines the effects of stereotypes on evaluations of socially responsible corporate activities. We operationalize corporate warmth and competence based on prior studies [37; 46; 47], using a warmth index comprising \"warm\", \"kind\", and \"generous\" and a competence index comprising \"competent\", \"efficient\", and \"effective\".
## 3 Hypothesis Development
### Main Effect of Socially Responsible Corporate Activities (Philanthropic Giving vs. CSV)
The point that differentiates CSV from philanthropic giving is the pursuit of benefits for the firm, with an emphasis on extrinsic or self-interested motives. An extrinsic or self-interested motive is related to increasing the welfare of a company's brand by increasing sales or improving corporate image [48]. By contrast, philanthropic giving places greater emphasis on more intrinsic or selfless motives, given its ultimate goal of doing good and/or taking responsibility for societal problems [48]. According to Becker-Olsen et al. [49], when motivations are considered to be firm-oriented, consumers' favorable attitudes toward firms are likely to diminish. However, when motivations are considered to be society-oriented, attitudes toward firms are enhanced. Moreover, messages that completely lack self-interest are perceived as more trustworthy and persuasive [50].
Previous studies have shown that firms' perceived motives underlying their social activities influence their evaluations by consumers [51,52]. Thus, the present study builds on the following hypotheses (H1a-b), which aim to explore and compare the effects of CSV and philanthropic giving on consumers' evaluations of firms.
**H1a-b:**_Philanthropic giving (vs. CSV) influences consumers to evaluate a company more positively (H1a) and to perceive higher firm value (H1b)._
### Moderating Effect of Corporate Stereotype
The present study uses two dimensions from the stereotype content model: warmth and competence, which are arguably fundamental dimensions of people's judgments. In particular, it uses the following four company stereotypes: (1) high warmth and high competence (HWHC); (2) low warmth and low competence (LWLC); (3) low warmth and high competence (LWHC); and (4) high warmth and low competence (HWLC).
#### 3.2.1 Hwhc, Lwlc
The emotion of admiration, which is linked to high competence and high warmth, is "directed toward those with positive outcomes when that does not detract from the self" [35] (p. 869). This can be considered a positive emotional signal. When a company has a good reputation, people infer its socially responsible activity to be mutually beneficial [26]. Moreover, consumers' desire to buy a company's products increases when they feel admiration for the company [37]. In such cases, CSV, which is considered to involve self-interested motives, is inferred to be a mutually beneficial activity. Therefore, the effects of CSV and philanthropic giving on consumers' attitudes are expected to be similar (H2a-b).
**H2a-b:**_In a company characterized by high warmth and high competence (HWHC), CSV and philanthropic giving have the same effect on company evaluation (H2a) and perceived firm value (H2b)._
By contrast, when a company has a poor reputation, people tend to perceive the company's efforts as self-interested [26]. When a person encounters an object characterized by low competence and low warmth, they will feel contempt and disgust [40]. Therefore, such companies will not benefit from a positive halo effect in evaluations of their CSV (H3a-b).
**H3a-b:**_In a company characterized by low warmth and low competence (LWLC), philanthropic giving (vs. CSV) influences consumers to evaluate the company more positively (H3a) and to perceive higher company value (H3b)._
#### 3.2.2 Hwlc, Lwhc
According to Kervyn et al. [47], companies perceived as having low warmth and high competence (LWHC) are considered to be luxury brands, whereas organizations with high warmth and low competence (HWLC) are believed to need government support in the form of subsidy funding. The effect of corporate social activities on different types of companies can be explained by the "dual motivational effect", which refers to two contradictory values: reliance on social activities and abstract conceptions of the parent brand [16].
Luxury brands are associated with the self-enhancement concept, which causes motivational conflict in communications of corporate socially responsible initiatives [16]. According to Torelli et al. [16], the evaluation of a luxury brand tends to decline in the presence (vs. absence) of CSR information. The \"luxury\" tag automatically activates self-enhancement values of dominance over people and resources [53]. By contrast, CSR activates self-transcendence values and drives prosocial activities, such as caring for society [54]. Verplanken and Holland [54] asserted that this effect would not emerge with other brand concepts that do not have motivational conflicts with CSR.
Therefore, a company perceived as LWHC may face conflict when engaging in social activities (H4a-b). Philanthropic giving is considered to be a more selfless activity than CSV, which produces a positive perception of CSV.
**H4a-b**: _In a company characterized by low warmth and high competence (LWHC), CSV (vs. philanthropic giving) influences consumers to evaluate the company more positively (H4a) and to perceive higher company value (H4b)._
The emotion of pity is closely related to companies characterized by high warmth and low competence [40]. When HWLC companies initiate philanthropic giving, the emotion of pity may be enhanced by the assimilation effect and may result in negative attitudes or behaviors toward the companies' CSR initiatives. Therefore, because philanthropic giving activities are considered to be more selfless than CSV activities, philanthropic giving will produce a stronger emotion of pity and heightened negative attitudes (H5a-b).
**H5a-b**: _In a company characterized by high warmth and low competence (HWLC), CSV (vs. philanthropic giving) influences consumers to evaluate the company more positively (H5a) and to perceive higher company value (H5b)._
## 4 Materials and Methods
### Stimulus Material and Instrument Development
Participants were told that the survey involved determining how consumers evaluate the corporate activities of a leading agrifood company producing home meal replacement products in Korea. The participants began the study by reading one of four manipulated corporate history timelines. A total of 5 warm events, 5 competent events, and 11 neutral events were assembled to manipulate the warmth and the competence of the focal company (Table 1, Appendix A). The warm events included corporate activities concerning the environment and the company's employees (e.g., initiating an animal welfare program). Competence events mainly included events related to company products and services (e.g., obtaining Hazard Analysis and Critical Control Point certification). Neutral events included events related to daily activities, such as moving to a new office. To check whether the timelines were well manipulated, the researchers asked the participants to rate their perceptions of the company's warmth and competence on a three-item five-point scale (Appendix B).
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Type of Events Inserted** & **HWHC** & **HWLC** & **LWHC** & **LWLC** \\\\ \\hline Warm events & 5 & 5 & 0 & 0 \\\\ Competent events & 5 & 0 & 5 & 0 \\\\ Neutral events & 1 & 6 & 6 & 11 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Number of inserted events for each corporate stereotype.
The participants were subsequently instructed to read one of two excerpts (Appendix C). Each excerpt represented a different social value creation strategy: philanthropic giving or CSV. Each scenario began with a thorough narrative of the company, including its responsible ethical actions and their effects. The excerpts were adapted and modified based on the actual corporate practice of Danone, "Grameen Danone Foods" [55]. Danone developed a yogurt called _Shakti Doi_ that has a high nutritional value and is affordable to the poor in Bangladesh [55; 56]. Rather than making donations, Danone retails _Shakti Doi_ for between three and four cents. In the present study, a rice-porridge product was used instead of yogurt, and the target population was "elderly people and children in an urban area".
According to Porter and Kramer [8], the starting point of business planning involves highlighting the possibility of earning business profits while simultaneously addressing societal problems. Compared to philanthropic giving, CSV focuses more explicitly on directly creating real business profits. Therefore, the CSV excerpt referred to R&D for a new rice-porridge product, and the philanthropic giving excerpt referred to the company donating an existing rice-porridge product. Only the CSV scenario described a result for the company--a gradual increase in sales. As both CSV and philanthropic giving create social value, both scenarios contained the result of decreasing the societal calcium deficiency rate by either developing or donating rice-porridge.
To check the manipulation of each scenario, the researchers asked the participants to evaluate the perceived social value and business value (\\(\\alpha\\) = 0.758) of each activity (Appendix B). Moreover, the congruency between corporate core competence and social activity was controlled and measured (Appendix B). Previous studies have consistently asserted that companies should support causes that match logically with their products or brand images to elicit positive consumer behaviors [49; 57]. Table 2 summarizes the manipulated points in the philanthropic giving and CSV scenarios.
Subsequently, the participants answered a series of questions about consumer responses to socially responsible corporate activities. Prior studies have categorized consumer responses into psychological and behavioral outcomes [58]. As the present study used a fictitious company as a stimulus, we used a psychological variable for the corporate evaluation. Specifically, we used Brown and Dacin's [59] measurement of corporate evaluation, which considers a company's overall degree of favorability. The participants evaluated the fictitious company on a five-point scale (Appendix B).
In addition to examining companies' degrees of favorability, many prior studies have measured the impact of socially responsible activities on company value using market-based measures, such as price per share or share price appreciation [60]. Stock prices determine market value [60]. Since the present study is concerned with consumers' perceptions, we asked the participants to evaluate the stock price of the fictitious company, ABC AgriFood Inc., in relation to the average stock price in Korean currency (Korean Won, KRW). We assumed the average stock price to be that of agrifood companies of similar size and profit and with a similar number of listed stocks.
## 5 Results
A survey-based experiment was conducted. Data were collected using both a paper-based survey and a web-based survey system. This study recruited 212 undergraduate and graduate students from business- and economics-related classes in Seoul and its greater urban area. An additional 268 participants were office workers recruited through a web-based survey system. The most important filtering criterion for these workers was whether they worked in an agrifood-related company. Upon
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Manipulated Points** & **Philanthropic Giving Scenario** & **CSV Scenario** \\ \hline Perceived social value created & Yes & Yes \\ Perceived business value created & No & Yes \\ Perceived congruency between corporate core competence and social initiative & Yes & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 2: Manipulated points in the philanthropic giving and CSV scenarios.
completion of the survey, the participants were provided with a meal coupon or book gift card ($3-$55) for their participation. Each of the 480 participants (49.4% male, 50.6% female) was randomly assigned to one of the conditions formed by the four previously described corporate stereotypes (LWLC, HWLC, LWHC, and HWHC) \(\times\) two socially responsible corporate activities (CSV and philanthropic giving).
### Manipulation Check
Before analyzing the effects of philanthropic giving and CSV on consumer responses, the manipulations of the stimuli were checked. First, the result of one-way analysis of variance (ANOVA) demonstrated that the corporate history timelines were well manipulated in terms of warmth and competence (\\(p<0.001\\)). Corporate history timelines with competence events (LWHC: m = \\(-0.371\\), SD = 0.853; HWHC: m = 0.309, SD = 0.837) differed significantly from those with non-competence events (LWLC: m = \\(-0.528\\), SD = 1.128; HWLC: m = \\(-0.196\\), SD = 0.909) at a 1% level. Moreover, corporate history timelines with warm events (HWLC: m = 0.588, SD = 0.712; HWHC: m = 0.623, SD = 0.736) differed significantly from those with non-warm events (LWLC: m = \\(-0.873\\), SD = 0.758; LWHC: m = \\(-0.428\\), SD = 0.864) at a 1% level.
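For illustration, such a comparison can be reproduced from the summary statistics alone; the SciPy snippet below contrasts the warmth ratings of the HWHC and LWLC timelines using the reported means/SDs and the group sizes from Table 3. This is an illustrative re-computation, not part of the original analysis pipeline.

```python
from scipy.stats import ttest_ind_from_stats

# Warmth ratings: HWHC (m = 0.623, SD = 0.736, n = 124)
# vs. LWLC (m = -0.873, SD = 0.758, n = 112)
t, p = ttest_ind_from_stats(mean1=0.623, std1=0.736, nobs1=124,
                            mean2=-0.873, std2=0.758, nobs2=112)
print(f"t = {t:.2f}, two-sided p = {p:.4g}")
```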
The participants perceived the philanthropic giving and CSV excerpts differently. When encountering a CSV scenario, they tended to perceive higher business value (\\(p<0.05\\)) and lower social value (\\(p<0.001\\)) than they did when encountering philanthropic giving. As expected, the participants did not perceive a difference in congruency between social activity and the company itself (\\(p=0.055\\)).
### Main Effect of Socially Responsible Activities
To examine the main effects of philanthropic giving and CSV on participant response (H1a-b), an independent \\(t\\)-test (one-tailed test) was conducted. There was no significant difference between the participants who read the scenario of philanthropic giving and those who read the CSV scenario in terms of corporate evaluation (M\\({}_{\\text{PG}}\\) = 3.66, M\\({}_{\\text{CSV}}\\) = 3.70, \\(p\\) = 0.259) or perceived firm value (M\\({}_{\\text{PG}}\\) = 116,849, M\\({}_{\\text{CSV}}\\) = 119,839, \\(p\\) = 0.129). However, this result depends on the corporate stereotype. Our findings support the view that considering corporate stereotypes is key to understanding how consumers respond to differed social initiatives. Although firms generally communicate their social initiatives to elicit positive consumer responses, the results obtained herein suggest that, in some circumstances, these initiatives have an insufficient effect.
### Moderating Effect of Corporate Stereotype
To examine the moderating effect of corporate stereotypes on the relationship between corporate socially responsible activities and consumer response, the data were analyzed using an independent \\(t\\)-test (Figure 1).
#### 5.3.1 HWHC Company
When presented with a warm and competent company, there was no significant difference between the participants who read the philanthropic giving scenario and those who read the CSV scenario in terms of corporate evaluation (M\\({}_{\\text{PG}}\\) = 3.81, M\\({}_{\\text{CSV}}\\) = 3.66, \\(p\\) = 0.113) or perceived firm value (M\\({}_{\\text{PG}}\\) = 124,401, M\\({}_{\\text{CSV}}\\) = 119,017, \\(p\\) = 0.181). Consumers perceive socially responsible corporate activities as being mutually beneficial when a company has a good reputation [26]. The emotion of admiration is closely linked to people's perception of an entity as warm and competent [40]. Thus, consumers will perceive even those warm and competent companies that include self-interested motives in communicating their socially responsible activities as mutually beneficial (cf., CSV).
#### 5.3.2 LWLC Company
By contrast, the moderating role of the low-warmth low-competence stereotype in the effect of socially responsible corporate initiatives was proven to be insignificant. There was no difference between the participants who read the philanthropic giving scenario and those who read the CSV scenario in terms of corporate evaluation (M\\({}_{\\text{PG}}\\) = 3.48, M\\({}_{\\text{CSV}}\\) = 3.52, \\(p\\) = 0.392) and perceived firm value (M\\({}_{\\text{PG}}\\) = 115,322, M\\({}_{\\text{CSV}}\\) = 115,019, \\(p\\) = 0.474). The expectation was that philanthropic giving would have a stronger effect than CSV, but the results revealed no difference between the two initiatives. This may be explained by the emotions linked to each combination of high-low competence-warmth stereotypes. Studies have shown that LWLC stereotype may induce the emotion of contempt [40]. According to Cuddy et al. [40], contempt-related emotions elicit passively harmful actions, such as distancing, exclusion, and rejection. In cases of LWLC companies, consumers may distance themselves from or reject socially responsible corporate activities, resulting in no difference between philanthropic giving and CSV. This assertion is supported by the results presented in Table 3. Corporate stereotypes resulted in significant differences in corporate evaluation (F = 5.842, \\(p\\) < 0.01), with the post-hoc test showing that the LWLC stereotype evoked the lowest corporate evaluations.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Corporate Stereotype** & **N** & **Mean** & **S.D.** & **F** & \(p\) & \multicolumn{2}{c}{**Post-Hoc (Duncan)**} \\ \hline LWLC & 112 & 3.50 & 0.684 & & & 3.50 & \\ HWLC & 122 & 3.83 & 0.585 & 5.842 & \textless{}0.01 & & 3.83 \\ LWHC & 120 & 3.65 & 0.545 & & & 3.65 & \\ HWHC & 124 & 3.73 & 0.664 & & & 3.73 & 3.73 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Difference in corporate evaluation of different corporate stereotypes.
Figure 1: Results of the independent \\(t\\)-test of corporate evaluation (**a**) and perceived firm value (**b**).
#### 5.3.3 HWLC Company
The moderating role of the high-warmth low-competence stereotype in the effect of corporate social initiatives was proven to be significant. When a warm but less competent company was presented, the participants who read the CSV scenario evaluated the company as more valuable (M\\({}_{\\text{PG}}\\) = 107,966, M\\({}_{\\text{CSV}}\\) = 122,648, \\(p\\) < 0.01). However, the participants' corporate evaluations of the scenarios of philanthropic giving and CSV were not very different (M\\({}_{\\text{PG}}\\) = 3.80, M\\({}_{\\text{CSV}}\\) = 3.86, \\(p\\) = 0.271).
The emotion of pity is closely related to companies perceived as HWLC [40]. Cuddy et al. [40] suggested that pity involves sadness and depression, which may lead to inaction, avoidance, or neglect. An example of such a behavior would be turning off the TV during a commercial showing starving children. When an HWLC company initiates philanthropic giving, the emotion of pity may be enhanced through the assimilation effect, potentially resulting in negative attitudes or behaviors toward the company's socially responsible activities.
#### 5.3.4 LWHC Company
The moderating role of the low-warmth high-competence stereotype in the effect of social initiatives on consumer response was partially proven. When presented with a less warm but competent company, the participants who read the CSV scenario evaluated the company as more attractive (M\\({}_{\\text{PG}}\\) = 3.56, M\\({}_{\\text{CSV}}\\) = 3.75, \\(p\\) < 0.05). However, there was no significant difference between the participants who read the philanthropic giving scenario and those who read the CSV scenario in terms of perceived firm value (M\\({}_{\\text{PG}}\\) = 119,838, M\\({}_{\\text{CSV}}\\) = 121,817, \\(p\\) = 0.329).
Kervyn et al. [47] asserted that LWHC companies represent luxury brands because they specifically target wealthy consumers, resulting in the general public perceiving them as less warm. Luxury brands generally activate self-enhancement values of dominance over people and resources [53]. These self-enhancement values may present conflicts when such companies initiate prosocial activities. Because CSV is a more self-interested activity, it may produce less conflict than philanthropic giving.
## 6 Discussion
### Theoretical Implications
First, this study is one of the first investigations of the effect of CSV on consumer response. Prior research on CSV has focused on extending CSV-related theory [61; 62] or on developing business strategy in specific societal fields [56; 63]; in other words, these studies have focused more on the internal perspective of the firm. As companies initiate prosocial activities based on their assumptions of consumers' rewards for their support [12], this study contributes to the extant knowledge by examining CSV from an external perspective.
Second, though the originality and effectiveness of CSV are still debated, our results show that the two studied activities affect consumers differently, according to the type of company. This suggests the possibility of CSV use as one communication strategy and socially responsible corporate activity that corporations can follow.
Third, this study examines the moderation effect of prior attitudes toward companies. A few prior studies have addressed how evaluations of corporate social initiatives differ depending on firms' prior reputations [15; 16; 26]. This study expands the scope of these prior attitude-related studies by using the two fundamental dimensions of warmth and competence employed when judging other people or objects. The results of the present study show that the effects of socially responsible corporate activities vary depending on consumers' prior attitudes towards the implementing companies.
Fourth, this study expands the use of the stereotype content model when evaluating companies or brands. The two dimensions of warmth and competence have typically been used in studies judging other people or nations. Because attempts to use these dimensions in brand evaluations are relatively recent, only a limited number of studies exist [39; 47]. This study categorizes and examines the concrete effects of four corporate stereotypes of a hypothetical company. The corporate history timeline of the company was manipulated to describe warm, competent, and neutral events. The participants perceived the four types of timelines differently (\(p<0.001\)).
Lastly, this study supports the existence of a halo effect of prior reputation. The halo effect is the bias of a rater who fails to discriminate between conceptually distinct measures, such that the judgment on one measure spills over to another [26]. According to Bae and Cameron [26], people infer corporate social activities to be mutually beneficial when a company has a good reputation, but self-interested when a company has a bad reputation. However, our findings show that the halo effect exists only for positive stereotypes. For warm and competent companies that evoke the emotion of admiration, even a self-interested activity, such as CSV, can be evaluated positively due to the halo effect. By this logic, a company with a negative stereotype should be evaluated more negatively for CSV than for philanthropic giving; however, the findings show no difference between the effects of philanthropic giving and CSV. We can assume that there is no halo effect when a negative stereotype exists a priori because consumers either ignore or reject evaluations of any associated socially responsible corporate activity.
### Practical Implications
Rather than focusing only on the effects of socially responsible activities themselves, the present study attempted to discover more appropriate activities for each type of agrifood organization. Our findings support the view that considering corporate stereotypes is key to understanding how consumers respond to different types of social activities.
One way that the Korean government has sought to grow the sustainable agrifood industry is by supporting the foundation of small and medium enterprises (SMEs) in agrifood. The number of these agricultural SMEs has increased gradually, from 10,644 in 2010 to 17,585 in 2014 (Rae and Cameron, 2016). Such SMEs are founded exclusively within the agricultural industry and related value chain activities (e.g., process, retail, restaurant, or lodging businesses). The findings of this study may offer guidance to top management and corporate social activity practitioners within various types of agrifood companies for establishing strategic and effective social activities. In other words, this research provides some guidance to agrifood companies that choose either philanthropic giving or CSV activities.
Companies should communicate their social value creation activities carefully based on their consumer stereotypes. For some companies, too much appeal based on social activities may trigger relatively negative consumer responses. Firstly, for competent but less warm organizations, CSV is a more appropriate corporate social activity than philanthropic giving. To change their solely competence-oriented images, many organizations initiate corporate social activities that aim to create social value. However, consumers may perceive such activities as contradictory. Compared to philanthropic giving, CSV approaches may be more effective in reducing this contradiction. Secondly, CSV is also more appropriate as a socially responsible corporate activity for warm but incompetent agrifood organizations. As consumers already perceive these organizations as warm, philanthropic giving, which emphasizes only the creation of social value, may trigger avoidance or neglect. Most warm organizations tend to match their social activities with their missions and visions by focusing exclusively on social value. However, the results of the present study reveal the possibility of consumers' non-preference for these companies' exclusive emphasis on social value.
Some companies may be less concerned about the choice between philanthropic giving and CSV. The results of the present study show that consumers may ignore the type of socially responsible corporate activity for companies perceived as warm and competent or not warm and incompetent. However, practitioners should recognize the different reasons for these perceptions. For warm and competent companies with good reputations, consumers may perceive even self-interested social activities as mutually beneficial. By contrast, for unwarm and incompetent companies with bad reputations, consumers may not even concern themselves with evaluating the companies' socially responsible corporate activities. Such situations might occur due to the structure of Korea's agrifood industry, which is characterized by a large number of SMEs. SMEs have limited opportunities to communicate directly with consumers due to their limited resources. In such cases, knowledge about a company's reputation or evaluation is important for planning the company's communication strategy.
## 7 Conclusions
The main goal of the present study was to examine the influence of two socially responsible corporate activities on people's responses toward the company. In addition, the present study addresses the effects of a priori perceptions of companies by using corporate stereotypes as moderators. The results show that the type of socially responsible corporate activity (CSV vs. philanthropic giving) does not influence corporate evaluations. However, this result depends on consumers' prior corporate stereotypes. Although firms generally communicate their social initiatives to elicit a positive consumer response, the results show an insufficient effect in some circumstances. Consumers evaluate an unwarm but competent company more attractively and place higher value on an incompetent but warm company, according to the responses of the subjects who read the CSV scenario (vs. the philanthropic giving scenario).
However, this study has several limitations. Although the present study provides first insights into the nuances of consumer responses toward CSV and philanthropic giving, the external validity of the experiments conducted herein must be strengthened through follow-up field research.
First, this study uses a fictitious company to examine the effects of corporate social activity. In reality, consumer responses to philanthropic giving are inevitably influenced by brand preferences for specific firms. To improve the external validity of the present study, additional research is needed to categorize real companies into the four types of corporate stereotypes.
Second, the present study examines the effects of socially responsible corporate activities using cases involving only one type of social activity at a time. Usually, companies engage in several simultaneous social activities over a long period of time. Thus, future research should examine the complex effects of several simultaneous socially responsible activities.
Third, this research is also limited in that it considers only the agrifood industry and Korean consumers. Furthermore, half of the study sample comprised undergraduate or graduate students. As a result, the study findings may represent the views of Korean people, primarily students, rather than the entire population, and they may only be relevant for the agrifood industry. Thus, the findings should be interpreted with caution to avoid overgeneralization. Future studies on the effects of socially responsible corporate activities across various industries and from a demographic perspective would be highly relevant.
An additional limitation concerns company size. Hartmann [9] showed that if an industry structure is heterogeneous in terms of the size of its enterprises and relies primarily on SMEs, then societal pressure regarding socially responsible corporate activities is likely to differ across the food chain. The Korean agrifood industry has a heterogeneous structure characterized by a gradual increase in the number of SMEs. Future studies should therefore consider company size when evaluating a company's socially responsible corporate activities.
Lastly, this study focuses only on consumer responses to socially responsible corporate activities. The holistic stakeholder approach has been an important issue in sustainability studies because companies addressing stakeholder concerns have been shown to perform better than companies that do not address these concerns [66]. Moreover, different stakeholders may have different CSR preferences and needs [67]. Further studies could broaden our understanding of the participant spectrum by including suppliers and/or employees.
**Acknowledgments:** This work was carried out with the support of the "Cooperative Research Program for Agriculture Science & Technology Development (Project No. PJ0113902016)", Rural Development Administration, Republic of Korea.
**Author Contributions:** All of the authors made contributions to the work in this paper. Dongmin Lee, Junghoon Moon, Jaeseok Jeong and Young Chan Choe designed the experiments; Dongmin Lee and Junghoon Moon performed the experiments, analyzed the data, and wrote the paper. Dongmin Lee, Junghoon Moon, Jaeseok Jeong and Young Chan Choe revised the manuscript. All authors read and approved the final manuscript.
**Conflicts of Interest:** The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results. The authors declare no conflict of interest.
## Appendix A Events Used to Describe ABC AgriFood Inc.'s Corporate History Timelines
### Warm Events (5)
* Promulgation of ethical management.
* Consumer Complaints Management Program (CCMS) certification acquisition.
* Selection as \"Good Korean company to work for\" (GWP Korea, Fortune Korea).
* Initiation of animal welfare programs.
* Official recognition as an eco-friendly management company.
### Competent Events (5)
* All-out execution for zero chemical additives.
* Hazard Analysis and Critical Control Point (HACCP) certification acquisition in all factories.
* #1 Market share in Home Meal Replacement domestic market.
* First development of space food in Korea.
* Renewal of the Enterprise Resource Planning (ERP) system.
### Neutral Events (11)
* Changing of the company name to \"ABC AgriFood Inc.\".
* Completion of headquarters building and declaration of corporate culture.
* Launching of official homepage.
* Isu Park assumes role of CEO.
* Establishment of a general organization for management support.
* Ceremony for 10th anniversary of incorporation and establishment of new CI.
* Publishing of history book of ABC AgriFood Inc.
* Completion of training institute at Icheon, Gyeonggi province.
* Separation of frozen food business department.
* Relocation of office building to Jangchung-dong.
## Appendix B Operationalization of Variables
### Perceived Competence
ABC AgriFood Inc. is
(1) effective; (2) capable; (3) competitive.
### Perceived Warmth
ABC AgriFood Inc. is
(1) warm; (2) friendly; (3) good-natured.
### Perceived Congruency
ABC AgriFood Inc.
1. is an expert in this social activity.
2. is sufficiently believable to initiate this social activity.
3. is sufficiently knowledgeable to initiate this social activity.
4. is qualified to initiate this social activity.
5. has a good match with this social activity.
### Perceived Business Value
ABC AgriFood Inc. can
1. increase profitability through this initiative.
2. enlarge market share through this initiative.
3. increase its growth rate through this initiative.
4. obtain new customers through this initiative.
5. develop new products or services through this initiative.
6. open new markets through this initiative.
7. increase its stock price through this initiative.
### Perceived Social Value
ABC AgriFood Inc. can
1. solve social problems through this initiative.
2. give something back to the community through this initiative.
3. bring benefits to society through this initiative.
### Perceived Firm Value
The average stock price of food companies similar to ABC AgriFood Inc. is 114,920 KRW. Assume that their firm size, total sales, and number of listed stocks are the same. What do you think is the stock price of ABC AgriFood Inc.?
### Corporate Evaluation
ABC AgriFood Inc. is very unfavorable (1)-very favorable (5).
## Appendix C Scenarios about Socially Responsible Corporate Activity
### Philanthropic Giving Scenario (387 Words)
The number of vulnerable social groups, especially the elderly and grandparent-led families (consisting of grandparents and grandchildren), is increasing rapidly in rural areas of Korea, and the necessity of maintaining their dietary lives and nutrition intake has increased. Acknowledging this issue, ABC AgriFood Inc. started a donation program in 2011 for the nutritional improvement of vulnerable social groups in rural areas.
Porridge was selected as the product which will be donated for the nutrition improvement program. Porridge is suitable for the elderly and children who have weak digestive functions and is easy to eat. Moreover, studies indicate that the elderly and children living in rural areas are suffering from severe vitamin and calcium deficiency, and the necessary nutrients of porridges can easily be altered by the ingredients. In particular, since porridge is one of the major products of ABC AgriFood Inc., existing products could be donated to the nutrition improvement program.
After six months of preparation, starting from June 2011, ABC AgriFood Inc. has been donating two types of its products, the "Healthy Porridge Series" (Healthy Tuna Porridge, Healthy Vegetable Porridge), to selected vulnerable social groups in rural areas. The products are regularly delivered with the help of regional volunteer groups. Employees of ABC AgriFood Inc. also take part occasionally in delivering the products, which encourages other employees to volunteer as well. In 2011, the products were initially delivered to the rural areas of one province, Gyeongsangnam-do, but as of 2014 the donation project has been expanded to three provinces: Gyeongsangnam-do, Gyeongsangbuk-do, and Jeollanam-do. The porridges are delivered to families or to village halls and similar facilities in the rural areas.
Research showed that ABC AgriFood Inc.'s "Healthy Porridge Series Donation Program" had a great influence on improving the nutrition of the elderly and children in the target areas. Nutrient intake measurements of the main targets of this program showed that the rate of subjects reaching the recommended calcium requirement increased from 21.4% in 2011 to 59.8% in 2013. In the case of vitamins, the rate of subjects reaching the recommended requirement improved from 50.5% in 2011 to 76.2%. ABC AgriFood Inc. has donated about 300,000,000 won to this program over the past three years and is making great efforts to continue the program.
### CSV Scenario (396 Words)
The number of vulnerable social groups, especially the elderly and grandparent-led families (consisting of grandparents and grandchildren), is increasing rapidly in rural areas of Korea, and the necessity of maintaining their dietary lives and nutrition intake has increased. Acknowledging this issue, ABC AgriFood Inc. started developing a new product in 2011 for the nutritional improvement of vulnerable social groups in rural areas.
Porridge was selected as the product to be developed for the nutrition improvement program. Porridge is suitable for the elderly and children who have weak digestive functions and is easy to eat. Moreover, studies indicate that the elderly and children living in rural areas are suffering from severe vitamin and calcium deficiency, and the necessary nutrients of porridges can easily be adjusted through the ingredients. In particular, since porridge is one of the major products of ABC AgriFood Inc., a new product was developed effectively based on existing know-how.
After approximately six months of preparation, two types of the "Healthy Porridge Series" (Healthy Tuna Porridge, Healthy Vegetable Porridge) were launched by ABC AgriFood Inc. Each porridge product weighs 180 g, and its calorie content has been adjusted for the elderly and children. The price of the product is 1000 won. The new product is 30% smaller in volume than the existing porridge product produced by ABC AgriFood Inc. but is 40% lower in price. The products were initially sold only in the rural areas of a single province, Gyeongsangnam-do, in 2011, but as of 2014 sales have been expanded to three provinces: Gyeongsangnam-do, Gyeongsangbuk-do, and Jeollanam-do. The products are currently supplied to major retail stores in rural areas, and in particular to convenience stores and retail stores near schools.
Research showed that ABC AgriFood Inc.'s "Healthy Porridge Series" had a great influence on improving the nutrition of the elderly and children in the target areas. Nutrient intake measurements of the main targets of this program showed that the rate of subjects reaching the recommended calcium requirement increased from 21.4% in 2011 to 59.8% in 2013. In the case of vitamins, the rate of subjects reaching the recommended requirement improved from 50.5% in 2011 to 76.2%. ABC AgriFood Inc. has invested about 300,000,000 won in this project and passed the break-even point in 2013. The project's return continues to rise as of 2014.
## References
* (1) De Bakker, F.G.; Groenewegen, P.; Den Hond, F. A bibliometric analysis of 30 years of research and theory on corporate social responsibility and corporate social performance. _Bus. Soc._**2005**, _44_, 283-317. [CrossRef]
* (2) Polonsky, M.J.; Speed, R. Linking sponsorship and cause related marketing: Complementarities and conflicts. _Eur. J. Market._**2001**, _35_, 1361-1389. [CrossRef]
* (3) Collins, M. Global corporate philanthropy and relationship marketing. _Eur. Manag. J._**1994**, _12_, 226-233. [CrossRef]
* (4) Stroup, M.A.; Neubert, R.L.; Anderson, J.W., Jr. Doing good, doing better: Two views of social responsibility. _Bus. Horiz._**1987**, _30_, 22-25. [CrossRef]
* (5) Porter, M.E.; Kramer, M.R. The competitive advantage of corporate philanthropy. _Harvard Bus. Rev._**2002**, _80_, 56-68.
* (6) Hildebrand, D.; Sen, S.; Bhattacharya, C. Corporate social responsibility: A corporate marketing perspective. _Eur. J. Market._**2011**, _45_, 1353-1364. [CrossRef]
* (7) Prahalad, C.K.; Hammond, A. Serving the world's poor, profitably. _Harv. Bus. Rev._**2002**, _80_, 48-59. [PubMed]
* (8) Porter, M.E.; Kramer, M.R. Creating shared value. _Harv. Bus. Rev._**2011**, _89_, 62-77.
* (9) Hartmann, M. Corporate social responsibility in the food sector. _Eur. Rev. Agric. Econ._**2011**, _38_, 297-324. [CrossRef]
* (10) Jones, P.; Comfort, D.; Hillier, D.; Eastwood, I. Corporate social responsibility: A case study of the UK's leading food retailers. _Br. Food J._**2005**, _107_, 423-435. [CrossRef]
* (11) Liapakis, A.; Costopoulou, C.; Sideridis, A. The corporate social responsibility in the Greek agri-food sector. In Proceedings of the 7th International Conference on Information and Communication Technologies in Agriculture, Food, and Environment, Kavala, Greece, 17-20 September 2015.
* (12) Levy, R. _Give and Take: A Candid Account of Corporate Philanthropy_; Harvard Business Press: Boston, MA, USA, 1999.
* (13) Bhattacharya, C.B.; Sen, S. Doing better at doing good: When, why, and how consumers respond to corporate social initiatives. _Calif. Manag. Rev._**2004**, _47_, 9-24. [CrossRef]
* (14) Du, S.; Bhattacharya, C.; Sen, S. Maximizing business returns to corporate social responsibility (CSR): The role of CSR communication. _Int. J. Manag. Rev._**2010**, _12_, 8-19. [CrossRef]
* (15) Dean, D.H. Consumer perception of corporate donations: Effects of company reputation for social responsibility and type of donation. _J. Adver._**2003**, _32_, 91-102. [CrossRef]
* (16) Torelli, C.J.; Monga, A.B.; Kaikati, A.M. Doing poorly by doing good: Corporate social responsibility and brand concepts. _J. Consum. Res._**2012**, _38_, 948-963. [CrossRef]
* (17) Michelini, L. Innovation for social change. In _Social Innovation and New Business Models: Creating Shared Value in Low-Income Markets_; Springer Science & Business Media: New York, NY, USA, 2012; pp. 1-18.
* (18) Campbell, L.; Gulas, C.S.; Gruca, T.S. Corporate giving behavior and decision-maker social consciousness. _J. Bus. Ethics_**1999**, _19_, 375-383. [CrossRef]
* (19) Gardberg, N.A.; Fombrun, C.J. Corporate citizenship: Creating intangible assets across institutional environments. _Acad. Manag. Rev._**2006**, _31_, 329-346. [CrossRef]
* (20) Crane, A.; Palazzo, G.; Spence, L.J.; Matten, D. Contesting the value of \"creating shared value\". _Calif. Manag. Rev._**2014**, _56_, 130-153. [CrossRef]
* (21) Piacentini, M.; MacFadyen, L.; Eadie, D. Corporate social responsibility in food retailing. _Int. J. Retail Distrib. Manag._**2000**, _28_, 459-469. [CrossRef]
* (22) Carroll, A.B. The pyramid of corporate social responsibility: Toward the moral management of organizational stakeholders. _Bus. Horiz._**1991**, _34_, 39-48. [CrossRef]
* (23) Lii, Y.-S.; Lee, M. Doing right leads to doing well: When the type of CSR and reputation interact to affect consumer evaluations of the firm. _J. Bus. Ethics_**2012**, _105_, 69-81. [CrossRef]
* (24) Jeong, H.J.; Paek, H.-J.; Lee, M. Corporate social responsibility effects on social network sites. _J. Bus. Res._**2013**, _66_, 1889-1895. [CrossRef]
* (25) McAlister, D.T.; Ferrell, L. The role of strategic philanthropy in marketing strategy. _Eur. J. Market._**2002**, _36_, 689-705. [CrossRef]
* (26) Bae, J.; Cameron, G.T. Conditioning effect of prior reputation on perception of corporate giving. _Public Relat. Rev._**2006**, _32_, 144-150. [CrossRef]
* (27) Meenaghan, T. The role of sponsorship in the marketing communications mix. _Int. J. Advert._**1991**, _10_, 35-47.
* (28) Quester, P.; Plewa, C.; Palmer, K.; Mazodier, M. Determinants of community-based sponsorship impact on self-congruity. _Psychol. Market._**2013**, _30_, 996-1007. [CrossRef]
* (29) Cornwell, T.B.; Maignan, I. An international review of sponsorship research. _J. Advert._**1998**, _27_, 1-21. [CrossRef]
* (30) Nan, X.; Heo, K. Consumer responses to corporate social responsibility (CSR) initiatives: Examining the role of brand-cause fit in cause-related marketing. _J. Advert._**2007**, _36_, 63-74. [CrossRef]
* (31) Vanhamme, J.; Lindgreen, A.; Reast, J.; van Popering, N. To do well by doing good: Improving corporate image through cause-related marketing. _J. Bus. Ethics_**2012**, _109_, 259-274. [CrossRef]
* (32) Grau, S.L.; Folse, J.A.G. Cause-related marketing (CRM): The influence of donation proximity and message-framing cues on the less-involved consumer. _J. Advert._**2007**, _36_, 19-33. [CrossRef]
* (33) Ross, J.K.; Patterson, L.T.; Stutts, M.A. Consumer perceptions of organizations that use cause-related marketing. _J. Acad. Market. Sci._**1992**, _20_, 93-97. [CrossRef]
* (34) Smith, S.M.; Alcorn, D.S. Cause marketing: A new direction in the marketing of corporate responsibility. _J. Serv. Market._**1991**, \(5\), 21-37. [CrossRef]
* (35) Fiske, S.T.; Cuddy, A.J.; Glick, P.; Xu, J. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. _J. Pers. Soc. Psychol._**2002**, _82_, 878-902. [CrossRef] [PubMed]
* (36) Cuddy, A.J.; Fiske, S.T.; Glick, P. Warmth and competence as universal dimensions of social perception: The stereotype content model and the BIAS map. _Adv. Exp. Soc. Psychol._**2008**, _40_, 61-149.
* (37) Aaker, J.; Vohs, K.D.; Mogilner, C. Nonprofits are seen as warm and for-profits as competent: Firm stereotypes matter. _J. Consum. Res._**2010**, _37_, 224-237. [CrossRef]
* (38) Caprariello, P.A.; Cuddy, A.J.; Fiske, S.T. Social structure shapes cultural stereotypes and emotions: A causal test of the stereotype content model. _Group Process. Intergr. Relatsh._**2009**, _12_, 147-155. [CrossRef] [PubMed]
* (39) Aaker, J.L.; Garbinsky, E.N.; Vohs, K.D. Cultivating admiration in brands: Warmth, competence, and landing in the \"golden quadrant\". _J. Consum. Psychol._**2012**, _22_, 191-194. [CrossRef]
* (40) Cuddy, A.J.; Fiske, S.T.; Glick, P. The BIAS map: Behaviors from intergroup affect and stereotypes. _J. Pers. Soc. Psychol._**2007**, _92_, 631-648. [CrossRef] [PubMed]
* (41) Judd, C.M.; James-Hawkins, L.; Yzerbyt, V.; Kashima, Y. Fundamental dimensions of social judgment: Understanding the relations between judgments of competence and warmth. _J. Pers. Soc. Psychol._**2005**, _89_, 899-913. [CrossRef] [PubMed]
* (42) Yzerbyt, V.; Provost, V.; Corneille, O. Not competent but warm really? Compensatory stereotypes in the French-speaking world. _Group Process. Intergr. Relatsh._**2005**, \\(8\\), 291-308. [CrossRef]
* (43) Chemers, M.M. Leadership effectiveness: An integrative review. In _Blackwell Handbook of Social Psychology: Group Processes_; Blackwell Publishers Ltd.: Oxford, UK, 2001; pp. 376-399.
* (44) Sinclair, L.; Fehr, B. Voice versus loyalty: Self-construals and responses to dissatisfaction in romantic relationships. _J. Exp. Soc. Psychol._**2005**, _41_, 298-304. [CrossRef]
* (45) Casciaro, T.; Lobo, M.S. When competence is irrelevant: The role of interpersonal affect in task-related ties. _Adm. Sci. Q._**2008**, _53_, 655-684. [CrossRef]
* (46) Choy, M.K.; Kim, M.S.; Kim, J.I. The effect of purchase incentive type and company awareness on consumer response: Focusing on the warmth-competence perception. _J. Korean Market. Assoc._**2013**, _28_, 21-44.
* (47) Kervyn, N.; Fiske, S.T.; Malone, C. Brands as intentional agents framework: How perceived intentions and ability can map brand perception. _J. Consum. Psychol._**2012**, _22_, 166-176. [CrossRef] [PubMed]
* (48) Du, S.; Bhattacharya, C.; Sen, S. Reaping relational rewards from corporate social responsibility: The role of competitive positioning. _Int. J. Res. Market._**2007**, _24_, 224-241. [CrossRef]
* (49) Becker-Olsen, K.L.; Cudmore, B.A.; Hill, R.P. The impact of perceived corporate social responsibility on consumer behavior. _J. Bus. Res._**2006**, _59_, 46-53. [CrossRef]
* (50) Hovland, C.I.; Janis, I.L.; Kelley, H.H. _Communication and Persuasion; Psychological Studies of Opinion Change_; Yale University Press: New Haven, CT, USA, 1953.
* (51) Campbell, M.C.; Kirmani, A. Consumers' use of persuasion knowledge: The effects of accessibility and cognitive capacity on perceptions of an influence agent. _J. Consum. Res._**2000**, _27_, 69-83. [CrossRef]
* (52) Ellen, P.S.; Webb, D.J.; Mohr, L.A. Building corporate associations: Consumer attributions for corporate socially responsible programs. _J. Acad. Market. Sci._**2006**, _34_, 147-157. [CrossRef]
* (53) Chartrand, T.L.; Huber, J.; Shiv, B.; Tanner, R.J. Nonconscious goals and consumer choice. _J. Consum. Res._**2008**, _35_, 189-201. [CrossRef]
* (54) Verplanken, B.; Holland, R.W. Motivated decision making: Effects of activation and self-centrality of values on choices and behavior. _J. Pers. Soc. Psychol._**2002**, _82_, 434-447. [CrossRef] [PubMed]
* (55) Edmondson, A.C.; Moingeon, B.; Dessain, V.; Jensen, A.D. Global Knowledge Management at Danone (A). Available online: [http://www.hbs.edu/faculty/Pages/item.aspx?num=35308](http://www.hbs.edu/faculty/Pages/item.aspx?num=35308) (accessed on 24 April 2014).
* (56) Pirson, M. Social entrepreneurs as the paragons of shared value creation? A critical perspective. _Soc. Enterp. J._**2012**, \(8\), 31-48. [CrossRef]
* (57) Varadarajan, P.R.; Menon, A. Cause-related marketing: A coalignment of marketing strategy and corporate philanthropy. _J. Market._**1988**, _52_, 58-74. [CrossRef]
* (58) Homburg, C.; Stierl, M.; Bornemann, T. Corporate social responsibility in business-to-business markets: How organizational customers account for supplier corporate social responsibility engagement. _J. Market._**2013**, _77_, 54-72. [CrossRef]
* (59) Brown, T.J.; Dacin, P.A. The company and the product: Corporate associations and consumer product responses. _J. Market._**1997**, _61_, 68-84. [CrossRef]
* (60) Orlitzky, M.; Schmidt, F.L.; Rynes, S.L. Corporate social and financial performance: A meta-analysis. _Organ. Stud._**2003**, _24_, 403-441. [CrossRef]
* (61) Moon, H.-C.; Parc, J.; Yim, S.H.; Park, N. An extension of Porter and Kramer's creating shared value (CSV): Reorienting strategies and seeking international cooperation. _J. Int. Area Stud._**2011**, _18_, 49-64.
* (62) Szmigin, I.; Rutherford, R. Shared value and the impartial spectator test. _J. Bus. Ethics_**2013**, _114_, 171-182. [CrossRef]
* (63) Spitzeck, H.; Chapman, S. Creating shared value as a differentiation strategy--The example of BASF in Brazil. _Corp. Gov._**2012**, _12_, 499-513. [CrossRef]
* (64) Thorndike, E.L. A constant error in psychological ratings. _J. Appl. Psychol._**1920**, \(4\), 25-29. [CrossRef]
* (65) Agricultural Corporation Statistics. 2015. Available online: [http://agricorp.ipsoskorea.com/sub3.html](http://agricorp.ipsoskorea.com/sub3.html) (accessed on 15 September 2016).
* (66) Polonsky, M.J.; Scott, D. An empirical examination of the stakeholder strategy matrix. _Eur. J. Market._**2005**, _39_, 1199-1215. [CrossRef]
* (67) McDonald, L.M.; Lai, C.H. Impact of corporate social responsibility initiatives on Taiwanese banking customers. _Int. J. Bank Market._**2011**, _29_, 50-63. [CrossRef]
**Keywords:** socially responsible corporate activity; corporate social responsibility; corporate stereotype; creating shared value; stereotype content model; agrifood industry
# Uncovering Dominant Social Class in Neighborhoods through Building Footprints: A Case Study of Residential Zones in Massachusetts using Computer Vision
Qianhui Liang and Zhoutong Wang
## 1 Introduction
Winston Churchill said, "we shape our buildings, and afterwards our buildings shape us." Although Churchill was not making a general statement, the philosophy behind this quote is that buildings and city form are connected to and influence the people living in them. In fact, scholars have widely accepted that cities are mirrors and microcosms of society and culture at large, with every viewpoint contributing something to their understanding [1]. For instance, in urban expansion theory, different patterns of urban expansion are associated with specific environmental costs, such as land consumption and mobility generation [2]. Those costs are related to housing prices, which lead to different market behaviors among different social classes and result in the concentration of dominant classes and the exclusion of other classes. Jacobs [3] likewise observed different parts of New York to conclude that a compact city form, compared to a less dense one, better improves the overall well-being of city dwellers. Another example is that indefensible urban form [4], e.g., neighborhoods in Five Oaks, Dayton, Ohio, has provoked a transition of homeowners from middle-income families to minority renters, which has resulted in a substantial decline in property values.
Meanwhile, US communities are growing increasingly homogenized. A recent study [5] suggests that income segregation between neighborhoods within the nation's greater metropolitan areas, such as Boston, increased by an average of 20 percent from 1990 to 2010. Affluent people choose to live in neighborhoods where almost everyone else is affluent, and poor people are driven to concentrate in the same communities. That is to say, rather than having a mixed and ambiguous composition, a given community tends to be occupied by a clear-cut social class. This trend has grown into an insidious problem in the U.S., leading to social and economic issues.
Given that, first, urban forms have strong connections with social attributes and, second, in most cases a neighborhood is occupied by a dominant social class, we hypothesize that it is possible to infer the dominant social class of an area by analyzing its corresponding urban form. In this paper, we train a model using a deep convolutional neural network architecture to predict income from figure-ground footprint maps of Massachusetts. The performance of the prediction model suggests that the footprint image dataset can shed light on the social class dwelling in a given zip-code area. Through hand-crafted feature engineering, we analyze the importance of a series of more interpretable morphological properties in predicting social class from images using a random forest model. This methodology provides an explanation of how urban form is related to social class and why other urban forms are outliers. The results provide new insight into social segregation and can be utilized as a tool for urban studies.
## 2 Literature Review
In recent years, a common representation of urban form extracts the "matrix of the longest and fewest lines", in other words, the "axial map" embedded in urban space. Space syntax [6] has been one of the quantitative methods for analyzing spatial configuration by examining this type of representation. It analyzes urban form by translating the representation of line matrices into a graph. This technique is widely used as a tool to examine the social impact of works by urban planners. However, established upon the theory of topology, this representation of the city discards all metric information and is rather limiting [7].
Meanwhile, a simpler representation is urban texture, reflected in figure-ground maps (Figure 1), which show the pattern of built and open spaces. It has been widely used as a simple but powerful way to distill structural urban information. Some influential scholars, including Colin Rowe [8], discussed the inversion of the solid-void relationship as a characteristic of "model cities". Through professional map-reading training, architects and urban designers have the ability to interpret figure-ground maps for analysis and comparison. In the late 1990s, some quantitative analysis was piloted. Richens [9] proposed the utilization of image processing techniques to analyze the figure-ground map representation, and Ratti [10] suggested adding height information (DEM) to figure-ground maps using image processing to measure geometric parameters.
Though informative, creating figure-ground diagrams was a time-consuming process a decade ago, and researchers therefore could not afford to mass-produce them. Hence, the representation has never been worked with at large scale. Since 2006, OpenStreetMap has become a welcome source of data as a crowdsourcing platform for figure-ground maps. However, as a Wikipedia for maps, constant quality control is provided by the community itself. Thus, it is hard to verify the
Figure 1: Figure-ground footprint map as a prevailing representation method in urban study
quality and completeness of the uploaded maps. Recently, paralleled by significant advances in the field of computer vision, Microsoft applied deep convolutional neural networks to identify building footprints from Bing satellite imagery and released 124 million building footprints to the OSM community.
In urban theory, income differences have been considered to be related to spatial characteristics [11]. However, previous research on social class and segregation has relied heavily on Census data [11][12][13] or GPS-coded surveys and is often limited by time and cost. Drawing on previous theories in the field of urban studies that hold figure-ground maps to reflect the 'content underneath' cities, we make three arguments: first, the figure-ground map is a simple but fruitful representation of urban form that can be further utilized to infer social classes. Second, it captures urban form in a comprehensive manner with a complex set of metrics instead of a single aspect. Third, using this method, it is possible to discover which aspects of the visual features of urban form contribute to the interpretation of social class, and to what extent.
## 3 Methodology
To answer those questions, we seek help from deep convolutional neural networks (DCNNs). DCNNs have achieved great success in various computer vision tasks, such as object detection and scene segmentation. They have the ability to capture and interpret visual features, though the reasons for their decisions remain opaque.
Arguably, although the accuracy of the DCNN model is very high, it is difficult to understand how the model makes its decisions. To further explain how social classes are inferred from figure-ground maps, we hand-craft feature engineering algorithms to distill morphological features, e.g., building density, that have specific meanings in urban form. Further, we train a random forest model to fit these features to the corresponding income level. By analyzing the importance of each variable in the model, we can discover how each feature contributes to the prediction.
### Data Preprocessing
In this experiment, for the consistency of income and land-use data, we limit the study zone to all residential areas in Massachusetts. We first obtain land-use data from the Massachusetts government website and select geo-shapes that correspond only to residential zones [14]. We randomly sample 500,000 points in all the residential land-use areas while constraining every point to be at least 80 meters away from one another. Using each sample point as the centroid, we generate figure-ground image patches with a bounding box of 200 m × 200 m from the footprint GeoJSON data of the Microsoft footprint dataset [15]. Using zip-code-level income data from the U.S. Census in 2017 [16], we annotate each image with the income level of the zip-code in which it is located. The income level is sorted into 8 categories according to the U.S. Census survey (Table 1).
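As a rough illustration of this sampling step, the sketch below draws candidate points inside the residential polygons while enforcing the 80 m minimum spacing. The file name, the land-use column name, and the CRS are assumptions, and the naive distance check is only illustrative; a real pipeline would need a spatial index.

```python
# Hypothetical sketch of the point-sampling step; file name, column name,
# and CRS (EPSG:26986, a meter-based projection for Massachusetts) are assumptions.
import random
import geopandas as gpd
from shapely.geometry import Point

residential = gpd.read_file("ma_landuse.shp")
residential = residential[residential["USE_DESC"] == "Residential"]  # assumed column
residential = residential.to_crs(epsg=26986)

union = residential.unary_union
minx, miny, maxx, maxy = residential.total_bounds

points, MIN_DIST = [], 80.0  # 80 m minimum spacing between sample points
while len(points) < 500_000:
    p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
    # O(n^2) check shown for clarity; use an R-tree (shapely.STRtree) at this scale.
    if union.contains(p) and all(p.distance(q) >= MIN_DIST for q in points):
        points.append(p)
```

Each accepted point would then be expanded into a 200 m × 200 m bounding box and rasterized against the footprint GeoJSON to produce one figure-ground patch.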
Across the categories, the amount of data is unbalanced. Therefore, to achieve better testing performance, we randomly sample 50,000 images from these categories and divide the image corpus into training, validation, and test sets with a ratio of \(0.7:0.15:0.15\).
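A minimal sketch of the splitting step, assuming `paths` and `labels` hold the sampled image file paths and their 0-7 income categories (scikit-learn is an assumed but natural choice):

```python
from sklearn.model_selection import train_test_split

# Stratified 0.7 / 0.15 / 0.15 split; `paths` and `labels` are assumed prepared.
train_x, rest_x, train_y, rest_y = train_test_split(
    paths, labels, train_size=0.70, stratify=labels, random_state=0)
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, train_size=0.50, stratify=rest_y, random_state=0)
```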
Figure 2: Examples of images from 8 categories
### Convolutional Neural Networks
| Household Income Range | Millions of Households | Percent of Total | Comments | Training Category |
| --- | --- | --- | --- | --- |
| Less than $15,000 | 14.1 | 11.2% | | 0 |
| $15,000 - $24,999 | 12.1 | 9.6% | Federal Poverty Level | 1 |
| $25,000 - $34,999 | 11.9 | 9.4% | Low Income | 2 |
| $35,000 - $49,999 | 16.3 | 12.9% | Middle Class | 3 |
| $50,000 - $74,999 | 21.5 | 17.0% | Median | 4 |
| $75,000 - $99,999 | 15.5 | 12.3% | Middle Class | 5 |
| $100,000 - $149,999 | 17.8 | 14.1% | Upper Class | 6 |
| $150,000 - $199,999 | 8.3 | 6.6% | High Income | 7 |
| $200,000+ | 8.8 | 7.0% | Obama, Trump High Income | 7 |
| Total | 126.3 | 100% | | |

Table 1: "Household Income Survey," U.S. Census, and our training categories
Figure 4: Total number of images collected in each category
Figure 3: Generated location points within residential areas

In this paper, we utilize a DenseNet network, which connects each layer to every other layer in a feed-forward fashion. As explained in the original paper [17], the main idea that separates it from other DCNN architectures is that the \(l^{th}\) layer receives the feature-maps of all preceding layers as input, concatenated into a single tensor:
\\[x_{l}=H_{l}([x_{0},x_{1},\\cdots,x_{l-1}])\\]
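This connectivity pattern can be sketched in PyTorch as follows. It is a minimal illustration of the concatenation rule above, not the full DenseNet-121 (with bottleneck and transition layers) used in the experiments.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One composite function H_l (BN -> ReLU -> 3x3 conv) applied to the
    concatenation of all preceding feature maps."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, features):                 # features: [x_0, ..., x_{l-1}]
        return self.h(torch.cat(features, dim=1))

class DenseBlock(nn.Module):
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers))

    def forward(self, x):
        features = [x]                            # x_0
        for layer in self.layers:
            features.append(layer(features))      # x_l = H_l([x_0, ..., x_{l-1}])
        return torch.cat(features, dim=1)
```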
First, to interpret the DCNN, we adopt the Class Activation Mapping (CAM) [18] technique to calculate and visualize discriminative regions. Combining the DenseNet model architecture with the CAM technique generates a map that shows to what extent different regions contribute to the decision-making process.
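Because torchvision's DenseNet ends in global average pooling followed by a single linear classifier, the CAM for a class is simply the classifier-weighted sum of the final convolutional feature maps. The sketch below assumes a trained torchvision DenseNet-121 and is illustrative rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(num_classes=8).eval()  # trained weights assumed loaded

@torch.no_grad()
def class_activation_map(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """image: (1, 3, H, W) input tensor; returns an (H, W) heat map for target_class."""
    fmaps = F.relu(model.features(image))               # (1, C, h, w) final conv features
    weights = model.classifier.weight[target_class]     # (C,) classifier weights
    cam = torch.einsum("c,chw->hw", weights, fmaps[0])  # weighted sum over channels
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```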
### Feature Engineering and Random Forest
After confirming with the DCNN that figure-ground maps are strongly indicative of income levels, the next step is to find discriminative features that can explain the relationship between income and pattern configurations.
We then use hand-crafted feature engineering. We define four categories: building density, building size, contour complexity, and directionality. These four categories are chosen according to two criteria: first, they are related to income and housing in urban theory, and second, they are visually extractable from the image. The feature for each category is extracted as a 10-dimensional vector.
#### 3.3.1 Feature Engineering
Figure 5: Network structure; credit to Huang et al. [17]

**Building density.** Here building density is defined by the coverage of footprints as \(\frac{\sum S_{f}}{S}\), where \(S\) and \(S_{f}\) denote the total area and the building area. In order to capture the local density distribution rather than that of the entire image, we apply sliding windows of different sizes, 224 × 224, 112 × 112, and 56 × 56 pixels, to each image (e.g., for a 224 × 224 image, the 112 × 112 window yields 16 sub-images and the 56 × 56 window yields 64 sub-images). The windows are slid across the image with a stride of half the window length and reflective padding of size stride/2. For each step, we compute the density of the sub-image under the window: \(\textit{density}=\frac{\textit{number of black pixels}}{\textit{number of all pixels}}\). Finally, for each bin we calculate the frequency of windows whose values fall into it. The bins are [0, 10%), [10%, 20%), [20%, 30%), [30%, 40%), [40%, 50%), [50%, 60%), [60%, 70%), [70%, 80%), [80%, 90%), [90%, 100%].
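A sketch of this multi-scale density histogram, assuming each patch is a binary array in which building (black) pixels are 1:

```python
import numpy as np

def density_feature(img: np.ndarray) -> np.ndarray:
    """img: (224, 224) binary array with building (black) pixels as 1.
    Returns a 10-bin histogram of local building densities."""
    densities = []
    for win in (224, 112, 56):
        stride, pad = win // 2, win // 4          # stride = win/2, padding = stride/2
        padded = np.pad(img, pad, mode="reflect")
        for i in range(0, padded.shape[0] - win + 1, stride):
            for j in range(0, padded.shape[1] - win + 1, stride):
                densities.append(padded[i:i + win, j:j + win].mean())
    # With these settings the 112 px window yields 16 sub-images and the
    # 56 px window yields 64, matching the counts quoted in the text.
    hist, _ = np.histogram(densities, bins=10, range=(0.0, 1.0))
    return hist / hist.sum()
```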
**Building size.** For each image, we compute the area of every single geometry and sort them into buckets. We design the bucket ranges roughly according to the distribution of areas of different housing types: [0, 50), [50, 100), [100, 150), [150, 200), [200, 250), [250, 300), [300, 350), [350, 400), [400, 1000), and [1000, \(+\infty\)) m\({}^{2}\). The average footprint areas of small single-family houses, duplexes, and mid-rise apartments in the U.S. are 400 ft\({}^{2}\), 2900 ft\({}^{2}\), and 18,200 ft\({}^{2}\), i.e., 37 m\({}^{2}\), 269 m\({}^{2}\), and 1690 m\({}^{2}\) in SI units [21]. Therefore, most single-family houses, duplexes, and mid-rise apartments should fall into the 1st, 6th, and 10th buckets, respectively. For every image, we then compute a 10 × 1 vector representing the frequency of houses falling into the 10 buckets.
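Assuming each image's footprints are available as shapely polygons in a meter-based projection, the size histogram could be sketched as:

```python
import numpy as np

SIZE_BINS = [0, 50, 100, 150, 200, 250, 300, 350, 400, 1000, np.inf]  # square meters

def size_feature(polygons) -> np.ndarray:
    """polygons: iterable of shapely Polygons in a meter-based CRS.
    Returns the frequency of footprints falling into each of the 10 buckets."""
    areas = [p.area for p in polygons]
    hist, _ = np.histogram(areas, bins=SIZE_BINS)
    return hist / max(hist.sum(), 1)
```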
**Contour complexity.** In most high-income communities, housing is designed with more complicated floor plans, for example with added porches. This is reflected in the complexity of the footprint contours. To calculate contour complexity, we use the simplified measure \(\frac{l_{c}}{\sqrt{S_{c}}}\), where \(l_{c}\) and \(S_{c}\) denote the contour length and area of a footprint. More accurate algorithms, e.g., ones taking the convex hull into account, could be tested.
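Since perimeter and \(\sqrt{S_{c}}\) both scale linearly with size, the ratio is scale-invariant and captures only shape. A one-line sketch with shapely:

```python
import numpy as np

def contour_complexity(polygon) -> float:
    """polygon: a shapely Polygon. Returns perimeter / sqrt(area)."""
    return polygon.length / np.sqrt(polygon.area)

# Per image, the complexity values of all footprints would then be binned
# into a 10-bin histogram, as with the other feature categories.
```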
Figure 6: Masks for extracting the density feature

**Directionality.** In most low-income communities, the footprint layout is dense and follows the direction of the street, exhibiting "grid-like" patterns, while in high-income communities the orientations of houses are more diverse. Terrain can be a confounding factor here, since houses often follow the direction of elevation contour lines; for simplification, we ignore this factor.

Directionality is calculated by applying the Histogram of Oriented Gradients (HOG) to the image. Using the kernels in Figure 7, the horizontal and vertical gradients \(g_{x}\) and \(g_{y}\) are calculated. Next, the magnitude and direction of the gradient are computed as \(g=\sqrt{g_{x}^{2}+g_{y}^{2}}\) and \(\theta=\arctan\frac{g_{x}}{g_{y}}\). We then build a histogram of the frequency of pixels whose gradient directions fall into each of 10 equal intervals, representing each feature by the probability that an image instance's values fall into each discrete interval as a 10-dimensional vector. The four vectors are then concatenated into a 40-dimensional vector to represent each image instance.
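A sketch of the orientation histogram and the final concatenation; the 1-D derivative kernels follow standard HOG practice, the bin edges are assumptions, and the numerically safer `arctan2` is used in place of the plain arctangent in the text:

```python
import numpy as np
from scipy.ndimage import convolve

def direction_feature(img: np.ndarray) -> np.ndarray:
    """img: (H, W) binary array. Returns a 10-bin histogram of gradient directions."""
    kx = np.array([[-1.0, 0.0, 1.0]])             # standard 1-D derivative kernels
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), kx.T)
    mag = np.hypot(gx, gy)                        # g = sqrt(gx^2 + gy^2)
    theta = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)
    hist, _ = np.histogram(theta[mag > 0], bins=10, range=(0, 180))
    return hist / max(hist.sum(), 1)

# Each image is finally represented by concatenating the four 10-dim vectors:
# feature = np.concatenate([density_feature(img), size_feature(polys),
#                           complexity_histogram, direction_feature(img)])  # 40-dim
```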
#### 3.3.2 Random Forest Model
Random forest is an ensemble learning method [19] for classification and regression tasks. It generates a series of decision trees and takes the majority of their votes to make a final decision. Random forest uses a modified tree-learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. Random forests can also be used to rank the importance of variables [20] in a classification problem. During fitting, the out-of-bag error for each data point is recorded and averaged over the forest. To measure the importance of the \(j\)-th feature after training, the values of the \(j\)-th feature are permuted among the training data and the out-of-bag error is again computed on this perturbed dataset.
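With scikit-learn (an assumed implementation choice), fitting the forest and reading off permutation importances takes a few lines; `X_train`, `y_train`, `X_test`, `y_test`, and `feature_names` are assumed to be prepared from the 40-dimensional vectors.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# X_*: (n_images, 40) feature matrices; y_*: income categories 0-7 (assumed prepared).
forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X_train, y_train)
print("OOB accuracy:", forest.oob_score_)

# Permutation importance: shuffle one feature at a time, measure the accuracy drop.
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name}: {imp:.4f}")
```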
## 4 Results
Figure 8: An example of the representation of an image by feature engineering: the original image (top) and its 40-dimensional embedded feature (bottom).
### Convolutional Neural Network
We implement DenseNet-121 in the PyTorch framework and train the model using stochastic gradient descent with a batch size of 64 for 100 epochs. The training images are resized to 80 × 80 pixels to fit into GPU memory. The learning rate is initialized at 0.1 and reduced to 1/10 of its value every 30 epochs. The model achieves 87% accuracy overall on the test set.
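The reported configuration corresponds roughly to the following PyTorch setup. The data loader is assumed to exist, and details the paper does not state (e.g., momentum) are guesses left near common defaults.

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.densenet121(num_classes=8).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # momentum assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):                 # 100 epochs; train_loader yields 64-image batches
    model.train()
    for images, labels in train_loader:  # images resized to 80 x 80, as described above
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```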
### Random Forest
We use the manually engineered 40-dimensional features of each image to train a random forest classifier to predict the income level of each image. The model achieves 56% accuracy on average, and the overall average weight of each feature is calculated (Figure 9).
Among the four categories (building density, building size, contour complexity, and directionality), directionality is more determinant than the other features. Counter to intuition, although density is commonly considered a significant feature for distinguishing social classes, it is less important than the other geometric properties in our calculation. The appearance and number of buildings with small footprint areas (encoded as "area0" and "area1") are informative in inferring the dominant social class.
| Density | Building size | Contour complexity | Directionality |
| --- | --- | --- | --- |
| 0.1369 | 0.1632 | 0.2581 | 0.4416 |

Table 4: Importance of the four feature categories
Figure 9: Importance of different features
## 6 Discussion
In this paper, we introduced a methodology to uncover social class by training a DCNN model on figure-ground footprint maps and a random forest model on manually engineered features from the footprint maps. Our preliminary results provide an alternative approach, in addition to the conventional analytic framework, for predicting social class: rather than traditional data sources such as demography or housing properties, we utilize simple but informative figure-ground maps.
The major contribution of the work is twofold. First, we have validated the connection between urban forms and social classes through DCNN models. Our model achieved 87% accuracy on the test set, indicating that the characteristics of figure-ground maps can capture and predict the dominant social classes. Second, we have proposed a methodology empowered by computer vision to analyze and interpret figure-ground maps, which had previously been investigated only by professionals in the realm of urban design and planning. The method is scalable and can be generalized to broader applications.
Although the proposed method provides clear value and novel perspectives to the existing research, it also has several limitations, and many potential possibilities can be developed on top of this paper. First, it is worth trying other representations, e.g., figure-ground maps with a DEM (a grayscale image containing height information), instead of binary (black-and-white) figure-ground maps. Moreover, in terms of feature engineering, spectra generated by the Fourier transform of such binary images may also be informative. Besides, our sample size is limited; in future studies, we will scale the methodology to more states and validate its performance. Lastly, the scope of our study is confined to interpreting social class from footprints without analyzing the mechanism of its occurrence. It is necessary to combine local insights to explain the prediction results.

Figure 11: Typical footprint typology identified by the model
## References
* [1] Batty, Michael, and Paul Longley. Fractal Cities: a Geometry of Form and Function. Academic Press, 1994.
* [2] Camagni, Roberto, Maria Cristina Gibelli, and Paolo Rigamonti. \"Urban mobility and urban form: the social and environmental costs of different patterns of urban expansion.\" Ecological economics 40.2 (2002): 199-216.
* [3] Jacobs, Jane. _The Death and Life of Great American Cities_., 1961. Print.
* [4] Newman, Oscar. Creating Defensible Space. Washington, D.C: U.S. Dept. of Housing and Urban Development, Office of Policy Development and Research, 1996. Print.
* [5] Owens, Ann. \"Inequality in children's contexts: Income segregation of households with and without children.\" American Sociological Review 81.3 (2016): 549-574.
* [6] Hillier, Bill, et al. \"Space syntax.\" Environment and Planning B: Planning and design 3.2 (1976): 147-185.
* [7] Ratti, Carlo. \"Space syntax: some inconsistencies.\" Environment and Planning B: Planning and Design 31.4 (2004): 487-499.
* [8] Rowe, Colin, and Fred Koetter. Collage city. MIT press, 1983
* [9] Richens, Paul. \"Image processing for urban scale environmental modelling.\" Proceedings of the Int. Conf. on Building Simulation' 97. 1997.
* [10] Ratti, Carlo, and Nick Baker. \"Urban infoscapes: new tools to inform city design and planning.\" arq: Architectural Research Quarterly 7.1 (2003): 63-74.
* [11] Pendall, Rolf, and John I. Carruthers. \"Does density exacerbate income segregation? Evidence from US metropolitan areas, 1980 to 2000.\" Housing Policy Debate 14.4 (2003): 541-589.
* [12] Miranda, Arianna Salazar. 2016. \"The Shape of Segregation : The Role of Urban Form in Immigrant Assimilation by Arianna Salazar Miranda,\" no. 2010.
* [13] Alkay, Elif, and Hasan Serdar Kaya. \"Socio-spatial distribution of urban residents in a small-sized city.\" Journal of European Real Estate Research 11.3 (2018): 399-426.
* [14] Massachusetts Document Repository, MassGIS Data: Land Use (2005), June 2009, [https://docs.digital.mass.gov/dataset/massgis-data-land-use-2005](https://docs.digital.mass.gov/dataset/massgis-data-land-use-2005)
* [15] Microsoft, US Building Footprints, June 2018, [https://github.com/Microsoft/USBuildingFootprints](https://github.com/Microsoft/USBuildingFootprints)
* [16] Income By Zip Code, US Income Statistics, October, 2018 [https://www.incomebyzipcode.com/massachusetts](https://www.incomebyzipcode.com/massachusetts)
* [17] Huang, Gao, et al. \"Densely Connected Convolutional Networks.\" 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, doi:10.1109/cvpr.2017.243.
* [18] Zhou, Bolei, et al. \"Learning Deep Features for Discriminative Localization.\" 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, doi:10.1109/cvpr.2016.319.
* [19] Ho, Tin Kam. \"Random Decision Forests.\" Proceedings of 3rd International Conference on Document Analysis and Recognition, doi:10.1109/icdar.1995.598994.
* [20] Breiman, Leo. \"Machine Learning.\" Machine Learning, vol. 45, no. 3, 2001, pp. 261-277., doi:10.1023/a:1017934522171.
* [21] Thurston Regional Planning Council. Housing Type and Characteristics, www.trpc.org/DocumentCenter/.
Q. Liang
77 Massachusetts Avenue, Cambridge, Massachusetts
Email: [email protected]
Z. Wang (Corresponding author)
48 Quincy Street, Cambridge, Massachusetts
Email: [email protected] | Summarize the following text. | 181 |
# Socioeconomic Reinvention and Expanding Engagement with Climate Change Policy in American Rust Belt Cities
Scott E. Kalafatis
Falk School of Sustainability and Environment, Chatham University, Pittsburgh, PA 15232, USA; [email protected]
Received: 16 October 2020; Accepted: 3 December 2020; Published: 7 December 2020
## 1 Introduction
Cities are currently critical niches for the development of climate change policies [1; 2]. However, the continued development of social scientific theories of local climate change politics and governance requires cultivating a deeper understanding of the local political and governing processes that drive the emergence of these efforts [3]. Many studies have identified factors associated with local governments pursuing climate change mitigation and adaptation [4; 5; 6]. This work shed some light on what conditions generally make the pursuit of these efforts more likely to emerge in the first place, expand, and be sustained. But the level of insight it has been able to provide related to the specificities of how climate change governance plays out in individual contexts and the extent to which necessary deeper social changes are actually occurring in these contexts has been limited [7]. More attention is needed regarding how climate change policy efforts relate to local governments' mundane, everyday activities to gain a better perspective on how climate change policy relates to cultural and socioeconomic change [7].
Analysis of the ideas and ways of thinking, i.e., the governmental rationalities or governmentalities, that underlie the processes of climate change governance can contribute to the development of a more robust understanding [8; 9]. When successfully coupled with explanatory theories of behavior, analysis of governmentalities can illustrate why patterns emerge in climate change policy. Understanding how these logics provide traction for climate change policy in communities that are not immediately recognizable as global leaders is particularly critical for understanding how these efforts can be made relevant and how they will play out in the vast majority of cities in the world that are not viewed as global exemplars [10; 11; 12; 13]. Such cities' considerations are currently under-researched, limiting understanding of how climate change policies will ultimately scale up through diffusion beyond better-known global frontrunners [10; 14].
This paper explores how the logic of political economic rationalism shapes the pursuit of climate change policy. Specifically, it examines how financial pressures, economic development considerations, and the influence of other cities affect whether or not cities embrace climate change as an issue that impacts their decision making. Attention was focused on cities in the eight U.S. states surrounding the Great Lakes, i.e., Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin. This area is associated with the U.S. \"rust belt\" whose formal manufacturing base has shrunk dramatically since the mid-20th century, resulting in cultural disruptions and economic redevelopment pressure [15; 16]. Climate change is expected to make the region warmer, shift rain and snow patterns, and to increase exposure to more frequent and severe extreme events [17; 18]. The region will likely experience threats related to water quality, tourism revenue losses (especially winter activities), uncertainty about future Great Lake water levels, and advancing invasive species destroying fragile native ecosystems [19]. Nevertheless, the region as whole lacks a clear, distinctive, and overwhelming climate change-based threat such as sea-level rise or glacial retreat, meaning that responses to climate change in the Great Lakes region might be relatively generally applicable to other areas. At the same time, this region's experience of a slowly developing, widespread economic crisis driven by global factors makes it a particularly fertile area for exploring cities' strategic responses to global change. The focus was on smaller cities in this region, those with a population between 5000 and 100,000 people, to bring attention to the common, everyday experience of pursuing climate change policy in this region. Much of the attention surrounding urban climate change policy in this region has focused on the region's largest cities such as Chicago and Minneapolis, but 98% of the cities across these eight states have populations of less than 100,000.
The next section provides background on the perceived political economic rationality of cities that is the basis for the assumptions used in the analysis about the impact of financial concerns, economic development considerations, and the influence of other cities on engagement with climate change policy. In Section 3, further light is shed on this topic by describing the methods and results related to 32 interviews conducted across 15 cities that were used to compare the political economic rationalities of cities that were highly engaged with climate change policy and those that were not pursuing climate change policy at all. In Section 4, building off of these observations and survey responses from over 200 cities, logistic regression analyses are performed to assess the association of factors related to city finances, economic development, and the influence of other cities with whether a city is highly engaged with climate change or not engaged at all. These results serve to examine how broadly applicable the findings from the interviews are in the region and how predictive certain factors are of high engagement with climate change policy. A discussion of the implications of these results follows in Section 5 before the conclusions in Section 6.
## 2 The Political Economy of City Policy Decisions
Understanding governance as an activity that emerges from historical, technological, and institutional conditions, i.e., as a product of a particular governmentality, provides a foundation for identifying the underlying principles that give rise to observable decision making and how these principles vary [8; 20]. However, governmentality's flexibility, as a diagnostic tool, undermines its ability to act as a stand-alone approach for predicting and explaining policy behavior, as it offers a strategy for analysis but not its own set of testable causal theories about social change [20] (p. 2). Therefore, identifying a particular governmentality at play is required for a fuller account of urban climate change politics that offers explanatory frameworks and testable theories about the forces underlying action.
For decades, a rationalist approach has been commonly used for understanding city policy decisions. Cities are assumed to use the provision of public services such as roads, water, sewers, and parks as a currency for competing with other cities for desired residents and investment [21; 22]. Providing these services requires raising the necessary funds through taxes and service fees which might make living or investing in the community less attractive. Cities act strategically to meet their residents' demands for public services as efficiently as possible, thereby supplying desirable services and amenities at the lowest possible cost [23]. This \"fiscal imperative\" [24] leads cities to prioritize economic development over other concerns because it is a relatively uncontroversial way to ensure quality of life improvements [25]. Economic development generates additional revenue and investment for a city, which allows the city to invest in public services that attract and retain wealthy residents who might have relatively low service demands and be able to provide further contributions to city revenue [25].
These predictions about the influence of financial and economic development considerations on city policy decisions have been tested and debated extensively, especially in the USA. In the USA, many residents have attempted to live in places that reflect their service demands [26; 27; 28], a trend that has been influential enough to have factored into major public policy challenges such as spatial inequality, white flight, and de facto segregation [29]. Cities risk losing residents and investment if they are relatively inefficient and unresponsive [30; 31]. This is especially the case if they fail to meet the needs of the particularly well-informed subset of wealthy individuals who may ultimately shape cities' considerations about providing public services [32]. Cities also competitively respond to the policy actions of other surrounding cities and experience greater competition when they share the same metropolitan region with many other cities [23; 33].
While there is evidence that the political economy of city policy decisions is influenced by considerations about fiscal pragmatism and economic development, recently, there has been more attention given to cities' efforts to attract desired residents and investment through tailoring their reputations [34; 35]. In a globalized economy featuring advanced telecommunications, urban development is no longer centered on developing and exchanging goods, but on developing and exchanging knowledge and innovation as well [36; 37]. Many cities have responded by focusing on marketing themselves; oftentimes, as places with cultural, technological, and knowledge resources that might attract the increasingly mobile circulation of global capital and residents [38; 39]. City brand development is a dialogue that continuously redefines a place's collective identity [35]. To be successful, this identity dialogue must balance internal and external perceptions with residents' perceptions of their culture, leave an impression on outsiders, and feature self-reflection about how outsiders' perceptions can inform a brand that effectively mirrors expectations [35]. Therefore, maintaining a city's reputation is a dynamic process that constantly evolves based on how a city engages with the influence of other places [35].
Researchers have identified connections between the emergence and development of climate change policy and cities' financial concerns, economic development, and efforts to establish and maintain a reputation relative to other cities. The ability to reframe climate change policy in a manner that strategically bundles it with other prevailing municipal concerns has been a significant factor behind the success of these policies [40; 41]. Cities that have successfully pursued climate change policies have often done so as a means of reducing spending or meeting economic development goals. Synergies between climate change policies and these other interests have led them to view climate change actions as a "co-benefit" [42] or a "triple-win" that simultaneously allows the city to address mitigation, adaptation, and development [43; 44]. In particular, there is evidence that cities overlap economic development considerations with their mitigation and adaptation efforts [45] and that economic development and financial considerations factor into their pursuit of climate change policy [46]. Tying together climate goals with financial or economic development goals or bringing economic development interests into the pursuit of climate change has also been used as a strategy to expand coalitions supporting climate change policies and to sustain interest over time [47; 48; 49; 50]. Previous studies have also found evidence that changes in a city's efforts related to economic development can provide opportunities for the pursuit of climate change policy interventions [46; 48; 49; 50].
There is also evidence that some cities' climate change efforts are being influenced by what is going on in other cities. On the one hand, the emergence of policy entrepreneurs associated with climate change may spread amongst cities clustered together in the same metropolitan areas [51]. On the other hand, competition between cities clustered within larger metropolitan areas might present greater collective action challenges, as smaller cities can "free ride" off of the actions of the larger cities present [52]. This might particularly affect the interest that small cities in metropolitan areas have in pursuing climate change mitigation [46; 53]. Studies have also found that practitioners have been framing climate change policies in ways that align with the reputation of being a progressive, competitive city. Climate change has offered an opportunity for at least some cities to strategically differentiate themselves as leaders on the world stage [54; 55; 56; 57].
## 3 Interviews
### Survey Determining City Engagement
The present study attempts to explore, in more depth, the relationships among financial concerns, economic development considerations, and the influence of other cities. However, first, an understanding of the influence of climate change on city policy making was needed. In the spring/summer of 2015, a survey was sent to staff members in 803 cities across the eight Great Lakes states. This survey population represented every city in these states with a population between 5000 and 500,000 with which the researcher did not have a pre-existing relationship and for which functioning email addresses were available. The survey included sixteen different policy actions covering seven areas of policy making (e.g., land use and transportation) that the city could be undertaking. The list of these actions is included in Appendix A (also see [45] for more information). For each action, the respondent first answered whether or not the city had "taken or been involved in" that action in the last five years and then whether or not climate change mitigation or climate change adaptation had influenced it. There were 281 completed responses collected (a response rate of 35%).
### City Selection for Interviews
Following the completion of the survey, cities were selected for more in-depth interviews about the connections between climate change policy and their considerations about economic development, financial concerns, and the influence of other cities. To help highlight distinctions among cities most and least engaged with climate change, both cities that were highly engaged with climate change and those that had reported that climate change did not influence their policy actions at all were chosen. Cities that had undertaken six or more policies influenced by climate change mitigation or adaptation were considered to be "highly engaged" cities because this put them in the 75th percentile of those cities surveyed. Table 1 provides a summary of the cities interviewed, the state they are in, the number of policy actions that they had reported were influenced by climate change in the survey (CC), and the number of interviews (# Inter.) that were completed for each. To make sure that insight was being gained into experiences across a wide range of political economic contexts, cities were also selected that varied along several other factors. The first was the unemployment rate, classified as high (the 75th percentile or above among the cities analyzed in the study population) or low (the 25th percentile or below). Studies of policy innovation and adoption in local governments have used unemployment as an indicator of local socioeconomic deprivation [58; 59; 60], which can shape how authorities search for strategies to match the complexity of their environment and more effectively address the needs of their citizens [58]. The second was median household income, with cities similarly selected across a range of values. The third was whether or not a city was in one of the large metropolitan areas in the region, which other studies have found might affect cities' considerations about undertaking climate change policy actions and how other cities influence them [46; 51; 53]. The fourth was the political partisanship of the voting population, based on the share of the vote Barack Obama (D) received relative to Mitt Romney (R) in city precincts in the 2012 Presidential Election, because climate change is still a highly politicized issue in the USA [61]. In Table 1 below, partisanship differences amongst constituents are represented by subtracting the share of the vote candidate Romney received from the share of the vote Obama received (Partisan.); therefore, the higher the number, the more Democratic-leaning the city's voting population. Finally, to the extent possible, cities were selected from across the eight states. This selection approach resulted in seven cities that were highly engaged with climate change, whose interview responses could be compared to those of eight cities that were taking no actions influenced by climate change.
### Interviews and Coding
Attempts were made to interview at least two officials in each city, at least one elected and one non-elected staff person, to get a more balanced perspective on the city's experience. Thirty-two interviews were conducted across the fifteen cities, including four mayors, twelve council members, eight city managers or city administrators, six economic development or community development directors, and two city planners. These semi-structured interviews were intended to last for 30 min, but extended longer as necessary. Topics that were covered included the following: their city's financial and economic development concerns, what influenced the development of environmental and climate change-related efforts in the city, how other cities influenced their city, and efforts to learn about what other cities were doing and apply what they learned in their city. A more detailed list of example questions asked is included in Appendix B.
Following the interviews, the responses were coded based on issues relevant to financial challenges, economic development, and the influence that other cities have on them that emerged as consistent themes. This coding protocol included the following categories:
Financial and economic development considerations:
**Concern** Based on the interviews, is the city's primary financial concern related to expenditures or revenue?
**ED Change** Did interviewees from the city describe a perceived need for innovative economic development strategies? (X denotes yes in Table 2 below.)
**Built Out** Did interviewees describe that their city had developed all available land? (X denotes yes in Table 2 below.)
\\begin{table}
\\begin{tabular}{c c c c c c c c c} \\hline \\hline
**City Name** & **State** & **CC** & **\\# Inter.** & **Pop.** & **Income** & **Unemp.** & **Metro** & **Partisan.** \\\\ \\hline \\multicolumn{9}{c}{Cities Highly Engaged with Climate Change} \\\\ \\hline Crystal & MN & 9 & 2 & 22,151 & 60,234 & Low & X & 23.00 \\\\ Edina & MN & 14 & 3 & 47,491 & 84,349 & Low & X & 6.72 \\\\ Harper Woods & MI & 6 & 3 & 14,236 & 44,778 & High & X & 56.49 \\\\ Ithaca & NY & 10 & 3 & 30,014 & 28,760 & Low & & 72.54 \\\\ Ludington & MI & 8 & 2 & 8,076 & 32,010 & High & & \\(-\\)2.84 \\\\ McHenry & IL & 7 & 2 & 26,992 & 66,297 & High & X & \\(-\\)6.34 \\\\ Monmouth & IL & 7 & 2 & 9,444 & 33,842 & High & & 5.80 \\\\ \\hline \\multicolumn{9}{c}{Cities Not Engaging with Climate Change at All} \\\\ \\hline Bryan & OH & 0 & 1 & 8,545 & 37,171 & High & & \\(-\\)3.65 \\\\ Lake Geneva & WI & 0 & 2 & 7,651 & 43,205 & High & & 1.46 \\\\ Plymouth & MN & 0 & 2 & 70,576 & 84,392 & Low & X & 1.94 \\\\ Pontiac & MI & 0 & 1 & 59,515 & 27,528 & High & X & 77.80 \\\\ Saline & MI & 0 & 2 & 8,810 & 63,958 & Low & & 5.86 \\\\ Southfield & MI & 0 & 2 & 71,739 & 49,841 & High & X & 79.17 \\\\ Springboro & OH & 0 & 2 & 17,409 & 96,094 & Low & X & \\(-\\)40.79 \\\\ Whitewater & WI & 0 & 3 & 14,390 & 29,784 & Low & & 23.57 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Summary of cities interviewed.
Influence from other cities:
**Research** The amount of research that interviewees described their city doing about other cities' policy efforts (coded as relatively low, medium, or high in Table 2 below).
**Scope** The geographic scale at which the other locations that these cities researched were situated (coded as local (same metropolitan or micropolitan area), state, Midwest (region), national, or international in Table 2 below).
**Apply** Did the interviewees describe clear examples of successful application of policies that their city had learned about based on their research of other cities? (X denotes yes in Table 2 below.)
### Interview Results
Differences between highly engaged cities and those whose policy actions were not influenced by climate change considerations at all that arose in the discussions with these city officials are summarized in Table 2. When asked about financial challenges facing the city, highly engaged cities were more likely to emphasize controlling expenditures than acquiring (or losing) revenue (six of seven versus one of eight). Highly engaged cities were also more likely than cities with no actions influenced by considerations about climate change to describe their community as entirely built out (five out of seven versus one out of eight). One interviewee from a highly engaged city described how being built out shaped their mindset in the following way: "being completely built-out and boxed-in, the focus now is on redevelopment, making the most out of what we already have."
Regarding economic development, highly engaged cities were more likely to describe that they had shifted their approach to economic development based on changing conditions (six out of seven versus zero out of eight). These shifts were associated with a perception that they needed to find innovative economic development strategies in response. As an illustration, the following responses came from interviews in two rural cities, each with a higher-education institution; on the one hand, this first city was not associating policy efforts with climate change:
Respondent 1, \"We have diversified (our economic development efforts) somewhat but we're not trying to move from column A to column B or anything like that per se--we're not walking away from our manufacturing base.\"
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline \\multirow{2}{*}{**City Name**} & \\multicolumn{3}{c}{**Finances/Development**} & \\multicolumn{3}{c}{**Influence from Other Cities**} \\\\ \\cline{2-7} & **Concern** & **Built Out** & **ED Change** & **Research** & **Scope** & **Apply** \\\\ \\hline \\multicolumn{7}{c}{Cities Highly Engaged with Climate Change} \\\\ \\hline Crystal & Expend. & X & X & Medium & National/Local & X \\\\ Edina & Expend. & X & X & High & National & X \\\\ Harper Woods & Rev. & X & X & Medium & State/National & X \\\\ Ithaca & Expend. & & X & High & National/Inter. & X \\\\ Ludington & Expend. & X & X & Medium & State/Local & X \\\\ McHenry & Expend. & X & & Low & Local/National & \\\\ Monmouth & Expend. & & X & High & Midwest/Inter. & X \\\\ \\hline \\multicolumn{7}{c}{Cities Not Engaging with Climate Change at All} \\\\ \\hline Bryan & Rev. & & & Low & Local/State & \\\\ Lake Geneva & Expend. & & & Low & Local & \\\\ Plymouth & Rev. & & & Medium & Local & \\\\ Pontiac & Rev. & X & & Medium & Local & \\\\ Saline & Rev. & & & Medium & Local/State & X \\\\ Southfield & Rev. & & & Low & Local & \\\\ Springboro & Rev. & X & & Low & Local & \\\\ Whitewater & Rev. & & & Low & Local & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Interview coding summary: City political economy and engagement with climate change.
Respondent 2, \"The root problem is that for all the awareness, the community just doesn't grasp the importance (of altering economic development efforts). We're not very forward thinking.\"
On the other hand, the following respondent represented a second city that was highly engaged with climate change; their discussion of economic development explicitly ties together the city's past struggles dealing with global economic changes and their present motivation to reinvent themselves in order to remain competitive:
In most rural places, people look backwards; they want to return to the past, but that doesn't work anymore… In 1998, the predecessor to XXX here closed down. We really got knocked down because we weren't diversified and didn't have a strategy for something like that happening. How do we keep a major company like XXX here? How do we keep the college going and in town? Small colleges in the Midwest are really struggling and sometimes close down or relocate.
There were also distinctions related to how they engaged with the influence of other cities on their policy decisions. Highly engaged cities described more effort from those in city government regarding research on other cities, often stimulated by city council members. Out of the seven cities highly engaged with climate change, three cities described a relatively high level of research on other cities, three cities described a medium level, and one city described a low level. Among the eight cities that were not associating climate change with their policy actions, three cities described a relatively medium level of research on other cities, and five cities described a low level. Cities highly engaged with climate change were also more likely to describe examples of how the city had successfully applied what had been observed (six of seven versus one of eight). The contrast exhibited between the following two responses (both from city managers) was quite similar to those previously mentioned in this section, despite being from a different set of contrasting cities. The respondent from a city where no policy actions were influenced by climate change described a lack of interest in applying policies from other places in their government as follows: "I personally try to keep an idea about what is going on but [redacted] people love [redacted] and they won't look outside so it's really hard to get any traction on anything from outside." The respondent from a highly engaged city described a very different attitude about applying ideas from other cities: "plagiarism is a sin in academia, but it is a necessity for those working in cities. We steal each other's ideas all of the time… You always end up copying from others and adapt their strategies."
Cities highly engaged with climate change were also influenced by cities across a broader geographic range. Six of the seven highly engaged cities described the influence of other cities nationally or internationally on their policy decisions, while cities whose policy actions were not being influenced by climate change only described the influence of other cities locally or at the state level. After a follow-up question, one respondent from a city whose policies were not influenced by climate change linked this locally focused perspective to the city's lack of desire to change: "We don't look much to other states or countries. I think we're missing out. We don't look to other places as much as I think we should. Why don't we? Our council got old and stale in the past. We got too comfortable as a city and those in the government were too comfortable too." In contrast, one of the respondents from a highly engaged city made a similar connection between the scope of their search and their city's desire to try new things: "we like being progressive, so, we look at what other progressive cities all over the country and even world are doing, especially Colorado recently." Cities highly engaged with climate change also appeared to be more likely to actually adapt the strategies that they saw being used in other places into their own policies. Six of the seven cities highly engaged with climate change gave examples of their city successfully applying strategies that they had learned about other cities using, while only one of the cities where climate change was not influencing a policy action did.
## 4 Regression Analysis
### Model Development
To more broadly explore the relationship between these factors and smaller cities' engagement with climate change policy, logistic regressions were performed. To differentiate between factors associated with high engagement with climate change mitigation versus adaptation, a dependent variable of high engagement (1) versus no engagement (0) was made for each. As above, high engagement was defined as being in the 75th percentile of those surveyed in terms of the number of policy actions they pursued that were influenced by climate change mitigation (six or more in this analysis) and climate change adaptation (three or more). There were 281 cities, with a population of 100,000 or less, that responded to the aforementioned survey; 72 cities (26%) were highly engaged with climate change mitigation, 79 cities (28%) were highly engaged with climate change adaptation, and 49 cities had to be removed from the logistic regression analysis because the necessary financial data could not be found. For the final mitigation logistic regression, 62 cities were highly engaged (1) and 135 were not engaged at all (0). For the final adaptation logistic regression, 70 cities were highly engaged (1) and 138 were not engaged at all (0).
Nine independent variables were included, based on the literature and interviews. The first three variables addressed financial conditions. In the interviews, those in highly engaged cities were more focused on expenditure-based challenges; therefore, it was hypothesized that experiencing an increase in expenditures would be associated with being highly engaged with mitigation and adaptation. A binary variable describing whether or not the city had experienced an increase in its expenditures from its 2006-2007 to 2009-2010 budgets was created based on available financial audits. The second was the city's unemployment rate (2010 five-year estimates from the American Community Survey), based on the hypothesis that existing economic strain (in this case, a higher unemployment rate) might push a city to engage with these relatively novel policy issues. Median household income was also included as a measure of the financial resources available in the city; previous studies have found it to be negatively associated with engaging in climate change policy efforts [4; 5].
Next, two variables addressed the theme of changing approaches to economic development from the interviews. The first was whether or not an individual was present in the city who was advocating for the city to change its approach to economic development, i.e., an economic policy entrepreneur, based on a survey described in a related study [51]. The second was changes in reliance on the manufacturing sector, i.e., the change in the percentage of residents employed in the manufacturing sector from the 2000 to 2010 American Community Surveys. According to the interviews, it was hypothesized that this variable would be negatively associated with high engagement because continued dependence on manufacturing might reflect a lack of willingness to pursue new economic development strategies.
The interviews also highlighted ways in which attention to the efforts of other cities and adoption of their ideas might influence engagement with climate change policy. Four independent variables were used to address the potential for a city to be influenced by others. The first was a variable that the interviews implied might be a catalyst of greater levels of research being done on other cities, i.e., the number of city council members. The more city council members that represent a city, the more possibilities there are for at least one to explore efforts in other places [51]. To consider the capacity for translating these ideas into action, a binary variable describing whether or not there was a committee in the city dedicated to environmental issues or sustainability was included. These smaller cities were unlikely to employ a staff person or department dedicated to environmental issues; therefore, committees represented arenas where people could bring ideas together and find ways to apply them towards environmental concerns. A binary variable describing whether the city was located in a metropolitan area (1) or not (0) was also included, as being located in a metropolitan region might lead smaller cities to perceive that they could "free ride" on the central city's actions and be less engaged with climate change policies [53]. Finally, a single binary variable described whether (1) or not (0) the city was participating in at least one of the following multi-city environmental networks: the Great Lakes St. Lawrence Cities Initiative, ICLEI, the U.S. Conference of Mayors Climate Protection Agreement, or the UN's Compact of Mayors. Participation in any of these networks was considered to be a binary indicator of the city's engagement with the policy actions of other cities at the national and international levels.
In order to account for other possible explanations of climate change policies in cities and avoid omitted variable bias, six other potential climate change policy drivers were included. The first was the city's population, with the hypothesis that larger cities were more likely to engage with climate change policies [4]. The second was whether or not the city had a council-manager form of government, because such governments' perceived greater attention to efficiency might influence how cities would undertake climate change-related efforts [62]. A binary variable describing whether a climate change policy entrepreneur was present in the city or not was also used [51], as such advocates for climate change policies have been tied to cities' climate change actions [4; 5]. The level of educational attainment in the city has also been linked to the pursuit of climate change policies [4; 5; 63]; therefore, the city's bachelor's degree attainment rate was included. The political partisanship of the voting population, measured as described in the interview selection above, was also included given how politicized climate change remains in the USA [61]. Finally, the number of natural disasters that a city had experienced was included because of an assumed connection between natural disasters and the emergence of climate change policy action in cities [4; 64]. Assessment of the impact of states on city actions was not a focus of this study, but the state that the city was in was included to control for differences among the eight states.
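To make the model setup concrete, the following is a minimal sketch of how the dependent variable and logistic regression described in this section could be assembled. It is illustrative only: the input file `city_survey.csv` and all column names (e.g., `mitigation_actions`, `increased_expenditures`) are hypothetical placeholders, while the coding choices (high engagement at six or more mitigation-influenced actions coded 1, no engagement coded 0, cities in between excluded, Illinois as the state reference group) follow the text above.

```python
# Illustrative sketch of the mitigation logistic regression described above.
# File and column names are hypothetical; thresholds follow the text.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("city_survey.csv")  # hypothetical merged survey/financial data

# High engagement (1) vs. no engagement (0); cities in between are excluded.
df["mit_dv"] = df["mitigation_actions"].apply(
    lambda n: 1 if n >= 6 else (0 if n == 0 else None))
mit = df.dropna(subset=["mit_dv"])

predictors = [
    "increased_expenditures",    # binary: 2006-2007 to 2009-2010 budget growth
    "unemployment_rate",         # 2010 ACS five-year estimate
    "median_household_income",
    "econ_dev_entrepreneur",     # binary
    "manufacturing_change",      # change in % manufacturing employment, 2000-2010
    "council_members",
    "environmental_commission",  # binary
    "metropolitan_location",     # binary
    "network_participation",     # binary
    "population", "council_manager", "cc_entrepreneur",
    "bachelors_attainment", "partisanship", "natural_disasters",
]
# State fixed effects; drop_first leaves Illinois as the reference group.
X = sm.add_constant(pd.get_dummies(mit[predictors + ["state"]],
                                   columns=["state"], drop_first=True))
model = sm.Logit(mit["mit_dv"].astype(float), X.astype(float)).fit()
print(model.summary())
```

The adaptation model would repeat the same steps with a threshold of three or more adaptation-influenced actions.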
### Regression Model Results
Table 3 provides a summary of the results of the regression model assessing the factors underlying these smaller cities' high engagement with climate change mitigation and adaptation policy. Overall, the mitigation and adaptation models explained about 45% and 40% of the variation in the data, based on their adjusted R-squared values. The three financial conditions variables included (increased expenditures, unemployment, and median household income) were each positively associated with high engagement with mitigation (\\(p<0.05\\)), but none were associated with adaptation.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{**Variable**} & \\multicolumn{2}{c}{**Mitigation**} & \\multicolumn{2}{c}{**Adaptation**} \\\\ \\cline{2-5} & **Coeff. 1** & **SE** & **Coeff. 1** & **SE** \\\\ \\hline \\multicolumn{5}{c}{Financial Drivers} \\\\ Increased expenditures & 1.372 * & 0.62 & 0.412 & 0.49 \\\\ Unemployment rate & 0.312 * & 0.15 & 0.171 & 0.12 \\\\ Median household income & 0.546 * & 0.22 & 0.372 & 0.20 \\\\ Economic change drivers & & & & \\\\ Econ dev entrepreneur & 1.406 ** & 0.47 & 1.766 ** & 0.46 \\\\ Manufacturing change & 0.133 * & 0.07 & 0.114 * & 0.06 \\\\ Influence from other cities’ drivers & & & & \\\\ Council members & 0.225 ** & 0.07 & 0.080 & 0.07 \\\\ Environmental commission & 1.395 * & 0.58 & 1.649 ** & 0.53 \\\\ Metropolitan location & \\(-0.586\\) & 0.60 & \\(-1.157\\)* & 0.53 \\\\ Network participation & 1.056 & 0.66 & 1.018 & 0.68 \\\\ Other climate change policy drivers & & & & \\\\ Population & \\(-0.113\\) & 0.13 & \\(-0.060\\) & 0.12 \\\\ Council manager & 0.459 & 0.54 & 0.588 & 0.51 \\\\ CC entrepreneur & 1.024 & 0.56 & 0.749 & 0.53 \\\\ Bachelor’s degree attainment & 0.002 & 0.02 & \\(-0.005\\) & 0.02 \\\\ Political partisanship & 0.014 & 0.01 & \\(-0.001\\) & 0.01 \\\\ Natural disasters & 0.102 & 0.12 & 0.139 & 0.12 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Logistic regression results: City high engagement with climate change.
Regarding the economic change variables, as expected based on the interviews, the presence of an economic development policy entrepreneur was positively associated with high engagement with climate change mitigation and adaptation (\(p<0.01\)). However, contrary to expectations, the change in the percentage of those employed in the manufacturing sector was positively associated with high engagement with mitigation and adaptation (\(p<0.05\)). There was also evidence that the factors examining the potential for a city to be influenced by others were associated with being highly engaged with climate change mitigation and adaptation. The presence of more council members was positively associated with high engagement with climate change mitigation (\(p<0.01\)), but not adaptation. An environmental commission was positively associated with the city being highly engaged with climate change mitigation (\(p<0.05\)) and adaptation (\(p<0.01\)). Presence in a metropolitan area was negatively associated with high engagement with climate change adaptation (\(p<0.05\)), but not with mitigation. There was no evidence for an association between participating in a larger multi-city network and high engagement with either climate change mitigation or adaptation. There was also no evidence found for such an association between high engagement and the other climate change policy drivers included in the model.
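As an interpretive aid not spelled out in the tables (this conversion is standard for logit models and is added here for clarity), logistic regression coefficients can be translated into odds ratios by exponentiation. Taking the mitigation coefficient for increased expenditures from Table 3:

\\[ \\mathrm{OR} = e^{\\hat{\\beta}} = e^{1.372} \\approx 3.94, \\]

meaning that, holding the other variables constant, a city that experienced increased expenditures had roughly four times the odds of being highly engaged with mitigation rather than not engaged at all.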
To better understand the extent to which shifts in the variables in this analysis translate into cities being more or less likely to be highly engaged with climate change mitigation or adaptation versus not being engaged with these issues at all, Table 4 provides a summary of predicted probabilities for the financial, economic development, and influence from other cities variables. In general, the average probability that a city in the model would be highly engaged with climate change mitigation was 31%, and 34% for climate change adaptation. For each variable for both the mitigation and adaptation models, all values in the dataset were then set to a low value or a high value so that the resulting changes in the average probability that a city is highly engaged with either issue could be assessed. For the non-binary variables, the low value was the 10th percentile value of the cities in the analysis and the high value was the 90th percentile. Resulting probabilities from combinations of the financial, economic change, and influence variables were also included. For the mitigation model, changes related to altering median household income were the largest. Setting household income at $31,534 (10th percentile) and $78,000 (90th percentile) shifted the average probability that a city is highly engaged with mitigation from 0.20 to 0.53. The next largest shift occurred from changes in the number of council members, which rose from 0.21 to 0.46 based on the number being 0 (10th percentile) versus 8 (90th percentile). For the adaptation model, the presence of an environmental commission had the most pronounced impact on the average probability, shifting it from 0.28 to 0.57. The presence
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline \\multirow{2}{*}{**Variable**} & \\multicolumn{2}{c}{**Mitigation**} & \\multicolumn{2}{c}{**Adaptation**} \\\\ \\cline{2-5} & **Coeff. 1** & **SE** & **Coeff. 1** & **SE** \\\\ \\hline \\multicolumn{5}{c}{States} \\\\ Illinois (reference group) & & & & \\\\ Indiana & 3.118 ** & 1.11 & 1.301 & 0.904 \\\\ Michigan & 1.756 & 0.93 & 0.118 & 0.818 \\\\ Minnesota & 0.693 & 0.99 & \\(-\\)1.409 & 0.883 \\\\ New York & 2.143 & 1.38 & 0.599 & 1.243 \\\\ Ohio & 2.378 ** & 0.91 & \\(-\\)0.059 & 0.729 \\\\ Pennsylvania & 4.130 ** & 1.41 & 1.320 & 1.146 \\\\ Wisconsin & 1.645 & 0.97 & 0.131 & 0.743 \\\\ \\hline Log likelihood & \\(-\\)84.365 & & \\(-\\)97.738 \\\\ Prob. \\(>\\) Chi-sq. & 0.000 & & 0.000 \\\\ Adjusted R-squared & 0.453 & & 0.397 \\\\ N & 197 & & 208 \\\\ \\hline \\multicolumn{5}{c}{\\({}^{1}\\)\\({}^{\\star}\\)\\(p<0.05\\), ** \\(p<0.01\\).} \\\\ \\end{tabular}
\\end{table}
Table 3: _Cont._

of an economic development entrepreneur had the second most pronounced impact, shifting the average probability from 0.17 to 0.43. The combination of the financial drivers shifted the likelihood of high engagement with mitigation more than with adaptation (0.06 to 0.68 and 0.17 to 0.59), while the combination of economic change drivers shifted the likelihood of high engagement with adaptation slightly more than mitigation (0.12 to 0.51 and 0.13 to 0.46). The shifts related to the combined influence from other cities drivers were the same for the two models (0.14 to 0.82 and 0.18 to 0.86).
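The predicted probabilities in Table 4 can be reproduced with a standard procedure: fix one variable at a low or high value for every city, generate predictions from the fitted model, and average them. A brief sketch below continues the hypothetical `model`, `mit`, and `X` objects from the earlier example; it is not the author's actual analysis code.

```python
# Sketch of the Table 4 predicted-probability comparisons, continuing the
# hypothetical `model`, `mit`, and `X` objects from the earlier example.
def avg_prob_at(X, model, column, value):
    """Average predicted probability with `column` fixed at `value` for all cities."""
    X_fixed = X.copy()
    X_fixed[column] = value
    return model.predict(X_fixed).mean()

# Non-binary variables are contrasted at their 10th vs. 90th percentiles,
# binary variables at 0 vs. 1, per the text.
lo, hi = mit["median_household_income"].quantile([0.10, 0.90])
print("income at 10th pct:", avg_prob_at(X, model, "median_household_income", lo))
print("income at 90th pct:", avg_prob_at(X, model, "median_household_income", hi))
print("baseline average  :", model.predict(X).mean())  # ~0.31 reported for mitigation
```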
## 5 Discussion
The results of these interviews and regression models help shed some light on how the logic of political economic rationalism shapes cities' considerations about engaging with climate change policy. According to the interviews, compared with cities where climate change was having no influence on policy actions, cities that were highly engaged with climate change were places that were more likely to be focused on managing expenditures than revenue and were compelled to pursue economic development innovations that would move the city beyond the industrial past of the rust belt region. This attitude was complemented by a greater interest in researching policy actions in other cities, a broader perspective on where such policy ideas might come from, and a greater likelihood of applying inspiration found in other contexts. Taken together, these findings point to a distinctive mindset amongst the highly engaged cities where their practical considerations about economic development and their cities' relationship to the world more broadly are already oriented towards change and innovation. This national or even international perspective is all the more remarkable given that these were all smaller cities with populations under 100,000. Climate change policy was something that was attached to this mindset and was something that was associated with being the kind of forward-thinking community that they aspired to be.
The results of the regression models provided additional support for this perceived connection between high engagement with climate change policy and financial pressures, economic development considerations, and the potential influence of other cities. Evidence was found that most of the variables used to assess these factors were associated with these cities being highly engaged with mitigation and adaptation in contrast to other climate change policy drivers. However, these results also further complicated the picture of exactly what differentiates small cities that are highly engaged with mitigation and adaptation versus those who were not engaging with these issues at all. The financial conditions in cities that were more likely to be highly engaged with climate change mitigation featured rising expenditures, higher unemployment, and higher median household incomes. Financially
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{**Variables**} & \\multirow{2}{*}{**Based on**} & \\multicolumn{2}{c}{**Mitigation**} & \\multicolumn{2}{c}{**Adaptation**} \\\\ \\cline{3-6} & & **Low** & **High** & **Low** & **High** \\\\ \\hline Financial Drivers & & & & & \\\\ Increased expenditures & 0 and 1 & 0.18 & 0.35 & 0.29 & 0.35 \\\\ Unemployment rate & 10/90 Perc. & 0.23 & 0.41 & 0.28 & 0.39 \\\\ Median household income & 10/90 Perc. & 0.20 & 0.53 & 0.25 & 0.51 \\\\ Financial drivers combined & & 0.06 & 0.68 & 0.17 & 0.59 \\\\ Economic Change Drivers & & & & & \\\\ Econ dev entrepreneur & 0 or 1 & 0.19 & 0.38 & 0.17 & 0.43 \\\\ Manufacturing change & 10/90 Perc. & 0.24 & 0.39 & 0.26 & 0.41 \\\\ All economic change drivers & & 0.13 & 0.46 & 0.12 & 0.51 \\\\ Influence from Other Cities Drivers & & & & & \\\\ Council members & 10/90 Perc. & 0.21 & 0.46 & 0.29 & 0.39 \\\\ Environmental commission & 0 or 1 & 0.27 & 0.49 & 0.28 & 0.57 \\\\ Metropolitan location & 1 or 0 & 0.29 & 0.38 & 0.29 & 0.47 \\\\ Network participation & 0 or 1 & 0.29 & 0.45 & 0.32 & 0.49 \\\\ All influence drivers & & 0.14 & 0.82 & 0.18 & 0.86 \\\\ \\hline Original probability & & & 0.31 & & 0.34 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Summary of predicted probabilities for likelihood of high engagement.
speaking then, experiencing financial pressures but still possessing economic wealth that could be mobilized in support of initiatives to address these challenges is a combination that makes it more likely that a smaller city will be highly engaged with mitigation. The results of the adaptation regression model did not provide support for a similar dynamic surrounding climate change adaptation, suggesting that, at present, these cities may perceive a more direct connection between mitigation and financial gains than they do for adaptation.
For both the mitigation and adaptation models, high engagement was associated with both the presence of an economic development entrepreneur and a smaller change in manufacturing employment. The role of the presence of an economic development entrepreneur is consistent with the narrative about reinvention that emerged from the interviews, but the finding about manufacturing change provides an indication that the dynamic surrounding economic change and manufacturing employment is a complex one. In the interviews, there were examples where development demand afforded cities the ability to make developers fulfill their environmental and climate change goals. One interviewee explained, \"so many people want to develop here right now that we don't worry about incentives. For us it's the opposite, the environmental work is something that developers give us in return for allowing them to have zoning approvals and follow through with their projects.\" On the one hand, similar to the association between high engagement with mitigation and adaptation and higher median household income, cities maintaining strong economic conditions in the form of manufacturing development might be able to more easily commit resources to climate change policies. On the other hand, a city's ability to retain more of its manufacturing employment might indicate the kind of proactive and innovative thinking related to economic development that would also drive them to pursue novel policy strategies such as climate change mitigation or adaptation. Deeper qualitative studies are needed to more carefully unpack the dynamic relationships among adjustments that cities are making to both economic and climate change-driven challenges.
The regression results related to the potential to be influenced by other cities were similarly supportive of the notion that having more opportunities for those in the city to pick up and potentially apply lessons from outside was associated with high engagement, but also pointed to the influence that cities have on one another being a more complex phenomenon for future research to unpack. For example, future research should assess the extent to which variables often associated with governance, such as the number of council members or the existence of departments or commissions, are impactful because they represent conduits for accessing and applying ideas from other places. However, future research should also look further into assessing the impact of the information-seeking behavior of individuals in a city that was described by the interviews. Unfortunately, the quantitative data available was not able to capture the behavioral aspects of cities bringing in ideas and adapting new policy actions based on the influence of others. Still, the observation that membership in the larger networks included in the regression was not associated with being highly engaged with climate change mitigation or adaptation underscores that these types of networks were not what the cities interviewed in this study meant when they described being influenced by other cities. (At the time of the interviews, two cities included in the analysis were members of transnational climate networks; Ithaca, New York was a member of ICLEI and the Compact of Mayors while Edina, MN was a member of ICLEI.) While there has been considerable attention given in the literature to how relationships between cities in international climate change policy networks influence climate change governance [65; 66; 67], the interviewees were describing the more typical, everyday ways in which cities constantly scope out what other comparable cities are doing to get a better sense of opportunities across a range of issues to better serve their residents. Attention to cities' more practical perceptions about peer cities that they already actively compare themselves to and learn from (rather than on global policy networks specialized for climate change policy) might be another opportunity to explore and better understand the everyday experience of pursuing climate change policy [7].
The ways in which practitioners' relatively informal peer networks, such as these, mix together guidance about pursuing novel policy initiatives like climate change and more mundane shared concerns about resources and strategies can make these networks a particularly rich source of guidance on how climate change relates to their work [68]. The adoption of new climate change interventions amongst the more innovative members of cities' local peer networks might help to demonstrate the viability and utility of these policies, encouraging other cities in these local peer networks that are more likely to act as followers or laggards to ultimately adopt these innovations themselves [69]. The specific character of individual cities' peer networks might also shape how cities determine strategies for tailoring specific policy actions to suit their own particular needs [68], providing insight into the learning and adaptive aspects of the diffusion process associated with climate change policy [14]. Future research should attend more to these kinds of everyday connections, particularly among smaller cities like these, as potential key bridges between the most exceptional, innovative global frontrunners and the vast majority of cities. While the cities whose policy actions did not include considerations about climate change reported a relatively lower interest in policy efforts in other cities, they did still pay attention to what other cities in their immediate area were doing and could be influenced by their actions. In this analysis, the highly engaged cities represented 25% or more of the cities studied. Following Rogers' [70] (p. 281) categorization of the process of innovation adoption, the highly engaged cities in this study would begin representing an emerging "early majority" of cities adopting climate change into their policy considerations. Their experiences are the ones that will create the momentum that will drive even less-likely cities to adopt these initiatives [70] (p. 284), especially if their experiences demonstrate that climate change policy can, in fact, help them address the kinds of financial or economic development concerns that they share with other smaller cities in their area that pay attention to them. However, differences between the highly engaged cities and those not engaged at all from the interviews and regression analyses provide some indications that the financial and economic contexts might be different between this early majority and the remaining potential late majority and laggard cities [70] (p. 284), representing a potential barrier to such a transition occurring. More attention to this early majority and how it grapples with climate change alongside its other everyday concerns is therefore needed to help those interested in the diffusion of climate change policy better understand how these policies are (and are not) adapted and evolve as they transition from innovative experiments into the everyday actions of an increasing majority of cities.
## 6 Conclusions
This study used 32 interviews and over 200 survey responses to explore ways in which the logic of political economic rationalism shapes whether or not smaller cities are highly engaged with climate change policy. The interviews compared responses about financial concerns, economic development considerations, and the influence of other cities, from officials in cities highly engaged with climate change policy and those where climate change did not influence their policy actions at all. A distinctive mindset amongst the highly engaged cities emerged from the interviews, defined by a desire to move beyond the industrial past of the region through economic development innovation informed by a greater interest in researching policy actions undertaken in other cities, a wide geographic perspective from which they could draw these new ideas, and a high likelihood of actually adopting policies seen elsewhere in their own city. Then, two logistic regression analyses were conducted to gain a broader perspective of the extent to which factors associated with financial concerns, economic development considerations, and the potential influence of other cities were associated with cities either being highly engaged with climate change mitigation or adaptation versus not engaging at all. These results provided some support for the notion that these factors can help to explain the pursuit of climate change policy in cities and some insight into how such considerations are shaping decisions. However, they also emphasized that the dynamics between climate change policy and these factors were complex and that more qualitative research will need to be done to more effectively unpack the relationships involved [3]. Furthermore, this study's attention to smaller cities, whose engagement with climate change policy is currently under-researched, highlights that an early majority of cities may be playing a key role in shaping how climate change policy is being adapted to meet the more everyday financial and economic concerns of cities [7] as it diffuses into the majority of them. However, the differences that this study identified between these highly engaged cities and those not engaging with climate change policy at all also emphasized that the politics surrounding adoption of these efforts would likely continue to shift in the move from this emerging early majority towards a growing majority. The logics surrounding the financial concerns, economic development considerations, and influence from other cities explored in this paper are an essential but complex aspect of the pursuit of climate change policy in cities. Future research should continue to focus on these issues as cities all over the world continue to adapt and respond to a changing climate, a changing economy, and each other.
**Funding:** This research received no external funding.
**Acknowledgments:** The author would like to thank the editors and staff of the journal _Atmosphere_, as well as the anonymous reviewers for this article who offered a great deal of time, expertise, and insight that greatly enhanced the content presented here.
**Conflicts of Interest:** The author declares no conflict of interest.
## Appendix A Sixteen Policy Actions
* Measures to increase pedestrian transportation
* Enhanced parks
* Reduced energy use
* Increased building efficiency
* Altered stormwater management
* Promoted reuse of brownfields
* Increased tree canopy
* Altered wastewater management
* Promoted greater development density
* Made changes to fleet vehicles
* Enhanced public transportation options
* Altered building codes
* Altered emergency management strategy
* Developed alternative energy on buildings
* Developed water recycling or reuse
* Developed alternative energy options
## Appendix B Example Interview Questions
* What are the most important challenges currently facing your city concerning the city's budget and finances?
* What are the most important challenges currently facing your city concerning growth and economic development?
* What (or who) has shaped the development of \"environmental\" work in your city over time? Follow up questions about climate change specifically were added as necessary.
* How do other cities influence policies undertaken in your own city? Can you give me any specific examples?
* Why is it important for you to understand the work that is going on in other cities?
* What other cities do you think about when making decisions about your own work? Where are they located?
* How do you learn about what these other cities are doing?
## References
* (1) Bulkeley, H. Cities and the Governing of Climate Change. _Annu. Rev. Environ. Resour._**2010**, _35_, 229-253. [CrossRef]
* (2) Broto, V.C.; Bulkeley, H. A survey of urban climate change experiments in 100 cities. _Glob. Environ. Chang._**2013**, _23_, 92-102. [CrossRef]
* (3) van der Heijden, J. Studying urban climate governance: Where to begin, what to look for, and how to make a meaningful contribution to scholarship and practice. _Earth Syst. Gov._**2019**, \\(1\\), 100005. [CrossRef]
* (4) Krause, R.M. Policy Innovation, Intergovernmental Relations, and the Adoption of Climate Protection Initiatives by U.S. Cities. _J. Urban Aff._**2011**, _33_, 45-60. [CrossRef]
* (5) Krause, R.M. Political Decision-Making and the Local Provision of Public Goods: The Case of Municipal Climate Protection in the US. _Urban Stud._**2012**, _49_, 2399-2417. [CrossRef]
* (6) Yeganeh, A.J.; McCoy, A.P.; Schenk, T. Determinants of climate change policy adoption: A meta-analysis. _Urban Clim._**2020**, _31_, 100547. [CrossRef]
* (7) Broto, V.C.; Westman, L.K. Ten years after Copenhagen: Reimagining climate change governance in urban areas. _WIREs Clim. Chang._**2020**, _11_, e643.
* (8) Bulkeley, H.; Stripple, J. Conclusion: Towards a Critical Science of Climate Change? In _Governing the Climate: New Approaches to Rationality, Power and Politics_; Stripple, J., Bulkeley, H., Eds.; Cambridge University Press: Cambridge, UK, 2013; pp. 243-260.
* (9) Okereke, C.; Bulkeley, H.; Schroeder, H. Conceptualizing Climate Governance beyond the International Regime. _Glob. Environ. Politics_**2009**, \(9\), 58-78. [CrossRef]
* (10) van der Heijden, J. From leaders to majority: A frontrunner paradox in built-environment climate governance experimentation? _J. Environ. Plan. Manag._**2018**, _61_, 1383-1401. [CrossRef]
* (11) van der Heijden, J. _Innovations in Urban Climate Governance: Voluntary Programs for Low Carbon Buildings and Cities_; Cambridge University Press: Cambridge, UK, 2017.
* (12) Kalafatis, S.E. Colleagues, Competitors, Creators: City Governance among Peers and Its Implications for Addressing Climate Change. Ph.D. Dissertation, University of Michigan, Ann Arbor, MI, USA, 2016.
* (13) Hodson, M.; Marvin, S. 'Urban Ecological Security': A New Urban Paradigm? _Int. J. Urban Reg. Res._**2009**, _33_, 193-215. [CrossRef]
* (14) van der Heijden, J. Understanding voluntary program performance: Introducing the diffusion network perspective. _Regul. Gov._**2020**, _14_, 44-62. [CrossRef]
* (15) High, S. _Industrial Sunset: The Making of North America's Rust Belt, 1969-1984_; University of Toronto Press: Toronto, ON, Canada, 2003; ISBN 978-0802085283.
* (16) Longworth, R.C. _Caught in the Middle: America's Heartland in the Age of Globalism_; Bloomsbury: New York, NY, USA, 2009.
* (17) Pryor, S.C.; Scavia, D.; Downer, C.; Gaden, M.; Iverson, L.; Nordstrom, R.; Patz, J.; Robertson, G.P. Ch. 18: Midwest. In _Climate Change Impacts in the United States: The Third National Climate Assessment_; Melillo, J.M., Richmond, T.C., Yohe, G.W., Eds.; U.S. Global Change Research Program: Washington, DC, USA, 2014; pp. 418-440.
* (18) Baule, W.; Gibbons, E.; Briley, L.; Brown, D. _Synthesis of the Third National Climate Assessment for the Great Lakes Region_; Integrated Sciences + Assessments: Great Lakes, MI, USA, 2014.
* (19) Kalafatis, S.E.; Campbell, M.; Fathers, F.; Laurent, K.L.; Friedman, K.B.; Krantzberg, G.; Scavia, D.; Creed, I.F. Out of control: How we failed to adapt and suffered the consequences. _J. Great Lakes Res._**2015**, _41_, 20-29. [CrossRef]
* (20) Walters, W. _Governmentality: Critical Encounters_; Routledge: New York, NY, USA, 2012.
* (21) Tiebout, C.M. A Pure Theory of Local Expenditures. _J. Political Econ._**1956**, _64_, 416-424. [CrossRef]
* (22) Ostrom, V.; Tiebout, C.M.; Warren, R. The Organization of Government in Metropolitan Areas: A Theoretical Inquiry. _Am. Political Sci. Rev._**1961**, _55_, 831-842. [CrossRef]
* (23) Schneider, M. _The Competitive City: The Political Economy of Suburbia_; University of Pittsburgh Press: Pittsburgh, PA, USA, 1989.
* (24) Wolman, H.; Spitzley, D. The Politics of Local Economic Development. _Econ. Dev. Q._**1996**, _10_, 115-150. [CrossRef]
* (25) Peterson, P.E. _City Limits_; University of Chicago Press: Chicago, IL, USA, 1981.
* (26) Eberts, R.W.; Gronberg, T.J. Jurisdictional Homogeneity and the Tiebout Hypothesis. _J. Urban Econ._**1981**, _10_, 227-239. [CrossRef]
* (27) Hamilton, B.W. Zoning and property taxation in a system of local governments. _Urban Stud._**1975**, _12_, 205-211. [CrossRef]
* (28) Pack, H.; Pack, J.R. Metropolitan Fragmentation and Suburban Homogeneity. _Urban Stud._**1977**, _14_, 191-201. [CrossRef]
* (29) Massey, D.S.; Denton, N.A. _American Apartheid: Segregation and the Making of the Underclass_; Harvard University Press: Cambridge, MA, USA, 1993.
* (30) Epple, D.; Zelenitz, A. The Implications of Competition Among Jurisdictions: Does Tiebout Need Politics? _J. Political Econ._**1981**, _89_, 1197-1217. [CrossRef]
* (31) Sjoquist, D.L. The Effect of the Number of Local Governments on Central City Expenditures. _Natl. Tax J._**1982**, _35_, 79-87.
* (32) Teske, P.; Schneider, M.; Mintron, M.; Best, S. Establishing the micro foundations of a macro theory: Information, movers, and the comparative local market for public goods. _Am. Political Sci. Rev._**1993**, _87_, 702-713. [CrossRef]
* (33) Basolo, V.; Lowery, D. Delineating the Regional Market in Studies of Inter-City Competition. _Urban Geogr._**2010**, _31_, 369-384. [CrossRef]
* (34) Braun, E. City Marketing: Towards an integrated approach. Ph.D. Dissertation, Erasmus University, Rotterdam, The Netherlands, 2008.
* (35) Kavaratzis, M.; Hatch, M.J. The dynamics of place brands: An identity-based approach to place branding theory. _Mark. Theory_**2013**, _13_, 69-86. [CrossRef]
* (36) Florida, R. _The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community and Everyday Life_, Basic Books: New York, NY, USA, 2002.
* (37) Hospers, G.-J. Creative cities: Breeding places in the knowledge economy. _Knowl. Technol. Policy_**2003**, _16_, 143-162. [CrossRef]
* (38) Lucarelli, A.; Berg, P.O. City branding: A state-of-the-art review of the research domain. _J. Place Manag. Dev._**2011**, \\(4\\), 9-27. [CrossRef]
* (39) Antitrioiko, A.-V. City Branding as a Response to Global Intercity Competition. _Growth Chang._**2015**, _46_, 233-252. [CrossRef]
* (40) Heinrichs, D.; Krellenberg, K.; Fragkias, M. Urban responses to climate change: Theories and governance practice in cities of the Global South. _Int. J. Urban Reg. Res._**2013**, _37_, 1865-1878. [CrossRef]
* (41) Aggarwal, R.M. Strategic Bundling of Development Policies with Adaptation: An Examination of Delhi's Climate Change Action Plan. _Int. J. Urban Reg. Res._**2013**, _37_, 1902-1915. [CrossRef]
* (42) Metz, B.; Davidson, O.; Swart, R.; Pan, J. _Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report. Climate Change 2001: Mitigation_; Cambridge University Press: Cambridge, MA, USA, 2001.
* (43) Denton, F.; Wilbanks, T.J.; Abeysinghe, A.C.; Burton, I.; Gao, Q.; Lemos, M.C.; Masui, T.; O'Brien, K.L.; Warner, K. Climate-resilient pathways: Adaptation, mitigation, and sustainable development. In _Climate Change. 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change_; Cambridge University Press: New York, NY, USA, 2014; pp. 1101-1131.
* (44) Kalafatis, S.E. Identifying the Potential for Climate Compatible Development Efforts and the Missing Links. _Sustainability_**2017**, \\(9\\), 1642. [CrossRef]
* (45) Kalafatis, S.E. When Do Climate Change, Sustainability, and Economic Development Considerations Overlap in Cities? _Environ. Politics_**2018**, _27_, 115-138. [CrossRef]
* (46) Kalafatis, S.E. Comparing Climate Change Policy Adoption and Its Extension across Areas of City Policymaking. _Policy Stud. J._**2018**, _46_, 700-719. [CrossRef]
* (47) Kovsky, C.; Schneider, S.H. Global climate policy: Will cities lead the way? _Clim. Policy_**2003**, \\(3\\), 359-372. [CrossRef]
* (48) Bulkeley, H.; Kern, K. Local government and the governing of climate change in Germany and the UK. _Urban Stud._**2006**, _43_, 2237-2259. [CrossRef]
* (49) Jeffers, J.M. Double exposures and decision making: Adaptation policy and planning in Ireland's coastal cities during a boom-bust cycle. _Environ. Plan. A_**2013**, _45_, 1436-1454. [CrossRef]* Cashmore and Wejs (2014) Cashmore, M.; Wejs, A. Constructing legitimacy for climate change planning: A study of local government in Denmark. _Glob. Environ. Chang._**2014**, _24_, 203-212. [CrossRef]
* Kalafatis and Lemos (2017) Kalafatis, S.E.; Lemos, M.C. The emergence of climate change policy entrepreneurs in urban regions. _Reg. Environ. Chang._**2017**, _17_, 1791-1799. [CrossRef]
* Hendrick and Shi (2015) Hendrick, R.; Shi, Y. Macro-Level Determinants of Local Government Interaction: How Metropolitan Regions in the United States Compare. _Urban Aff. Rev._**2015**, _51_, 414-438. [CrossRef]
* Sharp et al. (2011) Sharp, E.B.; Daley, D.M.; Lynch, M.S. Understanding Local Adoption and Implementation of Climate Change Mitigation Policy. _Urban Aff. Rev._**2011**, _47_, 433-457. [CrossRef]
* Tanner et al. (2009) Tanner, T.; Mitchell, T.; Polack, E.; Guenther, B. Urban Governance for Adaptation: Assessing Climate Change Resilience in Ten Asian Cities. _IDS Work. Pap._**2009**, _315_, 1-47. [CrossRef]
* Anguelovski and Carmin (2011) Anguelovski, I.; Carmin, J. Something borrowed, everything new: Innovation and institutionalization in urban climate governance. _Curr. Opin. Environ. Sustain._**2011**, \\(3\\), 1-7. [CrossRef]
* Delman (2014) Delman, J. Climate change politics and Hangzhou's 'green city making'. In _Branding Chinese Megacities_; Berg, P.O., Bjorner, E., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2014; pp. 249-261.
* Ooi (2011) Ooi, C.-S. Paradoxes of City Branding and Societal Challenges. In _City Branding: Theory and Cases_; Dinnie, K., Ed.; Palgrave Macmillan: Houndsmills, UK, 2011; pp. 54-61.
* Boyne et al. (2005) Boyne, G.A.; Gould-Williams, J.S.; Law, J.; Walker, R.M. Explaining the adoption of innovation: An empirical analysis of public management reform. _Environ. Plan. C Gov. Policy_**2005**, _23_, 419-435. [CrossRef]
* Damanpour and Schneider (2009) Damanpour, F.; Schneider, M. Characteristics of Innovation and Innovation Adoption in Public Organizations: Assessing the Role of Managers. _J. Public Adm. Res. Theory_**2009**, _19_, 495-522. [CrossRef]
* Nelson and Svara (2012) Nelson, K.L.; Svara, J.H. Form of Government Still Matters: Fostering Innovation in U.S. Municipal Governments. _Am. Rev. Public Adm._**2012**, _42_, 257-281. [CrossRef]
* Marquart-Pyatt et al. (2014) Marquart-Pyatt, S.T.; McCright, A.M.; Dietz, T.; Dunlap, R.E. Politics eclipses climate extremes for climate change perceptions. _Glob. Environ. Chang._**2014**, _29_, 246-257. [CrossRef]
* Bae and Feiock (2013) Bae, J.; Feiock, R. Forms of Government and Climate Change Policies in US Cities. _Urban Stud._**2013**, _50_, 1-13. [CrossRef]
* Krause (2012) Krause, R.M. An Assessment of the Impact That Participation in Local Climate Networks Has on Cities' Implementation of Climate, Energy, and Transportation Policies. _Rev. Policy Res._**2012**, _29_, 585-604. [CrossRef]
* Zahran et al. (2008) Zahran, S.; Brody, S.D.; Vedlitz, A.; Grover, H.; Miller, C. Vulnerability and Capacity: Explaining Local Commitment to Climate-Change Policy. _Environ. Plan. C Gov. Policy_**2008**, _26_, 544-562. [CrossRef]
* Lee (2014) Lee, T. _Global Cities and Climate Change: Translocal Relations of Environmental Governance_; Routledge: New York, NY, USA, 2014.
* Lee and Jung (2018) Lee, T.; Jung, H.A. Mapping city-to-city networks for climate change action: Geographic bases, link modalities, functions, and activity. _J. Clean. Prod._**2018**, _182_, 96-104. [CrossRef]
* Davidson et al. (2019) Davidson, K.; Coenen, L.; Acuto, M.; Gleeson, B. Reconfiguring urban governance in an age of rising city networks: A research agenda. _Urban Stud._**2019**, _56_, 3540-3555. [CrossRef]
* Kalafatis et al. (2015) Kalafatis, S.E.; Lemos, M.C.; Lo, Y.-J.; Frank, K.A. Increasing information usability for climate adaptation: The role of knowledge networks and communities of practice. _Glob. Environ. Chang._**2015**, _32_, 30-39. [CrossRef]
* Kern (2019) Kern, K. Cities as leaders in EU multilevel climate governance: Embedded upscaling of local experiments in Europe. _Environ. Politics_**2019**, _28_, 125-145. [CrossRef]
* Rogers (2003) Rogers, E.M. _Diffusion of Innovations_, 5th ed.; Free Press: New York, NY, USA, 2003.
Faiza Mekhalfa
_Division Productique et Robotique_
_Centre de Developpement de Technologies Avancees_
Algiers, Algeria
[email protected]
Fouad Yacef
_Division Productique et Robotique_
_Centre de Developpement de Technologies Avancees_
Algiers, Algeria
[email protected]
## I Introduction
In recent years, there has been strong activity in precision agriculture (PA), particularly in its monitoring aspect. Precision agriculture employs data from multiple sources to improve crop yields and increase the cost-effectiveness of crop management strategies, including fertilizer inputs, irrigation management, and pesticide application [1]. PA offers the opportunity for a farmer to apply the right amount of treatment at the right time and at the right place [2]. Nowadays, unmanned aerial vehicles (UAVs) can be exploited in a variety of applications related to crop management, by capturing high spatial and temporal resolution images of the entire agricultural field. UAVs are considered more efficient than ground robots or satellite acquisition, since they allow a fast acquisition of the field with very high spatial resolution and at a low cost [3].
Among the most popular applications of UAVs in precision agriculture is weed mapping [4][5]. Weeds are undesirable plants that grow in agricultural crops and can cause several problems. They compete with crops for available resources such as water or even space, causing losses to crop yields [3]. Knowledge of weed infestation is therefore an essential prerequisite for the use of preventive measures in weed control. The challenge of crop/weed classification has been addressed with various machine learning techniques, the most popular of which are the Artificial Neural Networks (ANNs) family [6] and the Random Forest algorithm [7].
Based on the structural risk minimization principle of the statistical learning theory developed by Vapnik and Chervonenkis [8], Support Vector Machines (SVMs) can solve practical problems encountered with traditional classifiers regarding small training samples, nonlinearity, high dimensionality and local extrema [9]. The aim of this work is to use an SVM classifier to identify weeds in relation to soybean and soil, and to classify the weeds into grass and broadleaf, so that the appropriate herbicide can be applied to the detected weeds. The critical component of any classification challenge is the available data; a publicly available dataset is used to train and evaluate the machine learning algorithms. The image database collected by Dos Santos Ferreira et al. [5] contains over fifteen thousand images of soil, soybean, broadleaf and grass weeds. As a classical pattern recognition problem, crop/weed classification primarily consists of two critical subproblems: feature extraction and classifier design. Feature extraction is a crucial step to find suitable descriptors that can provide good discrimination between different classes. Principally, there are three main approaches for weed detection: based on color, shape and texture analysis. Various texture analysis techniques exist; texture features derived from the gray-level co-occurrence matrix (GLCM) [10] and Local Binary Patterns (LBP) [11] are the most popular because of their simplicity and adaptability. In this study, color, as a primary input feature, was combined with texture features for training the SVM classifier in order to discriminate between crop/weed plants and soil.
The remainder of the paper is organized as follows. Section II describes the methodology to develop the weed detection and classification system. In section III we present the experimental results to classify weeds in soybean crop images where performance comparison between different features is carried out, and we conclude our work in section IV.
## II Methodology
### _Image dataset_
The database used in this study is available online and can be downloaded from https://www.kaggle.com/fpeccia/weed-detection-in-soybean-crops. It was built by Dos Santos Ferreira et al. [5], using 400 images of soybean crop captured by a UAV. The Simple Linear Iterative Clustering (SLIC) algorithm [12] was used to segment the UAV images. The segments of each image that identified one of the four classes used in this experiment were annotated manually. The image dataset contained 15,336 segments: 3249 of soil, 7376 of soybean, 3520 of grass and 1191 of broadleaf weeds.
### _Feature extraction_
Feature extraction is one of the most important stages in pattern recognition. It generates an input vector, called a descriptor, for each image, which is then used as input to the multiclass classifiers. Although color attributes are effective in distinguishing between vegetation and soil, they become less effective when applied to classify plant species: sometimes the colors of weed and crop leaves look almost the same. In this study, color, as a primary input feature, was combined with texture features (GLCM, LBP) for discriminating soybean/weed plants and soil.
1) Color features: The color features are the means and standard deviations of the three RGB and the three HSV image bands.
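As an illustration, a minimal Python sketch of this color descriptor is given below, using OpenCV and NumPy; the function name and the feature ordering are our own choices, since the original Matlab implementation does not specify them.

```python
import cv2
import numpy as np

def color_features(bgr_image):
    """12-D color descriptor: per-channel mean and standard deviation
    of the RGB (loaded as BGR by OpenCV) and HSV representations."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    feats = []
    for img in (bgr_image, hsv):
        pixels = img.reshape(-1, 3).astype(np.float64)
        feats.extend(pixels.mean(axis=0))  # three channel means
        feats.extend(pixels.std(axis=0))   # three channel standard deviations
    return np.asarray(feats)               # shape (12,)
```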
2) Gray-Level Cooccurrence Matrix (GLCM): Textural analysis is a very useful tool for discrimination of weeds from the main crop [13]. One of the earliest methods used for texture feature extraction was proposed by Haralick et al. [10], known as Gray-Level Cooccurrence Matrix (GLCM) and since then it has been widely used in many texture analysis applications.
GLCM is a second-order statistical texture analysis method. It examines the spatial relationship among pixels and defines how frequently a combination of pixels is present in an image in a given direction \(\theta\) and at a given distance \(d\). Various research studies use \(d\) values ranging from 1 to 10; applying a large displacement value to a fine texture would yield a GLCM that does not capture detailed textural information. The GLCM directions \(\theta\) of analysis are: horizontal (0\({}^{\circ}\) or 180\({}^{\circ}\)), vertical (90\({}^{\circ}\) or 270\({}^{\circ}\)), right diagonal (45\({}^{\circ}\) or 225\({}^{\circ}\)) and left diagonal (135\({}^{\circ}\) or 315\({}^{\circ}\)) (see Fig. 1).
Let \\(I\\) be a given grey scale image. Let _Ng_ be the total number of grey levels in the image. The Grey Level Co-occurrence Matrix defined by Haralick [14] is a square matrix \\(p\\), where the (_i_,_j_)\\({}^{th}\\) entry of \\(p\\) represents the number of occasions a pixel with intensity \\(i\\) is adjacent to a pixel with intensity \\(j\\). The normalized co-occurrence matrix _p_\\({}_{d}\\) is obtained by dividing each element of \\(p\\) by the total number of co-occurrence pairs in \\(p\\).
The fourteen textural features proposed by Haralick et al. [10] contain information about image texture characteristics such as homogeneity, gray-tone linear dependencies, contrast, the number and nature of boundaries present, and the complexity of the image. We used nine textural features in our study. The following equations define these features [14].
Energy: This statistic is also called Uniformity or Angular second moment. It measures the textural uniformity that is pixel pair repetitions. Energy reaches a maximum value equal to one for a constant image.
\\[Energy=\\sum_{i}\\sum_{j}p_{d}(i,j)^{2} \\tag{1}\\]
Contrast is a measure of intensity or gray level variations between a pixel and its neighbor over the whole image. Large contrast reflects large intensity differences in GLCM. Contrast is 0 for a constant image.
\\[Contrast=\\sum_{i}\\sum_{j}(i-j)^{2}p_{d}(i,j) \\tag{2}\\]
Entropy: This feature measures the disorder or complexity of an image. Complex textures tend to have high entropy.
\[Entropy=-\sum_{i}\sum_{j}p_{d}(i,j)\log(p_{d}(i,j)) \tag{3}\]
Homogeneity: This feature is also called as Inverse Difference Moment. It measures image homogeneity as it assumes larger values for smaller gray tone differences in pair elements. It is more sensitive to the presence of near diagonal elements in the GLCM. Homogeneity is 1 for a diagonal GLCM.
\\[Homogeneity=\\sum_{i}\\sum_{j}\\frac{p_{d}(i,j)}{1+(i-j)^{2}} \\tag{4}\\]
Correlation: The correlation feature is a measure of gray tone linear dependencies in the image. Correlation is 1 or -1 for a perfectly positively or negatively correlated image.
\[Correlation=\sum_{i}\sum_{j}p_{d}(i,j)\frac{(i-\mu_{x})(j-\mu_{y})}{\sigma_{x}\sigma_{y}} \tag{5}\]
where \\(\\mu_{x},\\mu_{y}and\\sigma_{x},\\sigma_{y}\\) are the means and standard deviations and are expressed as:
Fig. 1: The direction angles for GLCM.
\\[\\begin{split}\\mu_{x}&=\\sum_{i}\\sum_{j}ip_{d}(i,j)\\\\ \\mu_{y}&=\\sum_{i}\\sum_{j}jp_{d}(i,j)\\\\ \\sigma_{x}&=\\sqrt{\\sum_{i}\\sum_{j}(i-\\mu_{x})^{2}p_{ d}(i,j)}\\\\ \\sigma_{y}&=\\sqrt{\\sum_{i}\\sum_{j}(j-\\mu_{y})^{2}p_{ d}(i,j)}\\end{split} \\tag{6}\\]
The moments are the statistical expectations of certain power functions of a random variable and are characterized as follows. Moment 1 is the mean, which is the average of the pixel values in an image, and is represented as
\\[Mean=\\sum_{i}\\sum_{j}(i-j)p_{d}(i,j) \\tag{7}\\]
Moment 2 is the standard deviation that can be denoted as
\[Standard\ deviation=\sum_{i}\sum_{j}(i-j)^{2}p_{d}(i,j) \tag{8}\]
Moment 3 measures the degree of asymmetry in the distribution and it is defined as skewness
\\[Skewness=\\sum_{i}\\sum_{j}(i-j)^{3}p_{d}(i,j) \\tag{9}\\]
Moment 4 measures the relative peak or flatness of a distribution and is also known as kurtosis:
\\[Kurtosis=\\sum_{i}\\sum_{j}(i-j)^{4}p_{d}(i,j) \\tag{10}\\]
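For reference, a minimal sketch of this nine-feature GLCM descriptor using scikit-image is shown below (assuming a recent version where the functions are named `graycomatrix`/`graycoprops`). Energy, contrast, homogeneity and correlation come from the library, while entropy (Eq. (3)) and the four moments (Eqs. (7)-(10)) are computed directly from the normalized matrix; the small epsilon stabilizing the logarithm is our addition.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, levels=256):
    """Nine GLCM features at distance d=1, direction 0 degrees.
    `gray` must be a uint8 image with values below `levels`."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=levels, normed=True)
    p = glcm[:, :, 0, 0]                   # normalized co-occurrence matrix
    feats = [graycoprops(glcm, prop)[0, 0]
             for prop in ('energy', 'contrast', 'homogeneity', 'correlation')]
    feats.append(-np.sum(p * np.log(p + 1e-12)))         # entropy, Eq. (3)
    i, j = np.indices(p.shape)
    d = (i - j).astype(np.float64)
    feats += [np.sum(d ** k * p) for k in (1, 2, 3, 4)]  # moments, Eqs. (7)-(10)
    return np.asarray(feats)               # shape (9,)
```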
3) Local binary patterns (LBP):
Local Binary Patterns (LBP) is a kind of gray-scale texture operator that is used for describing the spatial structure of an image texture [11]. Due to its discriminative power and computational simplicity, LBP texture extractor has become a popular approach in various applications [15].
The original LBP operator [11] forms labels for the image pixels by thresholding the 3 x 3 neighborhood of each pixel with the center value and considering the result as a binary number. The histogram of these \\(2^{8}=256\\) different labels can then be used as a texture descriptor.
The LBP operator was extended to use neighborhoods of different sizes [16]. Using a circular neighborhood and bilinearly interpolating values at non-integer pixel coordinates allows any radius and number of pixels in the neighborhood. Fig. 2 illustrates three neighbor-sets, where the notation (_M_, _R_) denotes a neighborhood of \(M\) sampling points on a circle of radius \(R\).
Given a pixel at \\((x_{c},y_{c})\\), the resulting LBP can be expressed in decimal form as:
\[LBP_{M,R}(x_{c},y_{c})=\sum_{m=0}^{M-1}s(i_{m}-i_{c})2^{m} \tag{11}\]
where \\(i_{m},i_{c}\\) are respectively gray-level values of the central pixel and \\(M\\) surrounding pixels in the circle neighborhood with a radius \\(R\\), and function _s_(_x_) is defined as:
\\[s(x)=\\left\\{\\begin{array}{ll}1&\\quad\\text{if }x\\geq 0\\\\ 0&\\quad\\text{if }x<0\\end{array}\\right. \\tag{12}\\]
After the LBP extraction, each pixel in an image is replaced by a binary pattern, except at the borders of the image, where not all of the neighbor values exist. The feature vector of an image consists of a histogram of the pixel LBPs. The length of the histogram is \(2^{M}\), since each possible LBP is assigned a separate bin. To remove the rotation effect, a rotation-invariant LBP is proposed in [16]:
\\[LBP_{M,R}^{ri}=\\min\\{ROR(LBP_{M,R},i),i=0,1, ,M-1\\} \\tag{13}\\]
where \\(ROR(x,i)\\) performs an \\(i\\) -step circular bit-wise right shift on \\(x\\).
Uniform Local Binary Patterns are patterns with at most two circular 0-1 and 1-0 transitions. For example, 00000000 (0 transitions) and 01110000 (2 transitions) are both uniform whereas 11001001 (4 transitions) and 01010011 (6 transitions) are not. Selecting only uniform patterns contributes to both reducing the feature dimensionality and improving the performance of classifiers using the LBP features.
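As a concrete example, the rotation-invariant uniform LBP histogram can be computed with scikit-image as sketched below; with `method='uniform'` and \(M=8\), the operator yields \(M+2=10\) labels, matching the 10 LBP features used later in the experiments.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(gray, M=8, R=1):
    """Normalized histogram of rotation-invariant uniform LBP codes."""
    codes = local_binary_pattern(gray, P=M, R=R, method='uniform')
    hist, _ = np.histogram(codes, bins=np.arange(M + 3), density=True)
    return hist  # shape (M + 2,) = (10,) for M = 8
```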
### _Support vector machine classifier_
Once a feature descriptor is calculated, the next step deals with crop/weed classification. The support vector machine (SVM) classifier is one of the most successful machine learning methods, because it is robust, accurate and effective even when using a small training sample.
Support vector machines were originally developed for binary classification, but they can be adapted to handle multiclass classification tasks. The basic idea of SVM is to find an optimal hyperplane separating data points of different classes. Both separable and non-separable problems are handled by SVM in the linear and nonlinear cases. The idea behind SVM is to map the original data points from the input space to a high-dimensional one, called the feature space. The mapping is done by a suitable choice of kernel function [9].
Fig. 2: LBP Neighboring Pixels System.
To apply SVM to image classification, we are given a certain number \(n\) of training samples, each consisting of two parts: a \(d\)-dimensional vector of image features and the corresponding class label (either +1 or -1) [17]:
\\[E=\\{(x_{i},y_{i})/x_{i}\\in\\mathbb{R}^{d},y_{i}\\in\\{-1,1\\},i=1, n\\} \\tag{14}\\]
SVM maps the d-dimensional input vector \\(x\\) from the input space to the \\(d_{h}\\)-dimensional feature space using a nonlinear function \\(\\varphi(.):\\mathbb{R}^{d}\\longrightarrow\\mathbb{R}^{d_{h}}\\). The separating hyperplane in the feature space is then defined as
\\[w.\\varphi(x)+b=0\\ /\\ w\\in\\mathbb{R}^{d_{h}},b\\in\\mathbb{R} \\tag{15}\\]
The classifier should satisfy the condition of existence of \\(w\\) and \\(b\\) such that:
\\[y_{i}(w.\\varphi(x_{i})+b)\\geq 1 \\tag{16}\\]
However, in practical applications, data of both classes are overlapping, which makes a perfect linear separation impossible. Therefore, a restricted number of misclassifications should be tolerated around the margin. The resulting optimization problem for SVM where the violation of the constraints is penalized is given as:
\\[\\left\\{\\begin{array}{l}\\min\\frac{1}{2}\\|w\\|^{2}+C\\sum_{i=1}^{n}\\xi_{i}\\ \\ \\ \\ \\ \\ \\ \\ \\text{such that}\\\\ y_{i}(w.\\varphi(x_{i})+b)\\geq 1-\\xi_{i}\\\\ \\xi_{i}\\geq 0\\end{array}\\right. \\tag{17}\\]
where \\(\\xi_{i}\\) is the relaxation factor considering classification error and \\(C\\) is the cost parameter that controls the tradeoff between allowing training errors and forcing strict margins (i.e. empirical risk minimization [9].
Typically, this constrained optimization problem is referred to as the primal optimization problem. It can be rewritten in the dual space using Lagrange multipliers \(\alpha_{i}\geq 0\). The solution should maximize the following expression:

\[\max_{\alpha}\ \sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}K(x_{i},x_{j})\quad\text{s.t.}\ \ 0\leq\alpha_{i}\leq C,\ \sum_{i=1}^{n}\alpha_{i}y_{i}=0, \tag{18}\]

where \(K(x_{i},x_{j})=\varphi(x_{i})\cdot\varphi(x_{j})\) is the kernel function.
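For illustration, the sketch below trains a linear multiclass SVM with scikit-learn, whose `SVC` is built on an SMO-type solver and supports a one-vs-one decision function; the synthetic feature matrix, the 50/50 split and \(C=1\) are placeholders rather than the exact experimental setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 22))    # stand-in for color+texture descriptors
y = rng.integers(0, 4, size=400)  # 0: soil, 1: soybean, 2: grass, 3: broadleaf

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
clf = SVC(kernel='linear', C=1.0, decision_function_shape='ovo')
clf.fit(X_train, y_train)
print('accuracy: %.2f%%' % (100.0 * clf.score(X_test, y_test)))
```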
## III Experimental Results

To evaluate the proposed approach, 400 images belonging to four classes (100 images for each class) were extracted from the database. They include Soil, Soybean (crop), Broadleaf and Grass (weeds). A sample of the segmented images of each class used in the experiments is shown in Fig. 4.
As the primary input features for the SVM classifier, we used the mean and standard deviation attributes of each channel of the RGB and HSV color spaces. The color features were then combined with the texture features (GLCM and LBP) for discriminating soybean/weed plants and soil. The horizontal direction 0\({}^{\circ}\) with a distance of 1 (nearest neighbor) was used to calculate the GLCM. Then, the nine feature values mentioned in Section II (contrast, correlation, energy, entropy, homogeneity, mean, standard deviation, skewness and kurtosis) were calculated as the texture features of each image. The LBP method uses the uniform rotation-invariant LBP operator to extract texture features; the number of neighboring pixels \(M\) is set to 8 and the radius \(R\) is set to 1. The LBP algorithm generates a feature matrix with 10 image features. The obtained feature matrices and label values were divided randomly into a training set and a testing set, in order to train the SVM classifiers (one-vs-all and one-vs-one). The SVM kernel type is set to linear, and the training stage was carried out using the sequential minimal optimization (SMO) algorithm [20].
All the algorithms were developed in the Matlab environment and tested on an Intel Core i3 computer with a 2.53 GHz processor and 3 GB of RAM. The performance was evaluated by means of the classification accuracy, which refers to the ability of the algorithm to predict the correct class label for instances with unknown class labels (the testing set), and by confusion matrices, which present the correctness of each class and the percentage of confusion of a class with the others. Because the training data are generated randomly, all reported scores are averaged over 10 iterations for all experiments.

The one-vs-one strategy achieved the most outstanding performance compared to the one-vs-all method. The overall classification accuracy using the one-vs-one approach reaches 96.17%, which is over 2% higher than that of the one-vs-all approach (i.e., 93.92%). As shown by the execution times, the SVM one-vs-one strategy is also much faster than the one-vs-all approach. Indeed, the training time of an SVM classifier increases significantly with the number of training samples. Thus, since the number of samples needed to train each SVM of the one-vs-one strategy is smaller, it is generally faster to train the 6 SVMs of the one-vs-one method than the 4 SVMs of the one-vs-all approach.
## IV Conclusion
In this work, we have applied an SVM classifier to UAV images to discriminate crops, weeds and soil. We have evaluated the following input features: color attributes, GLCM texture features and the LBP extractor. Confusion matrices have been obtained for the different combinations. We notice that the misclassifications occurred mainly between the crop and weed classes, and these were reduced by adding texture information to the color features. Soil could be accurately discriminated using only color features, since soil has a strong color difference with green vegetation. The LBP extractor combined with color features produced the most consistent classification accuracy. The SVM classifier with the one-vs-one approach achieved excellent results, with accuracy higher than 96% in the classification of all classes, and is computationally effective.

In future work, we plan to explore other features for training the classifiers and to analyze the effects of other machine learning algorithms for classifying crop images. In particular, we will investigate the use of deep learning to reduce the confusion between crops and weeds.
## References
* [1] M.A. Friedl, \"Remote Sensing of Croplands,\" in Comprehensive Remote Sensing, Vol. 6, Elsevier, 2018, pp.78-95.
* [2] C. Zhang, and J. M. Kovacs, \"The application of small unmanned aerial systems for precision agriculture: a review,\" Precision agriculture vol. 13, pp. 693-712, 2012.
* [3] D. C.Tsouros, S. Bibt, P. G. Sarigianmidis, \"A Review on UAV-Based Applications for Precision Agriculture,\" Information. vol. 10, pp. 349-375, 2019.
* [4] M. D. Bah, A. Hafiane, R. Canals, \"Weeds detection in UAV imagery using SLIC and the hough transform,\" In Proceedings of the Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, pp. 1-6, 28 November-1 December 2017.
* [5] A. Dos Santos Ferreira, D. M. Freitas, G. G. da Silva, H. Pistori, and M. T. Folhes, \"Weed detection in soybean crops using ConvNets,\" Comput. Electron. Agric. vol. 143, pp. 314-324, 2017.
* [6] H. Huang, J. Deng, Y. Lan, A. Yang, X. Deng, and L. Zhang, \"A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery,\" PLoS ONE, vol. 13, pp. 1-19,2018.
* [7] A. I. De Castro, J. Torres-Sánchez, J. M. Peña, F. M. Jiménez-Brenes, O. Csillik, and F. López-Granados, "An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery," Remote Sensing, vol. 10, pp. 285-306, 2018.
* [8] V. Vapnik and A. Chervonenkis, "A note on one class of perceptrons," Automation and Remote Control, vol. 25, 1964.
* [9] N. Cristianini and J. Shawe-Taylor, Support Vector Machines, Cambridge University Press, 2000.
* [10] R. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Trans. on Systems, Man and Cybernetics, vol. SMC-3, pp. 610-621, 1973.
* [11] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on feature distributions," Pattern Recognition, vol. 29, pp. 51-59, 1996.
* [12] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua and S. Susstrunk, \"Slic superpixels compared to state-of-the-art superpixel methods,\" IEEE Trans. Pattern Anal. Mach. Intell. vol. 34, pp. 2274-2282, 2012.
* [13] K. Kawamura, H. Asai, T. Yasuda, P. Soisouvanh, and S. Phongchanmixay, "Discriminating crops/weeds in an upland rice field from UAV images with the SLIC-RF algorithm," Plant Production Science, Taylor & Francis, 2020, pp. 1-18.
* [14] N. Zayed and H.A. Elnemr, \"Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities\". Int J Biomed Imaging, 2015.
* [15] D. Huang, C. Shan, M. Ardabilian, Y. Wang and L. Chen, \"Local Binary Patterns and Its Application to Facial Image Analysis: A Survey,\" in IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 41, no. 6, pp. 765-781, Nov. 2011.
* [16] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
* [17] F. Mekhalfa and N. Nacereddine, "Multiclass classification of weld defects in radiographic images based on support vector machines," 10th International Conference on Signal-Image Technology and Internet-Based Systems (SITIS), 23-27 November 2014, Marrakech, Morocco, pp. 1-6.
* [18] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Min. Knowl. Discov., vol. 2, pp. 121-167, 1998.
* [19] J. Milgram, M. Cheriet and R. Sabourin, One Against One or One Against All: Which One is Better for Handwriting Recognition with SVMs, Proceedings of 10th International Workshop on Frontiers in Handwriting Recognition, 2006.
* [20] J. C. Platt, "Fast training of support vector machines using sequential minimal optimization," http://research.microsoft.com/pubs/68391/smo-book.pdf.
for LiDAR-Based 3D Moving Object Segmentation
Jiadai Sun Yuchao Dai\\({}^{*}\\) Xianjing Zhang Jintao Xu Rui Ai Weihao Gu Xieyuanli Chen
J. Sun and Y. Dai are with Northwestern Polytechnical University, China.X. Zhang, J. Xu, R. Ai and W. Gu are with HAOMO.AI Tech. Co., Ltd.\\({}^{*}\\) corresponding author: [email protected] work has partially been funded by the National Key Research and Development Program of China under Grant 2018AAA0102803, and by the HAOMO.AI Technology Co. Ltd.
## I Introduction
Environmental perception can help vehicles observe and understand the surrounding situation. The ability to recognize and distinguish between dynamic and static objects in the environment is the key to safe and reliable autonomous navigation. At the same time, this information can also be used for many downstream tasks, such as avoiding obstacles [1], static map construction [2], and path planning [3]. Therefore, being able to perform accurate and reliable moving object segmentation (MOS) based on LiDAR sequences online is a key capability to improve the perception of autonomous mobile systems. To reason about the motion of the surrounding objects, one needs to exploit 4D spatio-temporal information.
MOS can be viewed as a higher-level two-class \"semantic\" segmentation task. Instead of distinguishing the basic semantic classes, e.g., humans, vehicles, and buildings, MOS infers the dynamic properties of objects and separates the _actually moving_ objects, e.g., driving cars and pedestrians, from static or non-moving objects e.g., buildings and parked cars. For point cloud segmentation, popular existing solutions can be divided into point cloud-based [4, 5], voxel-based [6, 7], and range image-based [8, 9, 10]. Point-based methods can extract effective features from unordered point clouds, but they are difficult to scale effectively to large-scale point cloud data. Sparse voxels convolution [6, 11] can reduce the computational burden of point clouds, but voxelization will introduce information loss. Range images are used as a comparably lightweight intermediate representation and attractive for online applications. However, it causes boundary-blurring issues due to back-projection. Instead of using a single representation, we propose to first use a range image-based backbone to obtain a _coarse_ segmentation and then use a lightweight 3D voxel sparse convolution module to _refine_ the segmentation results. Using such a coarse-to-fine architecture, our method combines the advantages of different representation modalities of LiDAR scans to alleviate the boundary-blurring issue, while maintaining good efficiency. Example results are shown in Fig. 1.
Different from existing segmentation methods, which are done on a single LiDAR scan, determining whether an object is moving or not usually requires multi-frame observations. Chen et al. [2] propose LMNet to directly use off-the-shelf segmentation networks [8, 9, 10]. It exploits the spatial-temporal information by simply concatenating the residual images calculated from multiple continuous scans. In contrast to LMNet, we propose a novel dual-branch structure that first deals with spatial and temporal information separately and then fuses them using motion-guided attention modules.
The main contribution of this paper is a novel deep neural network to tackle online LiDAR-MOS in 3D data. Our method uses a dual-branch structure bridged by motion-guided attention modules to exploit spatial-temporal information from sequential LiDAR scans. We use a coarse-to-fine architecture fusing range-image and point-cloud representations to reduce
Fig. 1: LiDAR-MOS comparison between our method and LMNet. The upper row shows the segmentation results on range images and the lower row shows the results in 3D point clouds. Red pixels/points are moving objects while black ones represent static objects. Blue circles highlight wrong predictions. (a) LMNet produces many artifacts on the borders of objects, which are not obvious in the range image. (b) Exploiting spatial-temporal information and different representations of LiDAR scans, our method reduces the artifacts and achieves state-of-the-art performance.
the artifacts on the borders of the objects without applying a kNN post-processing and a semantic refinement. Based on that, our method achieves online performance, i.e., performs faster than the frame rate (10 Hz) of a typical 3D LiDAR sensor. Our method achieves the state-of-the-art LiDAR-MOS performance on the SemanticKITTI-MOS benchmark [2]. When using the proposed extra data from KITTI-road sequences, our method gains around 10% improvement on the hidden test of the benchmark. We will release the extra annotated data of the KITTI-road dataset together with the implementation of our method to support future research.
Our contributions can be summarized as follows: (i) We propose a dual-branch structure bridged by motion-guided attention modules to better exploit the temporal motion information in residual images. (ii) We use a coarse-to-fine architecture to reduce blurred artifacts on object borders. (iii) Our method achieves the state-of-the-art performance in LiDAR-MOS on the SemanticKITTI-MOS benchmark.
## II Related Work
Moving object segmentation (MOS) has been well studied in the literature using image sequences [12] or RGB-D data [13]. However, it is still challenging for LiDAR data due to the sparsity and uneven distribution of the range measurements. Also, how to exploit 4D spatio-temporal information from a point cloud sequence is still an open question. Here, we focus on approaches using only LiDAR sensors and refer readers to existing surveys [12, 13] for visual approaches.
There are some geometry-based methods [14, 15] that tackle the LiDAR-MOS problem; they do not need training data or a training procedure, but occasionally result in incomplete or inaccurate detection of moving objects. There are also map cleaning-based methods that can be used to separate moving objects from static LiDAR maps. For example, Kim et al. [16] exploit the consistency check between the query scan and the pre-built map to remove dynamic points. The map is then refined using a multi-resolution false prediction reverting algorithm. Lim et al. [17] remove dynamic objects by checking the occupancy of each sector of the LiDAR scans and then revert the ground plane by region growth. In contrast, Arora et al. [18] segment the ground plane and then remove the "ghost effect" caused by moving objects during mapping. Chen et al. [19] use a map cleaning method with clustering and multi-object tracking to track the trajectories of different objects and generate training labels for LiDAR-MOS based on the tracking results. Even though such map cleaning methods can distinguish moving and static objects, they usually can only run offline and are not suitable for online MOS.
For online LiDAR-MOS, there are deep network-based methods, which use generic end-to-end trainable models to learn local and global statistical relationships directly from data. For example, point cloud scene flow methods [20, 21, 22] usually estimate motion vectors between two consecutive scans. Based on the predicted motion vectors, they separate moving and non-moving objects by estimating the velocity of every point, which may not differentiate between slowly moving objects and sensor noise. It is worth mentioning that most of them can hardly handle the large scans (about 100k points) obtained by a LiDAR sensor, and their real-time performance is difficult to guarantee. In addition, it is also possible to determine whether an object is moving according to the displacement of its bounding box, which requires some prior information from object detection or tracking [23, 24, 25, 26, 27].
Semantic segmentation can be viewed as a related step towards MOS. Recently, LiDAR-based semantic segmentation methods operating only on sensor data have achieved great success [4, 8, 9, 10, 28, 29]. However, most single LiDAR-frame semantic segmentation methods only find _movable_ objects, e.g., vehicles and humans, but do not distinguish between _actually moving_ objects, such as walking pedestrians or driving cars, and non-moving/static objects, like building structures or parked cars. Wang et al. [30] also tackle the problem of segmenting things that could move from 3D laser scans of urban scenes, e.g., cars, pedestrians, and bicyclists. Ruchti et al. [31] use a learning-based method to predict the probabilities of potentially movable objects. Based on the semantic segmentation results [8], Chen et al. [32] propose a semantic LiDAR SLAM, which detects and filters out moving objects by checking the semantic consistency between online observation and semantic map representation.
In contrast to the single-frame methods, some methods [4, 28, 29] operate on multiple point clouds or an aggregated point cloud submap to achieve better segmentation results and at the same time separate moving and non-moving objects. However, these methods perform operations directly on point clouds, which are often laborious and difficult to train. Furthermore, most of them are both time-consuming and resource-intensive, which might not be applicable for autonomous driving.
The most related work to ours is LMNet [2], which also separates moving and non-moving objects using LiDAR scans. Instead of designing a new network structure, it reuses off-the-shelf LiDAR semantic segmentation methods [8, 9, 10]. To obtain inter-frame motion information, it feeds multi-frame residual images directly into the existing structure as extra channels of the range image. Such a simple concatenation without a dedicated design often cannot accurately exploit the motion information contained in the spatio-temporal scan sequence. Moreover, LMNet only uses the range image representation, which introduces many artifacts during back-projection to point clouds, as shown in Fig. 1. Different from LMNet, we propose a novel network. It uses two specific branches to extract appearance features from range images and temporal motion features from residual images, respectively. It then uses motion-guided attention modules at different scales to fuse them. In the final stage of decoding, we back-project the 2D features to the 3D point cloud and use a lightweight sparse convolution module to refine the segmentation results.
## III Our Approach
### _Preliminaries_
**Range Image Representation.** The range image is a lightweight data representation obtained by projecting the 3D point cloud into 2D space. Its advantages are that it alleviates the massive computational cost of directly processing point cloud data, and that it facilitates the use of mature 2D convolutional neural networks that have been well studied in vision-based tasks. The range image is widely adopted in various tasks [8, 9, 10, 33, 34], so we only give a quick review here. Each LiDAR point \(\mathbf{p}=(x,y,z)\) with Cartesian coordinates is transformed to image coordinates by a spherical mapping \(\Pi:\mathbb{R}^{3}\mapsto\mathbb{R}^{2}\), as follows:
\\[\\left(\\begin{array}{c}u\\\\ v\\end{array}\\right)=\\left(\\begin{array}{cc}\\frac{1}{2}\\left[1-\\arctan(y,x) \\,\\pi^{-1}\\right]&w\\\\ \\left[1-\\left(\\arcsin(z\\,r^{-1})+\\mathrm{f_{up}}\\right)\\mathrm{f^{-1}}\\right] \\ h\\end{array}\\right), \\tag{1}\\]
where \\((u,v)\\) are image coordinates, \\((h,w)\\) are the height and width of the desired range image, \\(\\mathrm{f}=\\mathrm{f_{up}}+\\mathrm{f_{down}}\\) is the vertical field-of-view of the sensor, and \\(r=||\\mathbf{p}||_{2}\\) is the range of each point. After that, we can use \\((u,v)\\) to index the 3D point and integrate its coordinates \\((x,y,z)\\), range \\(r\\), and intensity \\(e\\) as the five channels of the range image.
**Residual Images.** We follow LMNet [2] in using residual images to exploit the spatial-temporal information from sequential LiDAR scans. Generating a residual image between the current frame \(l\) and a previous frame \(k\) takes three steps. First, the relative pose is used to transform the previous scan \(k\) into the current coordinate system. Second, the transformed past LiDAR points are re-projected into a range image. Third, the residual \(d_{k,i}^{l}\) is computed for each pixel \(i\) as the normalized absolute difference between the ranges of the current frame \(l\) and the transformed frame \(k\), as
\\[d_{k,i}^{l}=\\begin{cases}|r_{i}-r_{i}^{k\\to l}|/r_{i}&i\\in\\text{ valid pixels},\\\\ 0&\\text{otherwise},\\end{cases} \\tag{2}\\]
where \\(r_{i}\\) is the range value of \\(\\mathbf{p}_{i}\\) from the current frame located at image coordinates \\((u_{i},v_{i})\\) and \\(r_{i}^{k\\to l}\\) is the corresponding range value from the transformed scan located at the same image pixel. Please refer to [2] for more details.
**Meta-Kernel Convolution.** As argued by Fan et al. [34], using 2D convolution on range images cannot fully exploit the 3D geometric information due to the dimensionality reduction of the spherical projection. Exploiting meta-kernel convolution, we can take advantage of the 3D geometric information by using the relative Cartesian coordinates of the \\(3{\\times}3\\) neighbors of the center, as shown in Fig. 3. A shared MLP takes these relative coordinates as input to generate nine weight vectors \\(w_{j}\\), and does element-wise product on the corresponding nine feature vectors \\(f_{j}\\). Finally, by passing a concatenation of the nine neighbors output \\(g_{j}\\) to a \\(1{\\times}1\\) convolution, we aggregate the information from different channels and different sampling locations to update the center feature vector.
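The following PyTorch sketch captures this data flow: relative 3x3 neighbor coordinates are mapped by a shared MLP to nine weight vectors, multiplied element-wise with the neighbor features, and aggregated by a 1x1 convolution. The MLP depth and hidden width are assumptions; see [34] for the exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaKernel(nn.Module):
    """Simplified Meta-Kernel convolution over a range-image feature map."""

    def __init__(self, channels):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, channels), nn.ReLU(inplace=True),
                                 nn.Linear(channels, channels))
        self.aggregate = nn.Conv2d(9 * channels, channels, kernel_size=1)

    def forward(self, feat, xyz):
        # feat: (B, C, H, W) features; xyz: (B, 3, H, W) point coordinates
        B, C, H, W = feat.shape
        nf = F.unfold(feat, 3, padding=1).view(B, C, 9, H, W)   # neighbor feats
        nxyz = F.unfold(xyz, 3, padding=1).view(B, 3, 9, H, W)
        rel = nxyz - xyz.unsqueeze(2)                 # relative coordinates
        w = self.mlp(rel.permute(0, 2, 3, 4, 1))      # (B, 9, H, W, C) weights
        g = nf * w.permute(0, 4, 1, 2, 3)             # element-wise product
        return self.aggregate(g.reshape(B, 9 * C, H, W))
```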
### _Network Overview_
We assume a given sequence of LiDAR scans \\(\\{\\mathbf{S}_{t}\\}_{t=1}^{T}\\) and poses \\(\\{\\xi_{t}\\}\\in\\mathbb{SE}(3)\\) provided by a SLAM system, where \\(t\\) represents the time step. The goal is to get accurate point-wise segmentation of moving objects for the current frame, using only the current and previous LiDAR scans. The system architecture is illustrated in Fig. 2. We propose to use both 2D range images and 3D point clouds to obtain accurate 3D segmentation. Our method is mainly based on range images and refined by a lightweight 3D point cloud network.
Our proposed network architecture is built upon SalsaNext [10], a single encoder-and-decoder network for LiDAR range image-based semantic segmentation. To make it suitable for MOS, we extend and modify it into a dual-branch and dual-head network, consisting of a range image branch (Enc-A) to encode the appearance feature, a residual image branch (Enc-M) to encode the temporal motion information, an image head (ImageHead) with skip connections to decode the features from both Enc-A and Enc-M, and a point head (PointHead) to further refine the segmentation results. Specifically, in the feature encoding stage, we first use the meta-kernel operator to better capture the 3D spatial information, and then use a motion-guided attention module to more effectively fuse the motion information extracted from residual images. In the final stage of decoding, besides the loss on the range image dealt with by the ImageHead, we also back-project the 2D features to 3D point clouds, and use a lightweight sparse convolution module (PointHead) to refine the segmentation results.

Fig. 2: Overview of our method. We extend and modify SalsaNext [10] into a dual-branch and dual-head architecture, consisting of a range image branch (Enc-A) to encode the appearance feature, a residual image branch (Enc-M) to encode the temporal motion information, and multi-scale motion-guided attention modules to fuse them. An image head with skip connections is then used to decode the features from both branches. Finally, we back-project 2D features to 3D points and use a point head to further refine the segmentation results. Specifically, BlockA and BlockE are the ResBlocks with dilated convolution, BlockB is the pooling and optional dropout layer, BlockC is the PixelShuffle and optional dropout layer, BlockD is the skip connection with optional dropout, and BlockF is the fully connected layer.

Fig. 3: Architecture of the Meta-Kernel Module [34]. According to the 3D coordinates stored in the range image and the input feature map, the weights of the \(3{\times}3\) neighborhood are calculated from the relative coordinates of the center point, and then a \(1{\times}1\) Conv is used to aggregate neighbor features to update the center feature.
### _Dual Branches with Motion Guided Attention Module_
Different from LMNet [2], which directly concatenates the range image and the residual images as the input of the original SalsaNext, we use two specific branches to extract appearance features from the range images and motion features from the residual images, respectively. To preserve descriptive features, we furthermore replace the average pooling in BlockB with SoftPool, as suggested by [35]. In Enc-A, we place one Meta-Kernel convolution layer after the first ResContextBlock [10] to learn dynamic weights from relative Cartesian coordinates, enabling the network to obtain more geometric information and making the convolution more suitable for range images. Different from the range image branch, Enc-M uses only one ResContextBlock to avoid overfitting to the residual images.
Inspired by video-based object segmentation methods [36, 37, 38], which use optical flow to guide the appearance features, we add a spatial and channel attention module [36] to exploit the motion information from the residual images. Such motion information enhances the appearance features extracted from the range images, i.e., it emphasizes the more salient areas in the appearance features. As illustrated in Fig. 4, we use a structure similar to Li et al. [36] to fuse the features of the two branches. We denote \(f_{a}\) as the appearance feature of the range images from Enc-A and \(f_{m}\) as the motion feature of the residual images from the Enc-M branch, and have:
\\[f^{\\prime}_{a} =f_{a}\\otimes\\mathrm{Sigmoid}\\left(\\mathrm{Conv}_{1\\times 1} \\left(f_{m}\\right)\\right), \\tag{3}\\] \\[f^{\\prime\\prime}_{a} =f^{\\prime}_{a}\\otimes\\left[\\mathrm{Softmax}\\left(\\mathrm{Conv}_{1 \\times 1}\\left(\\mathrm{APool}\\left(f^{\\prime}_{a}\\right)\\right)\\right)\\cdot C \\right]+f_{a}, \\tag{4}\\]
where all \\(f\\) represent feature map of size \\(C\\times h\\times w\\). \\(\\mathrm{APool}(\\cdot)\\) denotes average pooling in the spatial dimensions. Our method first uses a spatial attention to emphasize the spatial locations on the current appearance feature \\(f_{a}\\) using the motion feature \\(f_{m}\\) and generates a motion-salient feature \\(f^{\\prime}_{a}\\). Second, we adopt the channel-wise attention to strengthen the responses of essential attributes by channel-wise attention and generate the final spatial-temporal fused feature \\(f^{\\prime\\prime}_{a}\\).
### _Coarse-to-Fine: Point Refine Module via 3D SparseConv_
Although LMNet [2] can perform LiDAR-MOS only using a 2D segmentation network, the boundary-blurring effect is unavoidable due to the limited resolution of the feature maps and the dimensionality reduction of the range image representation. This leads to false positive predictions around object boundaries. To tackle this issue, we propose a coarse-to-fine strategy. Instead of only using the pixel-wise loss, we propose a point head to refine the segmentation results after the 2D convolutional network. This two-step coarse-to-fine strategy makes the supervision more effective and utilizes both pixel-wise and point-wise supervision. To this end, we back-project/re-index the feature maps of the last layer to the original point locations. Then, we refine point-wise segmentation results by combining this with spatial information performing a relatively lightweight point cloud convolution operation. Back-projection is performed with indices computed when the point cloud is projected to the 2D range image. As shown in Fig. 5, we back-project 2D feature maps \\((C\\times h\\times w)\\) from ImageHead into point-wise features \\((N\\times C)\\). Then, we use points in Cartesian coordinates \\((x,y,z)\\) and point-wise features to initially voxelize 3D sparse voxel. Finally, we use two branches of 3D sparse convolution and point cloud-based MLP to further refine the results using spatial geometry, and reduce the artifacts that occur around the object boundaries.
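The bridge between the 2D decoder and the PointHead is a simple gather through the stored projection indices; a sketch for a single scan follows, where `u` and `v` are the pixel coordinates saved when the cloud was projected (cf. Eq. (1)).

```python
import torch

def backproject_features(feat_2d, u, v):
    """feat_2d: (C, H, W) last 2D feature map; u, v: (N,) integer indices.
    Returns (N, C) point-wise features to be voxelized for the PointHead."""
    return feat_2d[:, v, u].t()
```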
### _Loss Functions_
Following [2, 10], two loss functions are used to supervise our network. The total loss function combines both the weighted cross-entropy and Lovász-Softmax losses [39] as \(\mathcal{L}=\mathcal{L}_{wce}+\mathcal{L}_{ls}\). To alleviate the imbalanced distribution over different classes, the cross-entropy loss function is weighted with the inverse square root frequency of each class, defined as
\\[\\mathcal{L}_{wce}(y,\\hat{y})=-\\sum\\alpha_{i}p\\left(y_{i}\\right)\\log\\left(p \\left(\\hat{y}_{i}\\right)\\right),\\ \\alpha_{i}=1/\\sqrt{f_{i}}, \\tag{5}\\]
where \\(y_{i}\\) and \\(\\hat{y}_{i}\\) are the true and predicted labels and \\(f_{i}\\) is the frequency of the \\(i^{th}\\) class. The Lovasz-Softmax loss can be
Fig. 4: Architecture of the Motion Attention module. The spatial attention and channel attention are used to fuse the moving feature from residual image and appearance feature from range image.
Fig. 5: Architecture of the PointHead module. According to the index of spherical projection, 2D features are back-projected to 3D point, and then we use a sparse voxel-based branch and point-based branch to extract point-wise features for more accurate classification. The upper row represents the voxel-branch and the lower is point-branch.
formulated as follows:
\\[\\mathcal{L}_{ls}=\\frac{1}{|C|}\\sum_{c\\in C}\\overline{\\Delta_{J_{c}}}(m(c)),\\,m_{i }(c)=\\left\\{\\begin{array}{ll}1-x_{i}(c)&\\text{if }c=y_{i}(c)\\\\ x_{i}(c)&\\text{otherwise}\\end{array}\\right., \\tag{6}\\]
where \\(|C|\\) is the class number, \\(\\overline{\\Delta_{J_{c}}}\\) represents the Lovasz extension of the Jaccard index, \\(x_{i}(c)\\in[0,1]\\) and \\(y_{i}(c)\\in\\{-1,1\\}\\) hold the predicted probability and ground truth label of pixel \\(i\\) for class \\(c\\), respectively.
The same loss function is applied at both the pixel and point levels in a two-stage training scheme. We first apply the loss at the pixel level to supervise the range image-based encoder-and-decoder backbone. Then, we freeze the 2D range image network and apply the loss point-wise to train the proposed PointHead. In this way, we train the network fusing supervision from both range image and point cloud representations.
### _Implementation Details_
We use the PyTorch [40] library to implement our method, which is trained with 4 NVIDIA RTX 3090 GPUs. The size of the range image is set to \\(64\\times 2048\\). We apply the same data augmentation as used in LMNet [2] during training. We minimize \\(\\mathcal{L}_{wce}\\) and \\(\\mathcal{L}_{ls}\\) using stochastic gradient descent with momentum 0.9 and weight decay 0.0001. The initial learning rate is set to 0.01. We use the implementation of 3D sparse convolution from TorchSparse [6] to implement our PointHead. We train the network using a two-stage training scheme. First, we train the 2D convolutional network with the image labels. After that, we freeze the parameters of the 2D encoder-decoder network and use the point cloud labels to train the PointHead separately.
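The two-stage scheme can be sketched as follows; `backbone2d` and `point_head` are placeholder modules standing in for the real networks.

```python
import torch
import torch.nn as nn

# dummy stand-ins for the real 2D backbone and the PointHead
backbone2d = nn.Conv2d(5, 2, kernel_size=1)
point_head = nn.Linear(32, 2)

opt = torch.optim.SGD(backbone2d.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=1e-4)
# ... stage 1: train the 2D encoder-decoder with pixel-wise labels ...

for p in backbone2d.parameters():   # stage 2: freeze the 2D network
    p.requires_grad = False
opt = torch.optim.SGD(point_head.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=1e-4)
# ... stage 2: train only the PointHead with point-wise labels ...
```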
## IV Experiments
In this section, we conduct a series of experiments on the SemanticKITTI-MOS dataset [2] to evaluate the quality of the MOS and different design considerations of our method.
### _Experiment Setups_
**Datasets.** We train and evaluate our method on the SemanticKITTI-MOS dataset [2], which uses the same training/test split as the original odometry dataset and remaps all 28 semantic classes into only two types: moving and non-moving/static objects. The dataset contains 22 sequences in total, where 10 sequences (19,130 frames) are split for training, 1 sequence (4,071 frames) for validation, and 11 sequences (20,351 frames) for testing.
There are currently only a few datasets available for 3D LiDAR-based MOS, and the ratio of moving objects in the current Semantic-KITTI MOS dataset is relatively small. We call a LiDAR scan a dynamic frame if the number of moving points in that frame is larger than 100, otherwise it is a static frame. The proportion of dynamic frames is only \\(25.77\\%\\) in the train split. To have more data at hand, we use an automatic label generation method [19] to first automatically generate coarse labels for the KITTI-road dataset 1 and then manually refine them to enrich the training data for this task. We label 12 sequences of KITTI-road, where 6 sequences (2,905 frames) are used for training and 6 sequences (2,889 frames) for validation. We will release this additional labeled dataset to facilitate further research.
Footnote 1: [http://www.cvlibs.net/datasets/kitti/raw_data.php?type=road](http://www.cvlibs.net/datasets/kitti/raw_data.php?type=road)
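The dynamic/static frame criterion is straightforward to implement; a sketch assuming binary per-point moving labels:

```python
import numpy as np

def is_dynamic_frame(point_labels: np.ndarray, threshold: int = 100) -> bool:
    """A scan is 'dynamic' if it contains more than `threshold` moving points.

    point_labels: (N,) binary array, 1 = moving, 0 = static.
    """
    return int(point_labels.sum()) > threshold
```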
Because of the unequal distribution of dynamic and static training samples, as also indicated in [41], we omit some frames from long runs of consecutive static frames to speed up the training, i.e., we use a smaller downsampled dataset for faster experiments. Experiments verified that reducing the training data leads to a slight decrease in the IoU of moving objects, but the effectiveness of each module is still demonstrated.
**Evaluation Metrics.** Following the protocols of LMNet [2], for quantifying the MOS performance, we measure the Jaccard Index or intersection-over-union (IoU) metric [42] over moving objects, which is given by
\\[\\text{IoU}=\\text{TP}/(\\text{TP}+\\text{FP}+\\text{FN}), \\tag{7}\\]
where TP, FP, and FN represent true positive, false positive, and false negative predictions for the moving class.
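A direct implementation of Eq. (7) from binary per-point predictions and ground truth labels might look like this (function and variable names are ours):

```python
import numpy as np

def moving_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU over the moving class; pred/gt are (N,) binary point labels."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    denom = tp + fp + fn
    return float(tp) / denom if denom > 0 else 0.0
```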
**Baselines.** Because few learning-based implementations for LiDAR-based MOS are available, we choose three typical approaches using different types of inputs as representatives. (1) Range Image View: LMNet [2] uses the residual images as additional channels together with the range images as input to a range image backbone and is trained with binary labels. A kNN post-processing [8] is used to reduce artifacts on objects' borders. We choose the best setting of LMNet (with SalsaNext backbone) for comparison. (2) Bird's Eye View (BEV): LiMoSeg [41] uses two successive LiDAR scans in 2D BEV representation to perform pixel-wise classification and can run at high frame rates on embedded platforms. (3) Point-Voxel View: Cylinder3D [7] uses a cylindrical partition and a point-level feature extractor for segmentation. We modify its open source code2 to input two consecutive aligned frames and train it from scratch with MOS labels to perform MOS.
Footnote 2: [https://github.com/xinge008/Cylinder3D](https://github.com/xinge008/Cylinder3D)
Since the implementation of LiMoSeg is not publicly available, we report the results from the original paper. The results of Cylinder3D and LMNet are from the retrained models using the same setup as used by our methods.
**Protocols.** We follow the protocols of LMNet [2], using the official dataset split to train and validate the network, using 8 residual images as the input of Enc-M and the range image with 5 input channels \\((x,y,z,r,e)\\) as the input of Enc-A, as described in Sec. III-A. The generation of the residual images is in line with [2]. Note that our method _does not_ use any semantic information of different classes, such as vehicles and buildings, to refine predictions. This means that our method only needs the binary moving/non-moving labels for training.
### _Evaluation Results and Comparisons_
The 3D LiDAR-MOS evaluation results in terms of moving objects IoU are shown in Tab. I. All the methods are evaluated on both the validation set (sequence 08), which is unseen during training, and the hidden test split (sequences 11-21) of the benchmark dataset. The implementation of the bird's eye view method LiMoSeg is not publicly available, and only the validation set result is reported in the original paper [41]; hence, its test result is missing. For LMNet, we use its released code with our sampled data protocol to retrain the model from scratch, which performs slightly worse than reported in the original paper. For a deeper insight, we report the results of our method using only the 2D segmentation network structure, called "Ours-v1", and the complete structure with the proposed PointHead, called "Ours-v2".
As can be seen in Tab. I, our method achieves significantly better performance than the state-of-the-art learning-based methods in terms of LiDAR-MOS. The improvements come from our designed dual-branch structure with the motion-guided attention and the coarse-to-fine scheme with the PointHead module. Our method using only the range-image backbone without kNN (Ours-v1) already outperforms most of the baseline methods and is on par with Cylinder3D, which is a dense point cloud-based semantic segmentation method but cannot achieve real-time performance due to the large amount of computation. When training with the proposed extra KITTI-road sequences, our method gains a better generalization and LiDAR-MOS performance on the hidden test set. Our method with the proposed PointHead (Ours-v2) achieves the state-of-the-art LiDAR-MOS performance with an IoU of \\(70.16\\%\\) in the SemanticKITTI-MOS benchmark, by exploiting and fusing both the range image and point cloud representations of the LiDAR data. Moreover, our method could also run online, which will be further discussed in Sec. IV-D.
More qualitative comparisons are shown in Fig. 6. As illustrated, the range image-based LMNet generates many wrong predictions on the borders of the objects, while the point cloud-based method Cylinder3D often misses detecting parts of the moving objects. Using the devised dual-branch structure and motion-guided attention and fusing both different representations of LiDAR scans, our method detects most points belonging to moving objects without bringing artifacts on the objects' borders.
### _Ablation Study_
In this section, we conduct several ablation experiments on the validation set (sequence 08) of the SemanticKITTI-MOS dataset to analyze the effectiveness of different components of our method. For the validation of each setup, we train 3 times and report the averaged results.
We first provide an ablation study on the architecture of the proposed network in Tab. II. We vertically compare each module with different setups of our proposed network to the vanilla LMNet (with SalsaNext backbone) (_a_). \"\\(\\Delta\\)\" refers to the improvement gained by different setups compared to the baseline (_a_). We report the results of using the proposed dual-branch architecture with motion-guided attention (MGAtten.) under (_b_). The dual-branch structure with attention improves the vanilla method (w/o kNN) by \\(2.74\\%\\) in terms of IoU. On this basis, we add the Meta-Kernel convolution (_c_), SoftPool (_d_), and combine them together (_e_). The performances further improve consistently. Our final setup gains an improvement of \\(5.06\\%\\) compared to the baseline without kNN.
We also divide the different setups horizontally into three groups: without a kNN post-processing (w/o kNN), with a kNN post-processing (w/ kNN), and with our proposed PointHead (w/ PointHead). When comparing the setups with and without kNN, we see a clear improvement after applying just a simple kNN in all setups. Instead of a kNN, we propose a PointHead to further refine the MOS results of the 2D range image network by fusing supervision from the 3D point level. As can be seen, using our proposed PointHead, we obtain even better results than both with and without kNN in all setups. Different from kNN, which is an extra post-processing module, our PointHead is part of the network and enables our method to exploit and fuse both representations of LiDAR scans in a more elegant end-to-end manner. More specifically, compared to the same setting, our proposed PointHead achieves a maximum absolute improvement of \\(3.35\\%\\) over its counterpart using a kNN and \\(8.25\\%\\) over its counterpart without post-processing. As shown in Fig. 1 and Fig. 6, the qualitative results also show that PointHead can better handle the blurred boundaries of moving objects. Filtering based on kNN votes is limited by the receptive field (size of k). In contrast, our module is trained end-to-end and its receptive field is more flexible, so it performs better than kNN post-processing.
Another ablation study on using extra training data from the KITTI-road sequences is shown in Tab. III. In the original SemanticKITTI dataset, sequence 08 is set as the validation set considering that it contains all the different categories of semantic classes. However, for the MOS task, we found that sequence 08 can hardly represent all the different situations needed to evaluate trained MOS models well, since it only contains LiDAR data collected in an urban environment. Therefore, we introduce extra KITTI-road data to provide more training and validation data collected in different environments such as country roads or highways.
As shown in Tab. III, we test both LMNet and our methods using different training setups. As can be seen, the models trained using extra KITTI-road data, (_iv_) and (_v_), achieve better performance on the hidden test set compared to their counterparts, (_ii_) and (_iii_) trained only using SemanticKITTI-MOS data. This indicates that our trained and validated models achieve good generalization ability using the proposed
\\begin{table}
\\begin{tabular}{l c c|c|c} \\hline \\hline
**Methods** & kNN & road & **validation** & **test** \\\\ \\hline LiMoSeg\\({}^{*}\\)[41] & & & 52.60 & - \\\\ Cylinder3D\\({}^{*}\\)[7] & & & 66.29 & 61.22 \\\\ LMNet [2] & & & 58.11 & 50.18 \\\\ LMNet [2] & ✓ & & 62.51 & 54.54 \\\\ LMNet\\({}^{*}\\)[2] & ✓ & & 63.82 & 60.45 \\\\ \\hline Ours-v1 & & & 63.17 & 60.21 \\\\ Ours-v1 & ✓ & & 68.07 & 62.53 \\\\ Ours-v1 & ✓ & ✓ & 66.93 & 69.27 \\\\ \\hline Ours-v2 & & & **71.42** & 64.86 \\\\ Ours-v2 & & ✓ & 69.28 & **70.16** \\\\ \\hline \\multicolumn{4}{l}{\\({}^{*}\\) indicates all frames in training split are used, without downsampling.} \\\\ \\end{tabular}
\\end{table} TABLE I: Evaluation and comparison of moving objects IoU on the validation set (seq08) and the benchmark test set.
additional data. Interestingly, the performance of the models trained with extra KITTI-road data decreased on the original validation set sequence 08, which indicates that the performance/improvement of the validation and test sets are not strictly positively correlated.
### _Runtime and Efficiency_
The runtime is evaluated on sequence 08 (about 122k points per scan) with an Intel Xeon Silver 4210R CPU @ 2.40 GHz and a single NVIDIA RTX 3090 GPU. In Tab. IV, we report the average runtime of several baseline methods and ours. Due to serial processing, the PointHead block adds to the total time. We therefore also provide a lightweight PointHead-Lite (\\({}^{\\dagger}\\)) with fewer layers to trade off accuracy and speed, which is faster than the 10 Hz frame rate of a typical 3D LiDAR sensor (e.g., Velodyne or Ouster).
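A typical way to measure such per-scan GPU runtimes, assuming a PyTorch model and synchronized timing, is sketched below; the exact measurement protocol behind Tab. IV is not specified in the text.

```python
import time
import torch

def average_runtime_ms(model: torch.nn.Module, scans,
                       device: str = "cuda") -> float:
    """Average forward time per scan in milliseconds."""
    model.eval().to(device)
    times = []
    with torch.no_grad():
        for x in scans:
            x = x.to(device)
            if device == "cuda":
                torch.cuda.synchronize()   # exclude queued async work
            t0 = time.perf_counter()
            model(x)
            if device == "cuda":
                torch.cuda.synchronize()   # wait for the forward pass
            times.append((time.perf_counter() - t0) * 1000.0)
    return sum(times) / len(times)
```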
## V Conclusion
In this paper, we have presented a novel and effective network for LiDAR-based online moving object segmentation. Our method uses a dual-branch structure to better explore and fuse the spatial and temporal information that can be obtained from sequential LiDAR data. A point refinement module is designed to significantly reduce the boundary-blurring artifacts of the objects, and this coarse-to-fine strategy enables
\\begin{table}
\\begin{tabular}{c|l|c|c|c|c|c|c} \\hline & Baseline and components & w/o kNN & \\(\\Delta\\) & w/ kNN & \\(\\Delta\\) & w/ PointHead & \\(\\Delta\\) \\\\ \\hline \\hline \\((a)\\) & LMNet (with SalsaNext) & 58.11 & & 62.51 & & **64.05** & \\\\ \\hline \\((b)\\) & + DualBranchWithMGAtten & 60.85 & +2.74 & 65.50 & +2.99 & 67.74 & +3.69 \\\\ \\((c)\\) & + DualBranchWithMGAtten + SoftPool & 61.65 & +3.54 & 66.27 & +3.76 & 68.52 & +4.47 \\\\ \\((d)\\) & + DualBranchWithMGAtten + MetaKernel & 62.75 & +4.64 & 67.57 & +5.06 & 70.26 & +6.21 \\\\ \\((e)\\) & + DualBranchWithMGAtten + MetaKernel + SoftPool & 63.17 & +5.06 & 68.07 & +5.56 & 71.42 & +7.37 \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: Ablation study of components on the validation set (seq08). "\\(\\Delta\\)" shows the improvement compared to the vanilla baseline (\\(a\\)).
\\begin{table}
\\begin{tabular}{c|l|c|c|c|c|c} \\hline \\hline & & & & \\multicolumn{2}{c|}{**validation**} & \\\\ & **Methods** & kNN & road & seq08 & seq08+road & **test** \\\\ \\hline \\((i)\\) & LMNet [2] & ✓ & & 62.51 & - & 54.54 \\\\ \\((ii)\\) & LMNet\\({}^{\\star}\\)[2] & ✓ & & 63.82 & - & 60.45 \\\\ \\((iii)\\) & Ours-v1 & ✓ & & 68.07 & - & 62.53 \\\\ \\hline \\((iv)\\) & LMNet [2] & ✓ & ✓ & 54.26 & 81.80 & 62.98 \\\\ \\((v)\\) & Ours-v1 & ✓ & ✓ & 66.93 & 84.67 & 69.27 \\\\ \\hline \\multicolumn{7}{l}{\\({}^{\\star}\\) indicates all frames in training split are used, without downsampling.} \\\\ \\end{tabular}
\\end{table} TABLE III: Ablation on extra data and different validation setups.
Fig. 6: Qualitative results of different methods for LiDAR-MOS on the validation set of the SemanticKITTI-MOS dataset. Blue circles highlight incorrect predictions and blurred boundaries. Best viewed in color and zoom in for details.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline LMNet & Cylinder3D & Ours-v1 & Ours-v2 & Ours-v2\\({}^{\\dagger}\\) \\\\ \\hline
13.31 & 124.38 & 41.81 & 116.93 & 85.25 \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE IV: Comparison of running time (ms) with baseline methods.
our method to operate online. We additionally annotated the KITTI-road dataset to enrich the training data, which enhanced the generalization ability of the model. Experimental results on the SemanticKITTI-MOS dataset demonstrate the state-of-the-art performance of our proposed method.
## References
* [1] S. Mohapatra, S. Yogamani, S. Milz, P. Maeder, H. Gotzig, M. Simon, and H. Rashed. LiMoSeg: Real-time bird's eye view based LiDAR motion segmentation. arXiv preprint, arXiv:2111.04875, 2021.
* [2] A. Dave, P. Tokmakov, and D. Ramanan. Towards segmenting anything that moves. In Proc. of the IEEE Intl. Conf. on Computer Vision Workshops, 2019.
* [3] M. Berman, A. Rannen Triki, and M. B. Blaschko. The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
* [18] A. Milioto, I. Vizzo, J. Behley, and C. Stachniss. RangeNet++: Fast and accurate LiDAR semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019.
* [27] S. Li, X. Chen, Y. Liu, D. Dai, C. Stachniss, and J. Gall. Multi-scale interaction for real-time LiDAR data segmentation on an embedded platform. IEEE Robotics and Automation Letters (RA-L), 7(2):738-745, 2022.
# Need Safer Taxi Drivers? Use Psychological Characteristics to Find or Train!
Kayvan Aghabayk 1, Leila Mashhadizade 1 and Sara Moridpour 2
1 School of Civil Engineering, College of Engineering, University of Tehran, Tehran 15119-43943, Iran
2 Civil and Infrastructure Engineering Discipline, School of Engineering, RMIT University, Melbourne 3000, Australia
Received: 23 April 2020; Accepted: 15 May 2020; Published: 20 May 2020
Self-report questionnaires are a widely used approach to scale-measure human behavior. Among these questionnaires, the driver behavior questionnaire (DBQ) is widely used for measuring self-reported driving behavior [12]. The DBQ is one of the most widely implemented measurement scales for evaluating self-reported aberrant driving behavior [13]. It is designed to classify aberrant driving behaviors into specific categories, which can be used by both researchers and industry personnel to investigate drivers' behavior and examine the factors associated with crashes.
As mentioned earlier, driving behavior characteristics influence traffic safety [14]. A limited number of studies have considered the impact of drivers' psychological characteristics (e.g., subjective norm, instrumental attitudes, and sensation seeking) or demographic information (e.g., age and gender) on driving behavior [15; 16; 17; 18; 19; 20]. However, only a few psychological characteristics have been investigated in the literature. This study contributes to answering the question of which psychological characteristics are related to aberrant driving behaviors: Do psychological characteristics of drivers affect their driving? If so, which driving behaviors does each psychological characteristic affect? Moreover, this study focuses on taxi drivers' driving behavior and their psychological characteristics because taxis play a very important role in developing countries such as Iran. Taxis in developing countries differ from those in developed countries and are often used to compensate for a shortage of public transportation. In those countries, taxi drivers are at high risk of road fatalities and are also responsible for a large proportion of road crashes [21; 22]. However, the prevalence of taxi accidents varies by region and country. For instance, a study in Vietnam reported an overall crash prevalence of 22.7% among 1214 taxi drivers for the period of 2006-2009 [23]. A report in Africa indicated that among 712 taxi drivers, 26.4% had been involved in a crash within 3 years [24].
The driving behaviors considered in this study include violations, aggressive violations, errors, and lapses. The drivers' psychological characteristics considered in this study include instrumental attitude, subjective norm, sensation seeking, aggressive mode, conscientiousness, life satisfaction, premeditation, urgency, and selfishness [2; 25].
This paper is structured as follows. Section 2 presents a review of the literature on driving behavior and the effects of psychological characteristics on the behavior of drivers. Section 3 presents the research methodology, dataset, and data collection procedures. It is followed by the presentation and discussion of the results from statistical analysis. Finally, the paper is concluded, and the future directions of the research are presented.
## 2 Literature Review
### Driver Behavior Questionnaire (DBQ)
The original DBQ was developed by Reason et al. [15] to determine the extent of the human contribution to accidents, and it comprised 50 questions. The original DBQ focused on only two distinct behaviors, errors and violations [15]; the scale was later modified to include "slips and lapses" [13]. In order to evaluate people's driving behavior, intentional violations were separated from unintentional violations. Intentional violations were defined as deliberate deviations from the actions required for the safety of the traffic system. Unintentional violations include errors and lapses, which can lead to unexpected results for the driver. Errors were described as the failure of planned outcomes. For instance, an error would occur when a driver fails to notice pedestrians crossing the road when turning into a side-street from a main road. Lapses are those unintentional violations that occur due to memory or attention failures and can lead to an accident [10]. There are different versions of the DBQ which differ in the structure of the questionnaire, divide aberrant driving behaviors into different categories, and pose questions based on each group's characteristics. In all versions of the DBQ, the rate of aberrant driving behaviors is scored [19; 26].
In the DBQ, respondents indicate how often they have experienced a specific situation. The structure of the questionnaire in each study depends on different factors, including the target country's driving culture, laws, and specific conditions. Parker used a shorter version of the original DBQ presented by Reason and colleagues; the only difference in the factor structure of this DBQ version was moving two items from 'lapses' to 'errors' [27]. Blockey and Hartley [16] applied the method introduced by Reason et al. [15] to Western Australian drivers and compared the findings with the results of Reason et al.'s research; the differences between the results of those studies were mainly due to the differences between the driving characteristics of the drivers [16]. Lawton et al. [28] conducted a study using a shortened version of the original DBQ presented by Reason et al. [15], and the results demonstrated that in the younger population of drivers, three factors, namely errors, Highway Code violations, and more interpersonally aggressive violations, were the most dominant. Aberg and Rimmo [17] added new factors to the original DBQ presented by Reason and colleagues to reflect the driving conditions in Sweden. Their questionnaire evaluated sensation seeking, the tendency to engage in risky behaviors (violations, mistakes, inattention, and inexperience errors), traffic offences, and accident involvement. In that study, the "lapses" factor of earlier studies was divided into two new factors: "errors due to inattention" and "errors due to inexperience".
### Driving Behavior Modelling
Most articles have examined the relationship between drivers' demographic information (e.g., age and gender) and driving behavior using linear regression analysis or multi-group analysis of measurement invariance [15; 16; 17; 18; 29; 30; 31]. The results indicated that with increasing age, violations decline but errors increase. Additionally, comparisons between males' and females' driving behavior have shown that men commit more violations than women, but women's errors are more frequent than men's. Lifestyle traits such as 'religion/tradition', 'driving aimlessly', 'sports', and 'culture' were found to be significant predictors of driving behavior in a multiple regression analysis [32]. Reimer found that attention deficit hyperactivity disorder (ADHD) and age are significantly related to error, lapse, and violation scores [33].
### Aberrant Driving Behavior among Taxi Drivers
Most studies on the driving behaviors of taxi drivers have been conducted in developed countries [20; 34; 35], and due to the differences between taxis in these countries and in developing countries, their results are not directly transferable to developing countries such as Iran. Few studies conducted in developing countries have examined the driving behaviors of taxi drivers [21; 36]. According to the literature, risk-taking and risky driving are among the most important factors in the occurrence of accidents [1; 29; 37; 38; 39; 40; 41; 42; 43; 44]. Previous studies investigated the influence of sensation seeking and drivers' attitude on risky driving behavior [37; 38; 42; 43; 44], the impact of drivers' attitude towards driving rules and speeding on their level of risky driving [45], and the influence of emotional intelligence on risky driving behaviors [1]. However, few articles have discussed the correlation between driving behavior and the psychological characteristics of drivers, which is the focus of this paper. Further, there have been limited studies evaluating the impact of selfishness and life satisfaction on driving behavior. This study investigates the influence of the most common human psychological factors on drivers' behavior, including violations, aggressive violations, errors, and lapses. Besides, little attention has been paid to taxi drivers' psychological characteristics in developing countries, and no study has examined the relationship between these characteristics and the driving behaviors of taxi drivers in these countries. The questions, then, are: Do psychological characteristics of drivers affect their driving? If so, which driving behavior does each psychological characteristic affect, and is the effect negative or positive? The present study attempts to answer these questions and shows the importance of paying attention to the psychological characteristics of taxi drivers.
## 3 Methods
### Participants
A sample of 245 Iranian taxi drivers was used in this research. All of them were male (taxi drivers in Iran are mostly male, and few female drivers work as taxi drivers), and their ages ranged from 20 to 75 years (mean age = 46.8, standard deviation = 12.11). More than 70% of the respondents had been taxi drivers for more than ten years. Regarding educational attainment, 34% of respondents had undertaken tertiary studies, and 6% were illiterate. Participants were selected from 10 random taxi stations in different areas of Metropolitan Tehran, which were geographically and culturally well-distributed. All the questionnaires were completed through face-to-face interviews. Interviewing taxi drivers face-to-face led drivers to answer all the questions, so no records were incomplete or unusable. Furthermore, if a question was unclear, it was explained to them. Another advantage of this type of interview is that a significant number of taxi drivers in Iran are elderly or retired people who prefer to be asked questions instead of filling out the questionnaires themselves.
### Data Collection
The questionnaires were distributed among taxi drivers at taxi stations selected from different areas of Metropolitan Tehran. The questionnaire consisted of three sections. The first section included questions on demographic characteristics of drivers such as age, education, and driving experience. The second section employed a version of the DBQ calibrated to Iran's driving conditions [46], derived from the original version [15], with a four-factor structure comprising violations, aggressive violations, lapses, and errors. The participants had to indicate how often they engaged in each type of aberrant driving behavior (1 = never, 3 = few times, 5 = occasionally, and 7 = all the time). The item scores were summed, and higher scores indicate more frequent aberrant driving, as sketched below. The third section was added to relate the drivers' psychological characteristics to their driving behavior; the psychological characteristics investigated in that section comprised nine specifications: instrumental attitude, subjective norm, life satisfaction, sensation seeking, premeditation, urgency, selfishness, aggressive mode, and conscientiousness. The questions in that section were adapted from existing questionnaires in the field of psychology [2; 25].
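As a sketch of the scoring step, assuming a matrix of 7-point Likert responses and a hypothetical item-to-factor mapping (the calibrated DBQ defines the actual one):

```python
import numpy as np

# hypothetical mapping from questionnaire item columns to DBQ factors
SCALES = {"violations": [0, 1, 2], "aggressive_violations": [3, 4],
          "errors": [5, 6, 7], "lapses": [8, 9]}

def score_dbq(responses: np.ndarray) -> dict:
    """Sum Likert responses (n_drivers x n_items) per DBQ factor;
    higher totals indicate more frequent aberrant driving."""
    return {name: responses[:, items].sum(axis=1)
            for name, items in SCALES.items()}
```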
### Statistical Analysis
#### 3.3.1 Cronbach's Alpha Coefficients
To determine the reliability of this survey, the internal consistency of the scales was measured for the last two sections of the questionnaire (driving behavior and psychological characteristics) using Cronbach's alpha coefficient. Cronbach's alpha is a measure of internal consistency that shows how closely a set of items are related as a group. Note that a high alpha indicates strong inter-item correlation but does not imply that the measure is unidimensional [47]. A Cronbach's alpha of 0.6 to 0.7 is acceptable [48].
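Cronbach's alpha can be computed directly from the item response matrix; a short sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```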
#### 3.3.2 Multiple Regression Model
A standard multiple regression model is used to determine the relationships between the drivers' psychological characteristics, their age, and their driving behavior. In this kind of model, each variable can be understood and interpreted directly from its coefficient without very complicated calculations. In addition, according to the literature, most studies investigating driving behavior have used standard multiple regression.
The model shows the impact of psychological characteristics on driving behavior. All psychological characteristic subscales were entered as independent variables, and the DBQ subscales were entered as dependent variables. Thus, four distinct multiple regression models were developed for general violations, aggressive violations, errors, and lapses. The significance of influential paths was tested by p-value; all tests were two-tailed with a significance threshold of \\(p<0.05\\). All statistical analyses were completed using IBM SPSS version 25 [49].
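A sketch of this setup; the analysis in the paper was run in SPSS, so here we assume Python/statsmodels and our own column names purely for illustration:

```python
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["instrumental_attitude", "social_anxiety", "sensation_seeking",
              "aggressive_mode", "conscientiousness", "life_satisfaction",
              "premeditation", "urgency", "selfishness", "age"]
OUTCOMES = ["violations", "aggressive_violations", "errors", "lapses"]

def fit_dbq_models(df: pd.DataFrame) -> dict:
    """One standard multiple regression per DBQ factor (two-tailed,
    p < 0.05 significance threshold, as in the text)."""
    X = sm.add_constant(df[PREDICTORS])
    return {y: sm.OLS(df[y], X).fit() for y in OUTCOMES}
```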
## 4 Results
Table 1 shows the internal consistency, means, and standard deviations for all driving behavior and psychological characteristic scales. As presented in this table, Cronbach's alpha ranged from 0.6 to 0.85 for all scales except selfishness. This shows that the reliability of the scales is generally acceptable and confirms that the survey approach is valid. The exception is the Cronbach's alpha coefficient for selfishness, which is less than 0.6; a probable explanation is that the selfishness measure consists of only a few questions [50]. Comparing the means, the highest mean score is for Violations (\\(M=3.333\\), \\(\\pm 1.811\\)), followed by Aggressive Violations (\\(M=2.976\\), \\(\\pm 1.831\\)), Lapses (\\(M=2.881\\), \\(\\pm 1.652\\)), and then Errors (\\(M=2.697\\), \\(\\pm 1.530\\)).
### General Violations
As previously mentioned, the drivers' age and psychological characteristic scales were used as independent variables, and the DBQ sub-scales were used as dependent variables in the model development. The summary of the results for each model is presented as follows.
According to the results from the regression model, social anxiety and selfishness were the strongest significant predictors of violations; sensation seeking, aggressive mode, premeditation, and age also had significant but smaller effects, while instrumental attitude, conscientiousness, urgency, and life satisfaction were not significant predictors of violations. The model predicted 41% of the variation in violations (\\(F(10,245)=18.22,p<0.05\\)).
Interestingly, the results show that the drivers who scored higher on social anxiety committed fewer violations while driving. This could be because they are often worried that people will judge or negatively evaluate them. Meanwhile, selfish people care only about their own benefit and have less consideration for other people, so they may break rules that are against their interests. The concept of sensation seeking applies to individuals who are looking for excitement and new experiences. As presented in Table 2, the results of this study show that an increase in the rate of sensation seeking may cause an increase in the level of violations. Results from previous studies are consistent with these findings. Studies by Rimmo and Aberg [10] showed that violation is the only factor that had a positive association with both sensation seeking scales (thrill and adventure seeking, and disinhibition); the other factors, errors and lapses, only had a positive association with disinhibition [10; 41; 42; 51]. In this study, both sensation-seeking scales were considered as one scale; thus, sensation seeking is an effective predictor for all DBQ scales.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
 & **Behavior and Psychological Factors** & **Mean** & **SD** & **Cronbach's Alpha** \\\\ \\hline \\multirow{4}{*}{**Driving behavior scales**} & Violations & 3.333 & 1.811 & 0.736 \\\\ & Aggressive violations & 2.976 & 1.831 & 0.734 \\\\ & Errors & 2.697 & 1.530 & 0.738 \\\\ & Lapses & 2.881 & 1.652 & 0.701 \\\\ \\hline \\multirow{9}{*}{**Psychological factors**} & Instrumental attitude & 0.376 & 0.931 & 0.642 \\\\ & Social anxiety & 0.190 & 0.980 & 0.756 \\\\ & Sensation seeking & 0.253 & 0.972 & 0.734 \\\\ & Aggressive mode & 0.176 & 0.981 & 0.750 \\\\ & Conscientiousness & 0.299 & 0.960 & 0.652 \\\\ & Life satisfaction & 0.223 & 0.972 & 0.611 \\\\ & Premeditation & 0.216 & 0.980 & 0.841 \\\\ & Urgency & 0.236 & 0.971 & 0.660 \\\\ & Selfishness & 0.104 & 0.990 & 0.502 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Internal consistency (Cronbach's alpha), means, and standard deviations for the driving behavior scales and psychological factors.
Aggressive mode had a positive correlation with violations, possibly due to the specific attitude that aggressive people have toward competitive driving, time urgency, and hyper-competitiveness, which is consistent with findings from previous studies [29]. The results demonstrate that individuals who described themselves as more responsible, reliable, self-disciplined, and dependable (more conscientious) are less involved in driving accidents [52; 53]. Little previous research has examined the impact of conscientiousness on driving behavior. The results of this study show that taxi drivers who are more conscientious committed fewer violations; those who consider themselves responsible are more respectful of the rules and less likely to commit violations.
### Aggressive Violations
The model for aggressive driving violations predicted 51% of the variation and was significant overall (\\(F(10,245)=27,p<0.05\\)). As expected, aggressive mode had the most significant impact on this type of driving behavior (\\(p<0.001\\)), followed by instrumental attitude, social anxiety, sensation seeking, life satisfaction, urgency, and selfishness (\\(p<0.05\\)). The three factors of conscientiousness, premeditation, and age were not significant predictors of aggressive violations. Aggressive violations are related to general violations to the degree that some studies even consider these two behaviors as one. According to the results, psychological characteristics such as social anxiety, sensation seeking, and aggressive mode were significant predictors of both kinds of violations. Among these factors, the impact of aggressive mode on aggressive violations is higher than on general violations. Previous research shows that anger is generally associated with aggressive driving and aggressive violations [54]. Dula et al. [55] showed that with higher social anxiety, drivers make more effort to maintain a positive public image and are less likely to act aggressively even when they are angry. This behavior was also confirmed in this study.
The results from Table 3 indicate that instrumental attitude had a significant negative impact on aggressive violations, which shows that those who are aware of the results of their behaviors and care about their behavior are less likely to behave aggressively. Previous studies showed that attitude towards rule violations and speeding has a direct significant impact on both violation and aggressive violation factors [21; 56; 57]. However, the results from the current study show that instrumental attitude was not a significant predictor of violation, but it was a significant predictor for aggressive violation. This may be due to an overlap with other predictor variables in the model which were not considered by the previous studies.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Violations** & **Beta** & **t** & \\(p\\)**-Value** & **Partial Correlations** \\\\ \\hline instrumental attitude & 0.057 & 0.956 & 0.340 & 0.062 \\\\ social anxiety & \\(-\\)0.258 & \\(-\\)4.380 & 0.000 & \\(-\\)0.275 \\\\ sensation seeking & 0.152 & 2.462 & 0.015 & 0.159 \\\\ Aggressive mode & 0.166 & 2.686 & 0.008 & 0.173 \\\\ Conscientiousness & \\(-\\)0.106 & \\(-\\)1.945 & 0.053 & \\(-\\)0.126 \\\\ Life satisfaction & \\(-\\)0.049 & \\(-\\)0.837 & 0.403 & \\(-\\)0.055 \\\\ Premeditation & \\(-\\)0.158 & \\(-\\)2.710 & 0.007 & \\(-\\)0.174 \\\\ Urgency & 0.105 & 1.899 & 0.059 & 0.123 \\\\ Selfishness & 0.221 & 3.974 & 0.000 & 0.251 \\\\ Age & \\(-\\)0.116 & \\(-\\)2.261 & 0.025 & \\(-\\)0.146 \\\\ \\hline Adjusted R Square & & & 0.414 & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Predicting violations with psychological characteristics.
### Errors
The multiple regression model related to errors shows that urgency was the most significant predictor of this type of driving behavior (\\(p<0.001\\)), followed by sensation seeking. The model predicted 31% of the variation in errors (\\(F(10,245)=8.3,p<0.05\\)). The other factors were not significant in predicting errors.
The results from Table 4 show that drivers who cannot control panic during an emergency (lower urgency) are more likely to commit errors, because they are more focused on the emergency itself and may not be able to control their stress. The resulting lack of attention to the surrounding environment makes them prone to erroneous behavior. An increase in the rate of sensation seeking also led to an increase in errors, which may be because individuals who are more enthusiastic about new experiences pay less attention to their surroundings.
### Lapses
The last multiple regression model, which is related to lapses, indicates that life satisfaction, sensation seeking, age, urgency, and conscientiousness were the best predictors in this model, respectively; the other factors were not significant predictors. Overall, the model predicted 21% of the variation in lapses (\\(F(10,245)=7.5,p<0.05\\)). This model is less accurate and predicted a smaller percentage of variation compared with the three previous models developed in this study; however, the model is still significant, and the level of significance is consistent with the literature [41; 42; 51].
As mentioned in the literature, lapses are mostly related to errors caused by an individual's forgetfulness, which can be due to aging. Besides, the results from Table 5 indicate that the level of life satisfaction can also affect this forgetfulness. The more a person is satisfied with their life,
\\begin{table}
\\begin{tabular}{c c c c c} \\hline
**Errors** & **Beta** & **t** & \\(p\\)**-Value** & **Partial Correlations** \\\\ \\hline instrumental attitude & \\(-0.053\\) & \\(-0.778\\) & \\(0.438\\) & \\(-0.051\\) \\\\ social anxiety & \\(0.120\\) & \\(1.786\\) & \\(0.075\\) & \\(0.116\\) \\\\ sensation seeking & \\(0.214\\) & \\(3.020\\) & \\(0.003\\) & \\(0.194\\) \\\\ Aggressive mode & \\(-0.126\\) & \\(-1.785\\) & \\(0.075\\) & \\(-0.116\\) \\\\ Conscientiousness & \\(-0.073\\) & \\(-1.173\\) & \\(0.242\\) & \\(-0.076\\) \\\\ Life satisfaction & \\(-0.117\\) & \\(-1.761\\) & \\(0.080\\) & \\(-0.114\\) \\\\ Premeditation & \\(0.002\\) & \\(0.036\\) & \\(0.972\\) & \\(0.002\\) \\\\ Urgency & \\(-0.383\\) & \\(-6.055\\) & \\(0.000\\) & \\(-0.368\\) \\\\ Selfishness & \\(0.055\\) & \\(0.859\\) & \\(0.391\\) & \\(0.056\\) \\\\ Age & \\(0.035\\) & \\(0.589\\) & \\(0.557\\) & \\(0.038\\) \\\\ \\hline Adjusted R Square & & & \\(0.310\\) & \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Predicting errors with psychological characteristics.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline
**Aggressive Violations** & **Beta** & **t** & \\(p\\)**-Value** & **Partial Correlations** \\\\ \\hline instrumental attitude & \\(-0.167\\) & \\(-3.116\\) & \\(0.002\\) & \\(-0.200\\) \\\\ social anxiety & \\(-0.168\\) & \\(-3.135\\) & \\(0.002\\) & \\(-0.201\\) \\\\ sensation seeking & \\(0.140\\) & \\(2.495\\) & \\(0.013\\) & \\(0.161\\) \\\\ Aggressive mode & \\(0.402\\) & \\(7.163\\) & \\(0.000\\) & \\(0.424\\) \\\\ Conscientiousness & \\(0.049\\) & \\(0.997\\) & \\(0.320\\) & \\(0.065\\) \\\\ Life satisfaction & \\(-0.104\\) & \\(-1.986\\) & \\(0.048\\) & \\(-0.129\\) \\\\ Premeditation & \\(-0.074\\) & \\(-1.400\\) & \\(0.163\\) & \\(-0.091\\) \\\\ Urgency & \\(0.115\\) & \\(2.301\\) & \\(0.022\\) & \\(0.149\\) \\\\ Selfishness & \\(0.107\\) & \\(2.117\\) & \\(0.035\\) & \\(0.137\\) \\\\ Age & \\(-0.072\\) & \\(-1.549\\) & \\(0.123\\) & \\(-0.101\\) \\\\ \\hline Adjusted R Square & & & \\(0.517\\) & \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Predicting aggressive violations with psychological characteristics.
the less likely they are to commit lapses. The effects of sensation seeking and urgency are likely similar to their effects on errors, because lapses are considered a kind of error.
## 5 Limitations
The project had to be completed within a certain time limit, and the time-consuming procedure of interviewing taxi drivers prevented us from collecting more samples. Increasing the sample size could enhance the reliability of the study findings. Furthermore, in Iran, less than 1 percent of taxi drivers are women; thus, it was practically impossible to consider female drivers in this study, although including female drivers might affect the results. In addition, the presented analysis is specific to one country and may vary from place to place; it would be worth analyzing different samples and statistical methods. Another important point is that drivers' actual behavior may differ from what they stated in the questionnaire.
## 6 Conclusions
Driving behavior is one of the contributing factors in road accidents and the consequent casualties, and drivers' psychological characteristics affect their driving behavior. The current study used a questionnaire survey to collect reliable data on driving behavior and psychological characteristics among taxi drivers of Metropolitan Tehran. The questionnaire was produced by modifying the original DBQ according to Iran's driving conditions. In this study, the impact of nine psychological factors (instrumental attitude, subjective norm, life satisfaction, sensation seeking, premeditation, urgency, selfishness, aggressive mode, and conscientiousness) on each type of driving behavior (violations, aggressive violations, errors, and lapses) was investigated and compared. As a result, social anxiety and selfishness were the best predictors of violations, aggressive mode was a significant predictor of aggressive violations, urgency was the strongest predictor of errors, and finally, life satisfaction, sensation seeking, conscientiousness, age, and urgency were the best predictors of lapses.
The findings can be useful for managers who need to employ or train taxi drivers. They can use the results of this study to predict drivers' behavior in order to employ safer drivers. In addition, improving taxi drivers' driving behavior increases users' satisfaction with this mode of transportation. Besides, in countries where taxi companies operate privately and compete with each other, considering the implications of this study helps to raise the quality level of the company and attract more customers. Companies can easily train and monitor their drivers by developing a testing and evaluation framework for taxi drivers based on the results of this study. It is worth mentioning that this attention to detail will pay off if their drivers are committed to the company for the long term. In summary, training taxi drivers based on their psychological characteristics as well as the types of violations that they generally commit may be the main application of this study in
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Lapses** & **Beta** & **t** & \\(p\\)**-Value** & **Partial Correlations** \\\\ \\hline instrumental attitude & 0.028 & 0.415 & 0.679 & 0.027 \\\\ social anxiety & \\(-\\)0.018 & \\(-\\)0.266 & 0.790 & \\(-\\)0.017 \\\\ sensation seeking & 0.181 & 2.527 & 0.012 & 0.163 \\\\ Aggressive mode & 0.033 & 0.456 & 0.649 & 0.030 \\\\ Conscientiousness & \\(-\\)0.128 & \\(-\\)2.019 & 0.045 & \\(-\\)0.131 \\\\ Life satisfaction & \\(-\\)0.274 & \\(-\\)4.082 & 0.000 & \\(-\\)0.258 \\\\ Premeditation & 0.024 & 0.363 & 0.717 & 0.024 \\\\ Urgency & \\(-\\)0.146 & \\(-\\)2.284 & 0.023 & \\(-\\)0.148 \\\\ Selfishness & 0.090 & 1.391 & 0.166 & 0.091 \\\\ Age & 0.146 & 2.445 & 0.015 & 0.158 \\\\ \\hline Adjusted R Square & & & 0.212 & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Predicting lapses with psychological characteristics.
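For readers who wish to reproduce analyses of this form, the computation behind Table 5 is a standard multiple linear regression of a DBQ factor score on the psychological scale scores. The following is a minimal sketch in Python using statsmodels; the file name and column names are hypothetical placeholders (the survey data itself is not public), and z-scoring the variables yields the standardized betas reported above.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names; assumes all columns are numeric scale scores.
df = pd.read_csv("taxi_survey.csv")
predictors = ["instrumental_attitude", "social_anxiety", "sensation_seeking",
              "aggressive_mode", "conscientiousness", "life_satisfaction",
              "premeditation", "urgency", "selfishness", "age"]

# z-score all variables so the fitted coefficients are standardized betas.
dfz = (df - df.mean()) / df.std()

X = sm.add_constant(dfz[predictors])   # design matrix with intercept
fit = sm.OLS(dfz["lapses"], X).fit()   # ordinary least squares

print(fit.params)        # standardized betas (column "Beta")
print(fit.tvalues)       # t statistics (column "T")
print(fit.pvalues)       # p-values
print(fit.rsquared_adj)  # adjusted R square
```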
Studying how ordinary drivers' psychological characteristics affect their driving behaviors, and discussing the significance of those effects, would be suitable material to steer future research in this direction.
**Author Contributions:** Conceptualization, K.A.; methodology, K.A. and L.M.; data collection, K.A. and L.M.; formal analysis, K.A., L.M. and S.M.; investigation, K.A., L.M. and S.M.; writing--original draft preparation, K.A., L.M.; writing--review and editing, L.M., S.M.; supervision, K.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.
# Reconstruction of \(5D\) Cosmological Models From Recent Observations
Chengwu Zhang
[email protected]
Lixin Xu
Yongli Ping and Hongya Liu
[email protected], School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024, P.R. China
## 1 Introduction
Observations of Cosmic Microwave Background (CMB) anisotropies[1], high redshift type Ia supernovae[2] and the surveys of clusters of galaxies[3] indicate that an exotic component with negative pressure, dubbed dark energy, dominates the present universe. The most obvious candidate for this dark energy is the cosmological constant \(\Lambda\) with equation of state (\(w_{\Lambda}=-1\)), which is consistent with recent observations[1, 4] in the \(2\sigma\) region. However, it raises several theoretical difficulties[5, 6]. This has led to models for dark energy which evolve with time, such as quintessence[7], phantom[8], quintom[9], K-essence[10], tachyonic matter[11] and so on. For this kind of model, one can design many kinds of potentials[12] and then study the EOS of the dark energy. Another way is to use a parameterization of the EOS to fit the observational data, and then to reconstruct the potential and the evolution of the universe[13]. Various parameterizations of the EOS of dark energy have been presented and investigated[14, 15, 16, 17].
If the universe has more than four dimensions, general relativity should be extended from \(4D\) to higher dimensions. One such extension is the \(5D\) Space-Time-Matter (STM) theory[18, 19], in which our universe is a \(4D\) hypersurface floating in a \(5D\) Ricci-flat manifold. This theory is supported by Campbell's theorem, which states that any analytical solution of the \(ND\) Einstein equations can be embedded in a \((N+1)D\) Ricci-flat manifold[20]. A class of cosmological solutions of the STM theory was given by Liu and Mashhoon[22]; these solutions were later restudied and shown to describe a bounce universe. It was shown that dark energy models, similar to the 4D quintessence and phantom ones, can also be constructed in this \(5D\) cosmological solution, in which the scalar field is induced from the \(5D\) vacuum[23, 24]. The purpose of this paper is to use a model-independent method to reconstruct a \(5D\) cosmological model and then study the evolution of the universe and the EOS of the dark energy, constrained by recent observational data: the latest observations of the 182 Gold SNe Ia [25], the 3-year WMAP CMB shift parameter [4, 26] and the SDSS baryon acoustic peak[27]. The paper is organized as follows. In Section 2, we briefly introduce the \(5D\) Ricci-flat cosmological solution and derive the densities for the two major components of the universe. In Section 3, we reconstruct the evolution of the model from cosmological observations. Section 4 is a short discussion.
## 2 Dark energy in the \(5D\) Model
The \(5D\) cosmological model has been described previously[21, 22, 28, 30]. In this paper we consider the case where the \(4D\) induced matter \(T^{\alpha\beta}\) is composed of two components: dark matter \(\rho_{m}\) and dark energy \(\rho_{x}\), which are assumed to be noninteracting. So we have
\\[\\frac{3\\left(\\mu^{2}+k\\right)}{A^{2}} = \\rho_{m}+\\rho_{x},\\] \\[\\frac{2\\mu\\dot{\\mu}}{A\\dot{A}}+\\frac{\\mu^{2}+k}{A^{2}} = -p_{m}-p_{x}, \\tag{1}\\]
with
\\[\\rho_{m} = \\rho_{m0}A_{0}^{3}A^{-3},\\quad p_{m}=0, \\tag{2}\\] \\[p_{x} = w_{x}\\rho_{x}. \\tag{3}\\]
From Eqs. (1) - (3) and for \\(k=0\\), we obtain the EOS of the dark energy
\[w_{x}=\frac{p_{x}}{\rho_{x}}=-\frac{2\mu\dot{\mu}/\left(A\dot{A}\right)+\mu^{2}/A^{2}}{3\mu^{2}/A^{2}-\rho_{m0}A_{0}^{3}A^{-3}}, \tag{4}\]
and the dimensionless density parameters
\\[\\Omega_{m} = \\frac{\\rho_{m}}{\\rho_{m}+\\rho_{x}}=\\frac{\\rho_{m0}A_{0}^{3}}{3 \\mu^{2}A}, \\tag{5}\\] \\[\\Omega_{x} = 1-\\Omega_{m}. \\tag{6}\\]
where \(\rho_{m0}\) is the current value of the dark matter density.
Consider Eq. (4), where \(A\) is a function of \(t\) and \(y\). However, on a given \(y=constant\) hypersurface, \(A\) becomes \(A=A(t)\), which means we consider a hypersurface embedded in a flat \(5D\) spacetime. As noticed before[29, 30], the term \(\dot{\mu}/\dot{A}\) in (4) can now be rewritten as \(d\mu/dA\). Furthermore, we use the relation
\\[A_{0}/A=1+z, \\tag{7}\\]
as an ansatz[29, 30] and define \\(\\mu_{0}^{2}/\\mu^{2}=f(z)\\) (with \\(f(0)\\equiv 1\\)), then we find that Eqs. (4)-(6) can be expressed in term of the redshift \\(z\\) as
\[w_{x}=-\frac{1+(1+z)\,d\ln f(z)/dz}{3(1-\Omega_{m})}, \tag{8}\]
\\[\\Omega_{m}=\\Omega_{m_{0}}(1+z)f(z), \\tag{9}\\]
\\[\\Omega_{x}=1-\\Omega_{m}, \\tag{10}\\]
\[q=-\frac{1+z}{2}\,d\ln f(z)/dz. \tag{11}\]
where \(q\) is the deceleration parameter and \(q<0\) means our universe is accelerating. We conclude that if the function \(w_{x}\) is given, the evolution of all the cosmic observable parameters in Eqs. (8) - (11) can be determined uniquely. We therefore adopt the following parametrization of the EOS[15, 31]
\\[w_{x}(z)=w_{0}+w_{1}\\frac{z}{1+z} \\tag{12}\\]
From Eq. (8) and Eq. (12), we can obtain the function \\(f(z)\\)
\\[f(z)=\\frac{1}{(1+z)\\left[\\Omega_{m0}+(1-\\Omega_{m0})(1+z)^{3w_{0}+3w_{1}}\\exp( -\\frac{3w_{1}z}{1+z})\\right]}. \\tag{13}\\]
In the next section, we will use the recent observational data to find the best fit parameters \((w_{0},w_{1},\Omega_{m0})\).
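As a concrete numerical illustration of Eqs. (8)-(13), the following minimal Python sketch evaluates \(f(z)\) and the derived quantities on a redshift grid, using the best fit values reported in the next section and a finite-difference approximation for \(d\ln f(z)/dz\).

```python
import numpy as np

Om0, w0, w1 = 0.288, -1.050, 0.824   # best fit values from Section 3
z = np.linspace(0.0, 2.0, 401)

# Eq. (13): f(z) implied by the EOS parametrization of Eq. (12)
f = 1.0 / ((1.0 + z) * (Om0 + (1.0 - Om0) * (1.0 + z) ** (3.0 * (w0 + w1))
                        * np.exp(-3.0 * w1 * z / (1.0 + z))))

Om = Om0 * (1.0 + z) * f                            # Eq. (9)
Ox = 1.0 - Om                                       # Eq. (10)
q = -0.5 * (1.0 + z) * np.gradient(np.log(f), z)    # Eq. (11)
wx = w0 + w1 * z / (1.0 + z)                        # Eq. (12)

# Approximate deceleration-to-acceleration transition (where q = 0).
print("transition redshift:", z[np.argmin(np.abs(q))])
```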
## 3 The best fit parameters from cosmological observations
In a flat universe with Eq. (12), the Friedmann equation can be expressed as
\[H^{2}(z)=H_{0}^{2}E(z)^{2}=H_{0}^{2}[\Omega_{m0}(1+z)^{3}+(1-\Omega_{m0})(1+z)^{3(1+w_{0}+w_{1})}e^{\frac{-3w_{1}z}{(1+z)}}] \tag{14}\]
Then the knowledge of \\(\\Omega_{m0}\\) and \\(H(z)\\) is sufficient to determine \\(w_{x}(z)\\) with \\(H_{0}=72\\ \\mathrm{km\\cdot s^{-1}\\cdot Mpc^{-1}}\\)[32]. We use the maximum likelihood method[33] to constrain the parameters.
The Gold dataset compiled by Riess et al. is a set of supernova data from various sources; it contains 182 gold points, obtained by discarding all SNe Ia with \(z<0.0233\) and all SNe Ia with quality='Silver' from previously published data, together with 21 new points with \(z>1\) recently discovered by the Hubble Space Telescope[25]. Theoretical model parameters are determined by minimizing the quantity
\[\chi^{2}_{SNe}(\Omega_{m0},w_{0},w_{1})=\sum_{i=1}^{N}\frac{(\mu_{obs}(z_{i})-\mu_{th}(z_{i}))^{2}}{\sigma^{2}_{(obs;i)}} \tag{15}\]

where \(N=182\) for the Gold SNe Ia data and \(\sigma^{2}_{(obs;i)}\) are the errors due to flux uncertainties, intrinsic dispersion of SNe Ia absolute magnitude, and peculiar velocity dispersion, respectively. These errors are assumed to be Gaussian and uncorrelated. The theoretical distance modulus is defined as
\[\mu_{th}(z_{i})\equiv m_{th}(z_{i})-M=5\log_{10}(D_{L}(z_{i}))+5\log_{10}\left(\frac{H_{0}^{-1}}{\mathrm{Mpc}}\right)+25 \tag{16}\]
where
\[D_{L}(z)=H_{0}d_{L}(z)=(1+z)\int_{0}^{z}\frac{H_{0}\,dz^{\prime}}{H(z^{\prime};\Omega_{m0},w_{0},w_{1})} \tag{17}\]
and \(\mu_{obs}\) is given by the supernovae dataset.
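A minimal sketch of evaluating Eqs. (14)-(17) numerically is given below; the arrays z_obs, mu_obs, and sigma_obs stand in for the Gold dataset, and the speed of light is restored explicitly when converting \(H_{0}^{-1}\) to Mpc in Eq. (16).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

c, H0 = 299792.458, 72.0   # km/s and km/s/Mpc

def E(z, Om0, w0, w1):
    # Dimensionless expansion rate of Eq. (14).
    return np.sqrt(Om0 * (1 + z) ** 3 + (1 - Om0) * (1 + z) ** (3 * (1 + w0 + w1))
                   * np.exp(-3 * w1 * z / (1 + z)))

def mu_th(z_obs, Om0, w0, w1):
    zg = np.linspace(0.0, z_obs.max(), 2000)
    integral = cumulative_trapezoid(1.0 / E(zg, Om0, w0, w1), zg, initial=0.0)
    DL = (1 + zg) * integral                      # Eq. (17), dimensionless
    dL_Mpc = (c / H0) * np.interp(z_obs, zg, DL)  # restore physical units
    return 5.0 * np.log10(dL_Mpc) + 25.0          # Eq. (16)

def chi2_sne(theta, z_obs, mu_obs, sigma_obs):
    Om0, w0, w1 = theta                           # Eq. (15)
    return np.sum((mu_obs - mu_th(z_obs, Om0, w0, w1)) ** 2 / sigma_obs ** 2)
```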
The shift parameter is defined as[34]
\[\bar{R}=\frac{l_{1}^{\prime TT}}{l_{1}^{TT}}=\frac{r_{s}}{r_{s}^{\prime}}\frac{d_{A}^{\prime}(z_{rec}^{\prime})}{d_{A}(z_{rec})}=\frac{2}{\Omega_{m0}^{1/2}}\frac{q(\Omega_{r}^{\prime},a_{rec})}{\int_{0}^{z_{rec}}\frac{H_{0}dz^{\prime}}{H(z^{\prime})}} \tag{18}\]
where \(z_{rec}\) is the redshift of recombination, \(r_{s}\) is the sound horizon, \(d_{A}(z_{rec})\) is the sound horizon angular diameter distance, and \(q(\Omega_{r}^{\prime},a_{rec})\) is the correction factor. Owing to the weak dependence on \(q(\Omega_{r}^{\prime},a_{rec})\), the shift parameter is usually expressed as
\[R=\Omega_{m0}^{1/2}\int_{0}^{z_{rec}}\frac{H_{0}dz^{\prime}}{H(z^{\prime};\Omega_{m0},w_{0},w_{1})} \tag{19}\]
The value of \(R\) obtained from the 3-year WMAP data[4, 26] is
\\[R=1.70\\pm 0.03 \\tag{20}\\]
With this measurement of \(R\), the \(\chi^{2}_{CMB}\) is expressed as
\\[\\chi^{2}_{CMB}(\\Omega_{m0},w_{0},w_{1})=\\frac{(R(\\Omega_{m0},w_{0},w_{1})-1.7 0)^{2}}{0.03^{2}} \\tag{21}\\]
The size of the Baryon Acoustic Oscillation (BAO) was found by Eisenstein et al.[27] using a large spectroscopic sample of luminous red galaxies from SDSS; they obtained a parameter \(A\) which does not depend directly on dark energy models and can be expressed as
\[A=\Omega_{m0}^{1/2}E(z_{BAO})^{-1/3}\left[\frac{1}{z_{BAO}}\int_{0}^{z_{BAO}}\frac{dz^{\prime}}{E(z^{\prime};\Omega_{m0},w_{0},w_{1})}\right]^{2/3} \tag{22}\]
where \\(z_{BAO}=0.35\\) and \\(A=0.469\\pm 0.017\\). We can minimize the \\(\\chi^{2}_{BAO}\\) defined as[35]
\\[\\chi^{2}_{BAO}(\\Omega_{m0},w_{0},w_{1})=\\frac{(A(\\Omega_{m0},w_{0},w_{1})-0.46 9)^{2}}{0.017^{2}} \\tag{23}\\]
To break the degeneracy of the observational data and find the best fit parameters, we combine these datasets to minimize the total likelihood \(\chi^{2}_{total}\) [36]

\[\chi^{2}_{total}=\chi^{2}_{SNe}+\chi^{2}_{CMB}+\chi^{2}_{BAO} \tag{24}\]
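A sketch of assembling and minimizing Eq. (24) follows, reusing E and chi2_sne from the previous sketch; the recombination redshift is set to \(z_{rec}=1089\), a standard value assumed here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def chi2_cmb(theta, z_rec=1089.0):
    Om0, w0, w1 = theta
    I, _ = quad(lambda zp: 1.0 / E(zp, Om0, w0, w1), 0.0, z_rec, limit=200)
    R = np.sqrt(Om0) * I                          # Eq. (19)
    return (R - 1.70) ** 2 / 0.03 ** 2            # Eqs. (20)-(21)

def chi2_bao(theta, z_bao=0.35):
    Om0, w0, w1 = theta
    I, _ = quad(lambda zp: 1.0 / E(zp, Om0, w0, w1), 0.0, z_bao)
    A = np.sqrt(Om0) * E(z_bao, Om0, w0, w1) ** (-1.0 / 3.0) * (I / z_bao) ** (2.0 / 3.0)
    return (A - 0.469) ** 2 / 0.017 ** 2          # Eqs. (22)-(23)

def chi2_total(theta, z_obs, mu_obs, sigma_obs):  # Eq. (24)
    return (chi2_sne(theta, z_obs, mu_obs, sigma_obs)
            + chi2_cmb(theta) + chi2_bao(theta))

# result = minimize(chi2_total, x0=[0.28, -1.0, 0.5],
#                   args=(z_obs, mu_obs, sigma_obs), method="Nelder-Mead")
```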
We obtain the best fit values \((\Omega_{m0},w_{0},w_{1})=(0.288,-1.050,0.824)\). To identify the dependence of the best fit values on the parameters, we fix \(\Omega_{m0}\) when calculating the confidence level of \((w_{0},w_{1})\). The errors of the best fit \(w_{x}(z)\) are calculated using the covariance matrix[37] and shown in Fig. 1. The corresponding \(\chi^{2}\) contours in the \((w_{0},w_{1})\) parameter space are shown in Fig. 2.
From Fig. 1 we find that \(w_{x}(z)\) is constrained to a narrow band; the best fit \(w_{x}(z)\) crosses \(-1\) at about \(z=0.07\), and at present the best fit value satisfies \(w_{x}(0)<-1\), although at the \(1\sigma\) confidence level we cannot rule out the possibility \(w_{x}(0)>-1\). Fig. 2 shows that a cosmological constant is ruled out at the \(1\sigma\) confidence level.
Using the function \(f(z)\) and the best fit values \((\Omega_{m0},w_{0},w_{1})\), we obtain \(\Omega_{m}\), \(\Omega_{x}\), and the deceleration parameter \(q\) from Eqs. (9)-(11); their evolution is plotted in Fig. 3. Fig. 3 also shows the evolution of \(q_{\Lambda CDM}\), \(\Omega_{m-\Lambda CDM}\), and \(\Omega_{\Lambda}\) in a \(4D\) flat \(\Lambda\)CDM model with the present \(\Omega_{m0-\Lambda CDM}=0.283\) obtained from the above cosmological observations. We can see that the transition point from decelerated to accelerated expansion, where \(q=0\), is at \(z\simeq 0.5\), earlier than in the \(\Lambda\)CDM model. Our universe experiences an accelerated expansion at present, whether on a \(4D\) hypersurface embedded in a \(5D\) Ricci-flat spacetime or in the \(\Lambda\)CDM model.
## 4 Discussion
Observations indicate that our universe now is dominated by two dark components: dark energy and dark matter. The \(5D\) cosmological solution presented by Liu, Mashhoon and Wesson in [21] and [22] contains two arbitrary functions \(\mu(t)\) and \(\nu(t)\); one of the two functions, \(\mu(t)\), plays a similar role as the potential \(V(\phi)\) in the
Figure 1: The best fits of \\(w_{x}(z)\\) with \\(1\\sigma\\) errors (shaded region).
quintessence and phantom dark energy models, and it can readily be recast in terms of another arbitrary function \(f(z)\). Thus, if the current values of the matter density parameter \(\Omega_{m0}\) and of \(w_{0}\) and \(w_{1}\) in the EOS are all known, this \(f(z)\) can be determined uniquely. In this paper we mainly focus on the constraints on this model from recent observational data: the 182 Gold SNe Ia, the 3-year WMAP CMB shift parameter and the SDSS baryon acoustic peak. Our results show that the recent observations allow for a narrow variation of the dark energy EOS and that the best fit dynamical \(w_{x}(z)\) crosses \(-1\) in the recent past. Using the best fit values \((\Omega_{m0},w_{0},w_{1})\), we have studied the evolution of the dark matter density \(\Omega_{m}\), the dark energy density \(\Omega_{x}\) and the deceleration parameter \(q\) on a \(4D\) hypersurface of the \(5D\) spacetime, which is similar to the \(\Lambda\)CDM model. In the future, we hope that more, and more precise, cosmological
Figure 2: The contours show 2-D marginalized \(1\sigma\) and \(2\sigma\) confidence limits in the \((w_{0},w_{1})\) plane.
observations could determine the key points of the evolution of our universe, such as the transition point from deceleration to acceleration, and thereby distinguish the \(5D\) cosmological model from others.
## 5 Acknowledgments
This work was supported by NSF (10573003), NBRP (2003CB716300) of P.R. China. The research of Lixin Xu was also supported in part by DUT 893321 and NSF (10647110).
## References
* [1] D.N. Spergel et al., _Astrophys. J. Suppl._**148**, 175 (2003), astro-ph/0302209.
* [2] A.G. Riess et al., _Astron. J._**116**, 1009 (1998), astro-ph/9805201.
* [3] A.C. Pope et al., _Astrophys. J._**607**, 655 (2004), astro-ph/0401249.
* [4] D. N. Spergel _et al._, arXiv (2006), astro-ph/0603449.
* [5] P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. **75**, 559 (2003), astro-ph/0207347.
* [6] T. Padmanabhan, Phys. Rep. **380**, 235 (2003).
* [7] I. Zlatev, L. M. Wang, and P. J. Steinhardt, Phys. Rev. Lett. **82**, 896 (1999).
* [8] R. R. Caldwell, M. Kamionkowski, and N. N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003).
* [9] B. Feng, X. L. Wang, and X. M. Zhang, Phys. Lett. B **607**, 35 (2005).
* [10] C. Armendariz-Picon, T. Damour, and V. Mukhanov, Phys. Lett. B **458**, 209 (1999).
* [11] T. Padmanabhan, Phys. Rev. D **66**, 021301 (2002).
* [12] V. Sahni, _Chaos. Soli. Frac._**16** 527(2003).
* [13] Z.K. Guo, N. Ohtab and Y.Z. Zhang, astro-ph/0505253.
* [14] P.S. Corasaniti and E.J. Copeland, _Phys. Rev._**D67**, 063521 (2003), astro-ph/0205544.
* [15] E.V. Linder, _Phys. Rev. Lett._**90**, 091301 (2003), astro-ph/0208512.
* [16] A. Upadhye, M. Ishak and P.J. Steinhardt, astro-ph/0411803.
* [17] Y. Wang and M. Tegmark, _Phys. Rev. Lett._**92**, 241302 (2004), astro-ph/0403292.
* [18] P.S. Wesson, _Space-Time-Matter_ (Singapore: World Scientific) 1999.
* [19] J.M. Overduin and P.S. Wesson, _Phys. Rept._**283**, 303(1997).
* [20] J. E. Campbell, _A Course of Differential Geometry_, (Clarendon, 1926).
* [21] H.Y. Liu and P.S. Wesson, _Astrophys. J._**562** 1(2001), gr-qc/0107093.
* [22] H.Y. Liu and B. Mashhoon, _Ann. Phys._**4** 565(1995).
* [23] B.R. Chang, H. Liu and L. Xu, _Mod. Phys. Lett._**A20**, 923(2005), astro-ph/0405084.
* [24] H.Y. Liu et al., _Mod. Phys. Lett._**A20**, 1973 (2005), gr-qc/0504021.
* [25] A. G. Riess _et al._, arXiv (2006), astro-ph/0611572.
* [26] Y. Wang and P. Mukherjee, Astrophys. J. **650**, 1 (2006), astro-ph/0604051.
* [27] D. J. Eisenstein _et al._, Astrophys. J. **633**, 560 (2005).
* [28] H. Y. Liu, _Phys. Lett._**B560** 149(2003), hep-th/0206198.
* [29] L. Xu, H.Y. Liu and C.W. Zhang, _Int. J. Mod. Phys. D_**15**, 215(2006), astro-ph/0510673.
* [30] C. W. Zhang, H.Y. Liu and L. Xu _Mod. Phys. Lett. A_**21**, 571(2006).
* [31] M. Chevallier, D. Polarski _Int. J. Mod. Phys. D_**10**, 213 (2001).
* [32] W. L. Freedman _et al._, Astrophys. J. **553**, 47 (2001), astro-ph/0012376.
* [33] S. Nesseris and L. Perivolaropoulos, arXiv (2006), astro-ph/0610092.
* [34] J. R. Bond, G. Efstathiou, and M. Tegmark, Mon. Not. R. Astron. Soc. **291**, L33 (1997), astro-ph/9702100.
* [35] U. Alam and V. Sahni, Phys. Rev. D **73**, 084024(2006).
* [36] S. Nesseris and L. Perivolaropoulos, JCAP **0701**, 018 (2007), astro-ph/0610092.
* [37] U. Alam _et al._, astro-ph/0406672.
# Stratospheric Aerosol Source Inversion: Noise, Variability, and Uncertainty Quantification
J. Hart, I. Manickam, M. Gulian, L. Swiler, D. Bull, T. Ehrmann, H. Brown, B. Wagman, J. Watkins

Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87123-1320
Sandia National Laboratories, P.O. Box 969, Livermore, CA 94551-0969
## 1 Introduction
Stratospheric aerosols from volcanic eruptions can significantly alter regional or global climate patterns on the time scale of months or even years. The most notable eruption with modern observational data, that of Mount Pinatubo in 1991, resulted in a warming of the lower stratosphere by more than 2.5\({}^{\circ}\)C [20, 37, 42, 45, 47]. The extent of the climate impact is proportional to the eruption magnitude [28]; however, many confounding processes in the atmosphere make it difficult to attribute precisely how much a climate anomaly is due to the volcanic aerosols. Downstream climatic impacts tend to be well observed, but are intermixed with multiple other forcings in the system (e.g. anthropogenic emissions, natural and internal variability, local factors, etc.). Furthermore, full characterization of the eruption itself is challenging as direct observations of the aerosols are not always readily available. This motivates the use of inverse uncertainty quantification (UQ) methods to estimate the eruption characteristics. We represent the eruption probabilistically via a Bayesian formulation. Samples from the posterior distribution enable forward UQ to support rigorous analysis of the extent to which a particular climate impact can be attributed to the volcanic eruption rather than variability within the earth system. In this article, we present a mathematical framework enabling Bayesian inversion to estimate the Mt. Pinatubo volcano source characteristics. Our approach has broader potential application; however, we focus on the problem of volcanic aerosols to motivate the problem characteristics which shape our proposed approach.
The inverse problem under consideration is posed as source estimation for a system of partial differential equations modeling the earth system. The forward model used for this problem, which models aerosols evolving in time, is the Energy Exascale Earth System Model (E3SM) [13], modified to enable prognostic stratospheric aerosols (E3SMv2-SPA) [3]. The computational complexity of E3SM inhibits directly using it for source inversion, and necessitates the development of a surrogate to enable the inversion. Using surrogates to enable calibration of earth system models is common in practice [23, 39, 41, 53]. However, in most cases the surrogate is fit to low-dimensional quantities of interest corresponding to spatial and temporal averages of the model prediction. In our case, finer spatial and temporal resolution is required to exploit the resolution of the satellite data and time scales characteristic of the aerosol transport. We will leverage recent advances in operator learning to construct operator neural networks tailored to the structure of our problem. Our focus in this article is the interplay between the operator learning approach and its use to enable Bayesian inversion. Particular emphasis is given to addressing the challenge of atmospheric variability stemming from the chaotic flow characteristics of the wind and imprecision of wind data in the stratosphere.
Traditional surrogate modeling seeks to approximate a function mapping between finite dimensional spaces (frequently Euclidean spaces), but is typically limited by the curse of dimensionality. Operator learning is a burgeoning field which seeks to approximate an operator mapping between infinite dimensional function spaces. From this perspective, operator learning may be viewed as surrogate modeling in the limit as the dimension goes to infinity. Although the curse of dimensionality cannot be avoided, its severity can be lessened by exploiting the mathematical structure of function spaces to design approximations which scale more effectively. Many operator learning approaches may be viewed as a combination of dimension reduction on function spaces and regression mapping between the reduced dimensions. The key idea is that tailoring the dimension reduction and regression to known characteristics of the operator can enable efficient learning. Many operator learning methods are based on deep neural network (DNN) approximations. Examples include modal methods [24, 25, 31, 32, 35, 36], graph based methods [11, 44, 48], principal component analysis based methods [1, 15], meshless methods [49], trunk-branch based methods [6, 27], and time-stepping methods [26, 35, 36, 54]. Other approaches include manifold learning using Polynomial Chaos models [18, 19], Gaussian Process models [12], and models based on polynomial approximations of reduced operators [30, 33]. The best method for a given problem is based on the function spaces under consideration, characteristics of the operator being approximated, and the amount of data available for training. Since the number of climate simulations available for training is limited, we develop an operator learning approach tailored to characteristics of stratospheric aerosol transport. Specifically, we design a spatial dimension reduction approach to efficiently capture aerosol plume advection and use a model architecture that enforces physical constraints derived from chemistry.
Inverse problems arise in many areas across the geosciences. Examples include estimation of basal dynamics of the Antarctic ice sheet [16], identification of contaminant sources in the subsurface [21], modeling of plate mechanics in mantle flow [38], reconstruction of subsurface material properties via full waveform inversion [50], and inference of sources in atmospheric transport [10]. Due to the high-dimensionality created by spatial heterogeneity, it is common to use optimization and derivative-based sampling algorithms to achieve computationally scalable algorithms [2, 5, 7, 34]. Considerable research has gone into atmospheric source inversion with a focus on greenhouse gas emissions [8, 40]. This article focuses on a different class of atmospheric source inversion: stratospheric aerosol inversion, which is characterized by a point source injection advected globally by stratospheric winds.
Our contributions in this article are:
* a novel operator learning approach that models aerosol transport from data with varying atmospheric states and injection masses using nonlinear spatial dimension reduction via radial basis functions and a chemistry-informed architecture,
* a novel Bayesian approximation error approach to accommodate both internal atmospheric variability and background aerosols in volcano source inversion,
* an application informed framework which couples earth system simulation, operator learning, and inverse problems,
* a demonstration of our proposed framework using hold-out simulation data from unseen injection mass and atmospheric states as synthesized satellite observations to rigorously test the approach.
Limitations on data generation due to the computational complexity of earth system models is a central challenge we consider. Our approach is tailored to ensure feasibility in such limited data scenarios. This article does not seek to solve the inverse problem using observational aerosol data, but rather we stress test our proposed approach using unseen simulation data as observations. Such testing is a crucial prerequisite to using satellite data.
## 2 Overview of the Problem
Volcanic aerosol evolution is a well-researched process that begins with the injection of \\(SO_{2}\\) gas into the stratosphere, see [42, 47, 55] for comprehensive overviews. Through chemical processes, this gas is transformed into sulfate aerosol which grows larger through microphysical processes and reflects/absorbs solar radiation causing changes in stratospheric and surface temperatures. Satellite data provides a measure of how much sunlight is scattered and absorbed by the aerosols and is quantified by aerosol optical depth (AOD), which is an aggregate quantity from all aerosols present within a column of the atmosphere observed by the satellite 1.
Footnote 1: Alternatives to AOD observations include instruments for species-specific characterization. However, AOD is a robust and generic indicator of aerosol change. Restricting ourselves to AOD measurements ensures generality and extensibility of our approach.
Let \(\Omega\subset\mathbb{R}^{3}\) denote the spatial domain of the earth system model which is defined by coordinates of longitude, latitude, and altitude, and let \([0,T]\) denote the time interval under consideration (which is on the order of days to weeks for this problem). Let \(u:\Omega\times[0,T]\rightarrow\mathbb{R}^{m}\) denote the vector-valued function of state variables in the earth system model, which can be expressed as
\\[\\dot{u}=f(u)+ge_{SO_{2}} \\tag{1}\\] \\[u(0)=u_{0},\\]
where \\(f(u):\\Omega\\times[0,T]\\rightarrow\\mathbb{R}^{m}\\) models the dynamics, \\(g:\\Omega\\times[0,T]\\rightarrow\\mathbb{R}\\) models the \\(SO_{2}\\) injection, \\(e_{SO_{2}}\\in\\mathbb{R}^{m}\\) is a vector with 1 in the entry corresponding to the \\(SO_{2}\\) state and 0 otherwise, and \\(u_{0}:\\Omega\\rightarrow\\mathbb{R}^{m}\\) is the initial state.
There are many state variables, i.e. \(m\) is large, due to the many coupled processes in the earth system. For our problem, the most important variables are:
* the mass of \\(SO_{2}\\),
* the mass of sulfate aerosol,
* the aerosol optical depth (AOD), and
* the zonal wind (wind in the direction parallel to the equator).
**Our goal is to:**
1. learn a surrogate model which takes the zonal wind and \\(SO_{2}\\) source as inputs, and predicts the evolution from \\(SO_{2}\\) to sulfate to AOD,
2. use this surrogate to constrain an inverse problem that infers the \\(SO_{2}\\) source from AOD observations.
Observational data collected via satellites is at a fine spatio-temporal resolution (\\(\\mathcal{O}(1)\\) degrees longitude and \\(\\mathcal{O}(1)\\) days). However, many atmospheric analyses are done at a coarser resolution to reduce complexity. Such coarsening simplifies the modeling but results in loss of information to inform the inverse problem. This trade-off is a frequent challenge in inverse problems as making the forward problem easier (via smoothing) makes the inverse problem more challenging (more ill-posed). We seek to formulate the inverse problem on the time scale of weeks with moderate spatial averaging/smoothing to preserve the richness of information content in the fine resolution satellite data.
In addition to this classical model complexity versus information content trade-off, many challenges arise from unique characteristics of this volcanic aerosol inverse problem. These include:
* variability in the atmosphere, and hence the zonal wind, due to initial state uncertainty and coupled processes in the earth system,
* local-to-global spatial scales as the volcanic eruption is initially localized in space but spreads equatorially around the globe in approximately 3 weeks, and
* the presence of background aerosols which contribute to the AOD but do not come from the volcano.
We propose a novel combination of techniques, which in themselves are well established, but whose adaptation and composition is guided by the challenges outlined above. Figure 1 provides an overview of the various aspects of our approach and how they are organized in the article.
In the sections which follow we detail each aspect and highlight how it relates to the larger framework. Specifically, in Section 3 we consider the nuances of the earth system simulations needed to facilitate our analysis. Section 4 presents our approach to spatial dimension reduction, which prepares reduced data as inputs for learning a time evolution operator in Section 5. The combination of spatial encoding and time evolution of the reduced spatial coordinates defines a reduced order model for the aerosol transport. In Section 6 we formulate a Bayesian inverse problem using the reduced model and present a Bayesian approximation error approach to incorporate uncertainty from background aerosols and atmospheric variability in the inverse problem. Numerical results are given in Section 7 to demonstrate our approach using data from E3SM. Concluding remarks are made in Section 8.
## 3 Simulation data generation and processing
To generate training data, the forward model (1) will be solved for various sources \\(g\\) and initial states \\(u_{0}\\). We are limited to \\(\\mathcal{O}(30)\\) forward model evaluations, so our design of source and initial states for these simulations is crucial to enable reliable reduced order models within the range of variability needed for the inversion. In the subsections which follow we detail our use of limited variability ensembles (the design of \\(u_{0}\\)), source tagging (tracking of aerosols from \\(g\\)), and extraction of the relevant states to form the training set. We conclude this section with an overview of our inverse problem formulation and algorithmic framework.
### Limited Variability Ensembles
Traditional earth system modeling accounts for initial state uncertainty by running _ensembles_ of model evaluations, that is, evaluating the model for different initial states [43]. Typically, these initial states are designed to be statistically independent so that the set of ensemble members captures the full range of possible system states. We refer to this ensemble design as full variability. However, in our context, we analyze a volcanic eruption which occurred in the past and some knowledge of the atmospheric state is available. This motivates the use of limited variability ensembles which constrain the initial states \\(u_{0}\\) to be representative of the atmosphere as it was partially observed at the time of the volcanic eruption.
To design a limited variability ensemble, five full variability ensemble members are generated by randomly perturbing (with values near machine precision) the temperature field of a historical simulation starting in 1985. Running simulations until June 1991, right before the Mt. Pinatubo eruption, yields five ensemble members whose June 1991 states are statistically independent. We select a "best" ensemble member that most closely matches observed climate modes (as determined from reanalysis products). Specifically, we match the 1991 El Niño and the Quasi-Biennial Oscillation (QBO) modes. Four metrics are used to determine the best match: the NINO3.4 value and NINO3.4 trend, and the QBO phase at the atmospheric pressure levels of 10 and 50 hectopascals [9]. The limited variability ensemble members are generated by perturbing the initial temperature field (with near-machine precision values) of the best-matched full
Figure 1: Overview of the article’s organization. Each bullet point highlights an important aspect of the proposed framework.
variability ensemble member. Since this perturbation occurs shortly before the volcanic eruption, all of the limited variability ensemble members match the 1991 climate modes. Yet, these near machine precision perturbations still induce nontrivial variability in the stratospheric winds. Accounting for this variability is a key aspect of our approach to stratospheric aerosol inversion.
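Stripped of all E3SM infrastructure, the numerics of this design amount to adding near-machine-precision noise to an initial temperature field and scoring candidate members against the four matching metrics. The sketch below is illustrative only: the names are hypothetical and a simple normalized-distance score stands in for the expert matching actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_temperature(T, eps=1e-14):
    # Near-machine-precision multiplicative perturbation of the initial state.
    return T * (1.0 + eps * rng.standard_normal(T.shape))

def match_score(member, target, scales):
    # member/target: dicts of NINO3.4 value/trend and QBO phase at 10/50 hPa.
    keys = ["nino34", "nino34_trend", "qbo10", "qbo50"]
    return sum(((member[k] - target[k]) / scales[k]) ** 2 for k in keys)

# best = min(candidates, key=lambda m: match_score(m, reanalysis, scales))
```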
### Source Tagging
The observable variable AOD has contributions from diverse sources including volcanos, dust storms, industrial processes, etc. The background aerosols (all aerosols except for the volcanic aerosols) create additional challenges for volcanic aerosol source inversion since the rapid transport and mixing of aerosols makes it difficult to disentangle the various sources. We leverage an aerosol source tagging method [52] within E3SM which provides the capability to separate aerosol tracers by emission source and evolves them separately in the forward model. Specifically, for a state variable \\(\\zeta\\) modeling a chemical species, source tagging represents \\(\\zeta=\\zeta_{v}+\\zeta_{b}\\), where \\(\\zeta_{v}\\) is the species due to the volcanic source and \\(\\zeta_{b}\\) is the background species due to all other sources. Separate differential equations evolve these two species, thus \"tagging\" which aerosols come from the volcano. This decomposition into volcanic and background species is done for the \\(SO_{2}\\), sulfate, and AOD variables.
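The tagging idea can be illustrated with a toy transport step: because the dynamics are linear in each tracer, applying the same update independently to the volcanic and background tracers reproduces evolving their sum directly. The sketch below uses a simple periodic upwind advection step and is illustrative only; it is not the E3SM implementation.

```python
import numpy as np

def upwind_step(zeta, u=1.0, dx=1.0, dt=0.5):
    # One explicit upwind step of d(zeta)/dt + u d(zeta)/dx = 0, periodic in x.
    return zeta - u * dt / dx * (zeta - np.roll(zeta, 1))

x = np.linspace(0.0, 360.0, 360, endpoint=False)
zeta_v = np.exp(-((x - 120.0) / 5.0) ** 2)   # volcanic tracer (localized)
zeta_b = 0.1 * np.ones_like(x)               # background tracer

total = zeta_v + zeta_b
for _ in range(100):
    zeta_v, zeta_b = upwind_step(zeta_v), upwind_step(zeta_b)
    total = upwind_step(total)

# Linearity: the sum of the tagged tracers equals evolving the total directly.
assert np.allclose(zeta_v + zeta_b, total)
```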
### Summary of Simulation Data
The forward model (1) is evaluated for \\(N_{e}\\) ensembles (perturbed initial states \\(u_{0}\\)) and \\(N_{s}\\) sources \\(g\\). The sources are spatially localized around the volcano by defining the spatial profile of \\(g\\) as a Dirac delta function. The temporal profile of \\(g\\) is defined over a 9 hour time window during the eruption, after which time \\(g=0\\). The \\(N_{s}\\) sources differ by their injection magnitudes which are chosen to capture a realistic range of plausible volcanic eruptions. Due to the short time-scale of the eruption relative to the time-scale of our analysis (hours compared to days), we pose the inverse problem to estimate the source tagged \\(SO_{2}\\) shortly after the eruption ends. We restrict our analysis to the most relevant state variables and consider the dataset
\\[\\{\\alpha_{v}^{i,j},\\beta_{v}^{i,j},\\rho_{v}^{i,j},\\rho_{b}^{i,j},\\omega^{i,j} \\}_{i=1,j=1}^{N_{e},N_{s}} \\tag{2}\\]
which corresponds to the \(SO_{2}\) (\(\alpha\)), sulfate (\(\beta\)), AOD (\(\rho\)), and zonal wind (\(\omega\)). Each state is indexed by \(i,j\) to identify which ensemble and source it arises from. The subscript \(v\) indicates the volcanic source tags and \(\rho_{b}\) is the background AOD variable that will be used to incorporate background aerosol data in the inverse problem. The forward model output is on a daily time scale and hence our data is evaluated at time steps \(t_{k}\), \(k\in\{0,1,2,\ldots,N_{t}\}\), on a horizon of \(N_{t}+1\) days where \(t_{0}\) is 24 hours after the onset of the eruption. The \(SO_{2}\), sulfate, and zonal wind variables are functions defined in three spatial dimensions, while the AOD variables are defined in two spatial dimensions, since AOD is a column integrated quantity.
Figure 2 displays source tagged \\(SO_{2}\\) to illustrate the characteristics of its transport and variability. In particular, Figure 2 shows three different time steps. At each time step, we compute the mean and standard deviation over ensembles to demonstrate how the ensemble variability compares with eruption magnitude variability.
### Overview of the Proposed Inversion Framework
We formulate an inverse problem to estimate the volcanic source tagged \(SO_{2}\) (i.e. \(\alpha_{v}\)) at the initial time \(t_{0}\), using observations of AOD without source tags (i.e. \(\rho_{v}+\rho_{b}\)) at later times \(t_{k}\), \(k=1,2,\ldots,N_{t}\). Note that this is an initial condition inversion rather than forcing term inversion because of the time scales in the data. That is, we are estimating the \(SO_{2}\) component of the initial state, where \(t_{0}\) corresponds to the time shortly after the eruption has ended. We use operator learning to construct a reduced model to evolve \(\alpha_{v}\) to \(\beta_{v}\) in time and a separate stationary model to map \(\beta_{v}\) to \(\rho_{v}\). The background AOD \(\rho_{b}\) is incorporated into the inverse problem in our formulation of the likelihood function. Figure 3 overviews the important components of our proposed framework and how they interact with one another.
## 4 Spatial Dimension Reduction
The raw data (2) is large due to its four dimensions (three spatial dimensions and time). In the small data setting, i.e. \\(N_{e}N_{s}=\\mathcal{O}(30)\\), it is intractable to train an operator surrogate for complex and high dimensional dynamics. Rather, we seek to reduce the spatial dimension and train a time evolution operator in the low-dimensional space. Spatial dimension reduction is
Figure 3: Overview of the proposed framework.
Figure 2: Source tagged 1D profile of \\(SO_{2}\\), as a function of longitude, at time steps \\(t_{0}\\), \\(t_{5}\\), and \\(t_{9}\\). The color indicates the volcano injection mass. At each time step, the solid lines correspond to the ensemble mean of the \\(SO_{2}\\) and the shading indicates two standard deviations.
multifaceted as we consider both data preprocessing, aerosol dimension reduction, and wind dimension reduction. Figure 4 shows an overview of the preprocessing steps performed on the inputs to the time evolution operator. The subsections which follow consider these three facets.
### Data Preprocessing
Although the data is inherently three dimensional in space, the dynamics are faster in the longitudinal directions as prevailing winds drive the aerosols around the globe equatorially in the first three weeks post-eruption. This indicates that integrating over the latitude and altitude directions can significantly reduce the dimension while preserving important dynamical characteristics. In general, integrating out spatial dimensions has the effect that learning a surrogate for the forward problem becomes easier, but the information lost in the integration makes the inverse problem more challenging. This trade-off of dynamical complexity and information content is crucial and it highlights the need to customize the forward model surrogate with a cognizance of the inverse problem. Our results indicate that compression to only longitude dependence is effective for volcanic aerosol transport because the longitudinal transport is much faster compared to the other dimensions. Figure 5 shows a representative simulation of the aerosol transport in two dimensions (latitude and longitude). The aerosols are transported equatorially around the globe in approximately 21 days (\\(t_{0}\\) to \\(t_{20}\\)) while remaining contained within a latitudinal band around the tropics. The aerosols eventually reach high latitudes after multiple months. However, the inverse problem is best informed by the transport in the early days post-eruption.
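A sketch of this reduction is given below: a tagged aerosol field on a (time, level, latitude, longitude) grid is collapsed to a longitude profile per time step using cosine-latitude area weights and layer-thickness weights. The array names and weighting conventions are placeholders; in practice the weights come from the E3SM grid metadata.

```python
import numpy as np

def reduce_to_longitude(field, lat, dp):
    """field: (nt, nlev, nlat, nlon) array; lat: latitudes in degrees;
    dp: (nlev,) layer-thickness weights (e.g., pressure thicknesses)."""
    w_lat = np.cos(np.deg2rad(lat))
    w_lat = w_lat / w_lat.sum()                        # area weighting
    col = np.tensordot(field, dp, axes=([1], [0]))     # collapse level -> (nt, nlat, nlon)
    return np.tensordot(col, w_lat, axes=([1], [0]))   # collapse latitude -> (nt, nlon)
```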
### Radial Basis Function Dimension Reduction
Let \(\xi:\Omega_{\text{lon}}\times[0,T]\rightarrow\mathbb{R}\), \(\Omega_{\text{lon}}\subset\mathbb{R}\), denote an arbitrary volcanic source tagged aerosol variable (\(SO_{2}\), sulfate, or AOD) after integrating over the latitude and altitude dimensions. We seek to further reduce the spatial dimension of \(\xi\) by encoding it in a low-dimensional spatial basis so that we can train a time evolution operator in the reduced dimension. All aerosol variables under consideration have the spatio-temporal characteristic that they are spatially localized at the initial time and are transported around the globe by the advective force of the zonal winds, as seen in Figure 2. Such advection dominated dynamics is known to be challenging for linear dimension reduction methods (such as principal component analysis) and hence we consider nonlinear dimension reduction. There are a multitude of potential approaches for nonlinear encoding of spatial fields [19, 22].
In this work we consider Gaussian radial basis functions (RBFs). This choice is motivated by two characteristics of the volcanic aerosol plume: (i) the plume has an approximate Gaussian (bell curve) shape, and (ii) the plume exhibits non-smooth, small amplitude features which are
Figure 4: Overview of the spatial dimension reduction approach for aerosol and wind data.
smoothed out by the RBFs, thus eliminating spatial scales which cannot be learned from limited data. We consider spatial basis functions of the form
\\[\\psi_{\\ell}(x)=c_{\\ell}\\exp\\bigl{(}-a_{\\ell}^{2}\\mid x-x_{\\ell}\\mid^{2}\\bigr{)} \\tag{3}\\]
where \\(x_{\\ell},a_{\\ell},c_{\\ell}\\in\\mathbb{R}\\) are the center, shape, and coefficient hyperparameters respectively, which will be fit via nonlinear least squares.
The aerosol variable \\(\\xi(x,t)\\) is periodic in \\(x\\) (since \\(x\\) is the longitude coordinate) and hence we must periodize (3). Following [51], we consider basis functions
\\[\\Psi_{\\ell}(x)=\\sum_{m=-\\infty}^{\\infty}\\psi_{\\ell}(x+mL) \\tag{4}\\]
where \\(L\\) is the period (if \\(x\\) is longitude measured in degrees then \\(x\\in\\Omega_{\\text{lon}}=[0,360]\\) and \\(L=360\\)). In practice, the infinite sum in (4) can be truncated to \\(m=-M,\\ldots,M\\), where \\(M\\) is chosen based on the shape hyperparameter \\(a_{\\ell}\\). Truncation errors are generally on the order of machine precision and hence are negligible.
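A minimal sketch of Eqs. (3)-(4) follows, with the number of image terms chosen heuristically from the shape hyperparameter so that the discarded tails are far below machine precision; for typical plume widths one or two images suffice.

```python
import numpy as np

def periodized_rbf(x, center, shape, coeff, L=360.0):
    # Eq. (4): sum of periodic images of the Gaussian RBF in Eq. (3).
    # Heuristic truncation: include images until the tail is ~exp(-36).
    M = max(1, int(np.ceil(6.0 / (abs(shape) * L))))
    out = np.zeros_like(np.asarray(x, dtype=float))
    for m in range(-M, M + 1):
        out += coeff * np.exp(-(shape ** 2) * (x - center + m * L) ** 2)
    return out
```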
Given \\(\\xi(x,t_{k})\\), for an arbitrary time step \\(t_{k}\\), we consider a \\(N_{rbf}\\) dimensional basis \\(\\{\\Psi_{\\ell}\\}_{\\ell=1}^{N_{rbf}}\\) and fit the center, shape, and coefficient hyperparameters. That is, we approximate
\\[\\xi(x,t_{k})\\approx\\sum_{\\ell=1}^{N_{rbf}}\\Psi_{\\ell}(x;x_{\\ell}^{k},a_{\\ell} ^{k},c_{\\ell}^{k}) \\tag{5}\\]
where \(N_{rbf}\) is the number of basis functions and the hyperparameters are indexed with a superscript \(k\) to indicate the time step. This gives a \(3N_{rbf}\) dimensional embedding of \(\xi(x,t_{k})\). This is done for each time step, resulting in the time series \(\{x_{\ell}^{k},a_{\ell}^{k},c_{\ell}^{k}\}_{k=0}^{N_{t}}\), \(\ell=1,2,\ldots,N_{rbf}\), which will be used as training data to learn a time evolution operator in the space of the RBF hyperparameters.
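A sketch of fitting the \(3N_{rbf}\) hyperparameters in Eq. (5) at a single time step is given below, reusing periodized_rbf from the previous sketch and scipy's bounded trust-region least squares solver. The initialization and bounds are illustrative choices, and such bounds also serve as the hyperparameter constraints discussed next.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, x, L=360.0):
    # theta packs [centers..., shapes..., coeffs...] for N_rbf basis functions.
    n = len(theta) // 3
    centers, shapes, coeffs = theta[:n], theta[n:2 * n], theta[2 * n:]
    return sum(periodized_rbf(x, c, a, w, L)
               for c, a, w in zip(centers, shapes, coeffs))

def fit_rbf(x, xi, n_rbf=3):
    # Illustrative initialization: spread centers, moderate widths, split mass.
    x0 = np.concatenate([np.quantile(x, np.linspace(0.2, 0.8, n_rbf)),
                         np.full(n_rbf, 0.05),
                         np.full(n_rbf, xi.max() / n_rbf)])
    lb = np.concatenate([np.zeros(n_rbf), np.full(n_rbf, 1e-3), np.zeros(n_rbf)])
    ub = np.concatenate([np.full(n_rbf, 360.0), np.full(n_rbf, 1.0),
                         np.full(n_rbf, 10.0 * xi.max())])
    res = least_squares(lambda th: model(th, x) - xi, x0, bounds=(lb, ub))
    return res.x
```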
Traditional use of RBFs considers a fixed grid of centers \\(\\{x_{\\ell}\\}_{\\ell=1}^{N_{rbf}}\\), chooses optimal shape hyperparameters \\(\\{a_{\\ell}\\}_{\\ell=1}^{N_{rbf}}\\) based on the placement of the centers, and then fits the coefficients
Figure 5: Representative simulation of the aerosol transport in two dimensions. The panels from left to right correspond to time steps at \\(t_{0}\\), \\(t_{10}\\), and \\(t_{20}\\), i.e. a 21 day period. Longitude is measured eastward from the Greenwich prime meridian.
\\(\\{c_{\\ell}\\}_{\\ell=1}^{N_{rbf}}\\) via linear least squares. Our approach differs: rather than taking \\(N_{rbf}\\) large enough for \\(\\{x_{\\ell}\\}_{\\ell=1}^{N_{rbf}}\\) to cover \\(\\Omega_{\\text{lon}}\\), we instead take a small \\(N_{rbf}\\) and fit the coefficient, center, and shape hyperparameters. This gives a nonlinear embedding which is able to capture advective phenomena in a low-dimensional space by permitting the center hyperparameter to evolve in time. Our approach introduces a challenge of identifiability as multiple sets of hyperparameters may yield identical or nearly identical approximations. However, this can be addressed by judiciously selecting a small \\(N_{rbf}\\) and enforcing constraints on the hyperparameters.
### Wind Dimension Reduction
To incorporate zonal wind into the aerosol plume time evolution operator, we require a low-dimensional embedding of \(\omega\) that is localized about the aerosol plume. However, an RBF approximation of \(\omega\) is not suitable since the zonal wind is not spatially localized like the source tagged aerosols.
The core idea that makes our RBF approach effective is that the center hyperparameter is permitted to move and hence the basis function retains low dimensionality as it is advected around the globe. To impart spatial locality to the zonal wind, we weight the wind data using the aerosol mass and thus restrict it to a local region in the atmosphere. Specifically, a threshold \\(\\tau_{SO_{2}}\\) is specified and used to define a time varying set
\\[\\mathcal{D}(t)=\\{(x,y,z)\\in\\Omega\\mid\\alpha_{v}(x,y,z,t)\\geq\\tau_{SO_{2}}\\} \\tag{6}\\]
which restricts the spatial domain \\(\\Omega\\subset\\mathbb{R}^{3}\\) to the region where the volcanic aerosol plume has a magnitude greater than the threshold \\(\\tau_{SO_{2}}\\). Note that \\((x,y,z)\\) corresponds to longitude, latitude, and altitude. For each time step, we introduce RBF-based weighting functions
\\[\\Phi_{\\ell}^{k}(x,y,z)=\\Psi_{\\ell}(x;x_{\\ell}^{k},a_{\\ell}^{k},c_{\\ell}^{k}) \\chi_{\\mathcal{D}(t_{k})}(x,y,z) \\tag{7}\\]
\\(\\ell=1,2,\\dots,N_{rbf}\\), where \\(\\chi\\) is the indicator function of a set. Hence \\(\\Phi_{\\ell}^{k}\\geq 0\\) captures the spatial locality of the RBF basis functions by weighting locations in space by the amount of aerosol being represented in the \\(\\ell^{th}\\) RBF basis function at the \\(k^{th}\\) time step.
For each time step \\(t_{k}\\), we consider point-wise spatial evaluations of the zonal wind
\\(\\omega(x,y,z,t_{k})\\) as samples of a random variable. Weighting these samples with \\(\\Phi_{\\ell}^{k}(x,y,z)\\) defines a distribution of zonal wind values that is localized about the RBF basis function \\(\\Psi_{\\ell}\\) at time \\(t_{k}\\). Using weighted kernel density estimation, we produce PDFs \\(h_{\\ell}^{k}:\\mathbb{R}\\rightarrow\\mathbb{R}\\) for the set of zonal winds which confers the highest probability to zonal winds that are characteristic of the region where the aerosol plume is concentrated.
Transforming \\(\\omega\\) to \\(h_{\\ell}^{k}\\) does not reduce the dimension, but rather changes domains as \\(\\omega\\) is a function of space and \\(h_{\\ell}^{k}\\) is a PDF of zonal wind values. This achieves localization commensurate to what is done via our RBF approximation. However, \\(h_{\\ell}^{k}\\) is still a high dimensional representation of the zonal wind. To compress it, we use principle component analysis [4] to project the PDFs \\(\\{h_{\\ell}^{k}\\}_{\\ell=1,k=1}^{N_{rbf},N_{t}}\\) onto a low-dimensional subspace. Specifically, we learn a collection of principle components \\(\\eta_{i}:\\mathbb{R}\\rightarrow\\mathbb{R}\\), \\(i=1,2,\\dots\\) and express a particular PDF \\(h_{\\ell}^{k}:\\mathbb{R}\\rightarrow\\mathbb{R}\\) in terms of its principle components as
\\[h_{\\ell}^{k}=\\sum_{i}w_{i}^{\\ell,k}\\eta_{i},\\]where \\(w_{i}^{\\ell,k}\\in\\mathbb{R}\\) are the principle component coordinates for \\(h_{\\ell}^{k}\\). We truncate to the leading \\(N_{w}\\) principle components and represent \\(h_{\\ell}^{k}\\) via its coordinates \\(\\mathbf{w}_{\\ell}^{k}=(w_{1}^{\\ell,k},w_{2}^{\\ell,k},\\ldots,w_{N_{w}}^{\\ell,k}) \\in\\mathbb{R}^{N_{w}}\\).
We can take small values for \(N_{w}\) since our aggregation of time steps with a moving region eliminates advective characteristics in the data. Furthermore, taking a small \(N_{w}\) has the effect of smoothing the wind. Thus for a given RBF basis function and time step, \(\mathbf{w}_{\ell}^{k}\) is an \(N_{w}\) dimensional representation of the zonal wind field. This reduces the three dimensional zonal wind field, which has \(\mathcal{O}(10^{6})\) degrees of freedom, to an \(N_{w}\) dimensional space, where \(N_{w}=\mathcal{O}(1)\) in many cases.
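A minimal sketch of this wind reduction pipeline follows, using scipy's weighted kernel density estimation and a plain SVD-based principal component analysis; the function names and array shapes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def wind_pdf(wind_samples, weights, grid):
    """Weighted KDE of zonal wind values localized about one RBF basis function."""
    kde = gaussian_kde(wind_samples, weights=weights / weights.sum())
    return kde(grid)

def compress_pdfs(H, n_w=4):
    """PCA-compress a (n_grid x n_pdfs) matrix of wind PDFs to N_w coordinates each."""
    mean = H.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(H - mean, full_matrices=False)
    eta = U[:, :n_w]             # leading principal components
    W = eta.T @ (H - mean)       # (n_w x n_pdfs) coordinates
    return eta, W, mean
```

Each column of W then supplies the \(N_{w}\) coordinates \(\mathbf{w}_{\ell}^{k}\) for one basis function and time step.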
## 5 Time Evolution Operator
After spatial dimension reduction, our objective is to train a time evolution operator in the space of reduced coordinates to predict the transport of the volcanic aerosols. The operator needs to accurately trace the evolution of aerosols while accounting for variations in both the volcanic injection magnitudes and atmospheric winds. The architecture and training of the time evolution operator needs to be endowed with awareness of time discretization and enforcement of relevant physical constraints such as the conservation of mass and irreversibility of chemical processes. This is particularly important when training the model with a small dataset; incorporation of physical knowledge compensates for the lack of data. This section considers model architecture and loss function design to achieve this \"physics informed\" operator.
Working with source tagged data simplifies the analysis by avoiding the complexity of other external processes adding or removing aerosols, effectively reducing noise in the training data. Hence our focus is on the volcano-induced \\(SO_{2}\\), sulfate, and AOD, i.e. \\(\\{\\alpha_{v},\\beta_{v},\\rho_{v}\\}\\). This section considers training a time evolution operator to predict the evolution of \\(\\{\\alpha_{v},\\beta_{v}\\}\\) given a zonal wind field \\(\\omega\\). The AOD does not evolve dynamically, but rather is computed point-wise in space at a given time instance based on the presence of aerosols in the atmosphere at that time and spatial location. Hence, we model the mapping from \\(\\beta_{v}\\) to \\(\\rho_{v}\\) as an observation operator which we consider in Section 6.
Let \\(\\mathbf{r}_{k}\\in\\mathbb{R}^{3N_{rbf}}\\) denote the concatenation of the \\(SO_{2}\\) RBF hyperparameters \\(\\{x_{\\ell}^{k},a_{\\ell}^{k},c_{\\ell}^{k}\\}_{\\ell=1}^{N_{rbf}}\\) at time \\(t_{k}\\) and let \\(\\mathbf{s}_{k}\\in\\mathbb{R}^{3N_{rbf}}\\) denote the analogous concatenation of the sulfate RBF hyperparameters. We use the notation \\(\\hat{\\mathbf{r}}_{k}\\) and \\(\\hat{\\mathbf{s}}_{k}\\) to denote the time evolution operator's prediction of these coordinates. That is, \\(\\{\\mathbf{r}_{k},\\mathbf{s}_{k}\\}_{k=0}^{N_{t}}\\) denotes the RBF coordinates computed from the data \\(\\{\\alpha_{v},\\beta_{v}\\}\\) and \\(\\{\\hat{\\mathbf{r}}_{k},\\hat{\\mathbf{s}}_{k}\\}_{k=0}^{N_{t}}\\) denotes the RBF coordinates predicted by the learned operator.
Another benefit of source tagging is that the mass of sulfur is conserved over time in the volcanic aerosol data. We use this fact to model sulfate as a function of the \\(SO_{2}\\) at each time step. Initially, the volcano injects \\(SO_{2}\\) which reacts with hydroxyl radicals in the atmosphere to create sulfate. Letting \\(M_{\\alpha}\\) and \\(M_{\\beta}\\) denote the molar masses of \\(SO_{2}\\) and sulfate, respectively, we have that
\\[\\int_{\\Omega_{\\text{lim}}}\\biggl{(}\\frac{\\alpha_{v}(x,t)}{M_{\\alpha}}+\\frac{ \\beta_{v}(x,t)}{M_{\\beta}}\\biggr{)}dx=\\int_{\\Omega_{\\text{lim}}}\\biggl{(}\\frac {\\alpha_{v}(x,t_{0})}{M_{\\alpha}}+\\frac{\\beta_{v}(x,t_{0})}{M_{\\beta}}\\biggr{)}dx \\tag{8}\\]
for all \\(t\\).
The sulfate plume mirrors the spatial profile of the \\(SO_{2}\\) plume but differs in total mass, so given \\(SO_{2}\\) at a particular time step, we use (8) to predict the sulfate. Specifically, for a fixed time step, we set the center and shape hyperparameters of the sulfate RBF representation to be equal to the \\(SO_{2}\\) RBF center and shape hyperparameters. The sulfate coefficient hyperparameters are computed by enforcing conservation of mass (8).
Estimation of sulfate via (8) requires computing the total mass of sulfur at time \\(t_{0}\\). Yet at time \\(t_{0}\\), which is 15 hours after the volcanic eruption has ended, there is a small amount of sulfate present in the atmosphere due to the reaction of \\(SO_{2}\\) with hydroxyl radicals. Hence the total mass of sulfur at time \\(t_{0}\\) is more than just the initial mass of sulfur in the \\(SO_{2}\\) molecules. Computing the ratio of sulfate and \\(SO_{2}\\) masses at \\(t_{0}\\) in all training set simulations, we observe that the variability of this ratio across the simulations is small due to the brief time window from the eruption to \\(t_{0}\\). Hence, we compute the mean ratio of masses and use this as a scaling factor2 to initialize the sulfate mass proportional to the initial \\(SO_{2}\\) mass.
Footnote 2: Across the training set, the ratio of initial sulfate mass to initial \(SO_{2}\) mass ranges from 0.0533 to 0.0558, with a mean of 0.0544.
Let \\(\\mathcal{E}\\) denote the operator which, for a given time step, computes the sulfate coordinates \\(\\hat{\\mathbf{s}}_{k}\\) using the estimated \\(SO_{2}\\) coordinate \\(\\hat{\\mathbf{r}}_{k}\\) and (8), that is, \\(\\hat{\\mathbf{s}}_{k}=\\mathcal{E}(\\hat{\\mathbf{r}}_{k},\\hat{\\mathbf{r}}_{0}, \\hat{\\mathbf{s}}_{0})\\) for all \\(k\\). This is achieved by enforcing the conservation of molar mass (8) for each RBF basis function separately, which translates to conservation of total mass since our approximation is a linear combination of the RBF basis functions. The mass of the RBF basis function can be computed analytically, as shown in (10).
Given that the sulfate can be modeled as a function of the \\(SO_{2}\\) at each time step, we use a flow map to model the time evolution of the \\(SO_{2}\\). Let \\(\\mathbf{w}_{k}\\in\\mathbb{R}^{N_{w}N_{rbf}}\\) denote the concatenation of the wind coordinates \\(\\{\\mathbf{w}_{\\ell}^{k}\\}_{\\ell=1}^{N_{rbf}}\\) at time \\(t_{k}\\). We seek to learn the flow map \\(\\mathcal{F}:\\mathbb{R}^{3N_{rbf}}\\times\\mathbb{R}^{N_{w}N_{rbf}}\\to\\mathbb{R}^ {3N_{rbf}}\\) such that
\\[\\mathbf{r}_{k+1}\\approx\\mathcal{F}(\\mathbf{r}_{k},\\mathbf{w}_{k}).\\]
Since the flow map is approximating a differential equation time step, we use an architecture which mimics a forward Euler time stepping scheme,
\\[\\mathcal{F}(\\mathbf{r}_{k},\\mathbf{w}_{k})=\\mathbf{r}_{k}+\\Delta t\\mathcal{N }(\\mathbf{r}_{k},\\mathbf{w}_{k}) \\tag{9}\\]
where we assume constant time step sizes \\(\\Delta t=t_{k+1}-t_{k}\\) and introduce the model \\(\\mathcal{N}\\) to be learned. Since time and space have been modeled via the forward Euler architecture (9) and RBF dimension reduction, respectively, we model \\(\\mathcal{N}:\\mathbb{R}^{3N_{rbf}}\\times\\mathbb{R}^{N_{w}N_{rbf}}\\to\\mathbb{R} ^{3N_{rbf}}\\) using a dense feed forward neural network.
Since no additional \\(SO_{2}\\) enters the system (due to our use of source tagged data) and \\(SO_{2}\\) is depleted over time via its reaction with hydroxyl radicals to create sulfate, we require that the mass of \\(SO_{2}\\) (integrated over the spatial domain) be monotonically decreasing. For a basis function \\(\\Psi_{\\ell}\\) as in (4), we have
\\[\\int_{\\Omega_{\\text{nn}}}\\Psi_{\\ell}(x)dx=\\sqrt{\\pi}\\frac{c_{\\ell}}{a_{\\ell}}, \\tag{10}\\]
where we assume without loss of generality that \(a_{\ell}>0\). We can enforce monotonicity of the \(SO_{2}\) mass by using a negated ReLU output layer for \(\mathcal{N}\) (i.e. \(y\mapsto\min\{0,y\}\)) that makes the time step increment nonpositive. By embedding this monotonicity structure in the learned operator, we enforce the irreversibility of chemical processes so that \(SO_{2}\) cannot be created over time. Where appropriate, we can also impose monotonicity on the RBF center hyperparameter \(x_{\ell}\) if, for instance, the plume is always flowing from east to west. In general, the forward Euler architecture of (9) simplifies enforcement of monotonicity constraints since the constraints are equivalent to non-negativity (or non-positivity) in the output components of \(\mathcal{N}\).
Our reduced model takes the initial \\(SO_{2}\\) coordinate \\(\\mathbf{r}_{0}\\) and the time series of zonal wind coordinates \\(\\{\\mathbf{w}_{k}\\}_{k=0}^{N_{t}}\\) as input. The flow map (9) is composed with itself \\(N_{t}\\) times to produce a time series of approximate \\(SO_{2}\\) coordinates \\(\\{\\hat{\\mathbf{r}}_{k}\\}_{k=1}^{N_{t}}\\). The initial sulfate coordinates are determined by the initial \\(SO_{2}\\) coordinates and \\(\\mathcal{E}\\) is applied at each time step \\(t_{k}\\), \\(k\\geq 1\\), to compute sulfate coordinates \\(\\{\\hat{\\mathbf{s}}_{k}\\}_{k=1}^{N_{t}}\\). The model architecture is illustrated in Figure 6.
The embedding of spatio-temporal structure and physical constraints in the model architecture is crucial to facilitate learning with limited data. However, the model architecture must be complemented with a specialized loss function to embed additional structure. We define the loss as the sum of misfits in both the \(SO_{2}\) and sulfate predictions. Furthermore, to ensure stability in the time stepping, we consider a look ahead loss function which composes \(\mathcal{F}\) with itself to predict over longer time horizons in the loss. Specifically, we define the loss function as
\[\sum_{k=1}^{N_{t}}\sum_{p=1}^{\min(P,N_{t}-k)}||\mathbf{r}_{k+p}-\mathcal{F}^{[p]}(\mathbf{r}_{k},\{\mathbf{w}_{i}\}_{i=k}^{k+p-1})||^{2}+||\mathbf{s}_{k+p}-\mathcal{E}(\mathcal{F}^{[p]}(\mathbf{r}_{k},\{\mathbf{w}_{i}\}_{i=k}^{k+p-1}))||^{2} \tag{11}\]
where \\(\\mathcal{F}^{[p]}(\\mathbf{r}_{k},\\{\\mathbf{w}_{i}\\}_{i=k}^{k+p-1})\\) denotes the composition of \\(\\mathcal{F}\\) to step from time \\(t_{k}\\) to \\(t_{k+p}\\), and \\(P\\) is a tunable hyperparameter defining how many time steps we look ahead. A more detailed discussion of the forward Euler architecture (9) and look ahead loss function (11) can be found in [14]. We note the importance of the look ahead hyperparameter \\(P\\) which encourages time stepping stability of the learned flow map. We also note that the loss function could be defined using only the \\(SO_{2}\\) data. However, we observed that including the sulfate data added negligible computational cost and provided some benefit in generalization of the model.
## 6 Inverse Problem Formulation
Given a trained flow map \\(\\mathcal{F}\\) and \\(SO_{2}\\) to sulfate mapping \\(\\mathcal{E}\\), we define a mapping
\\[\\mathcal{G}:\\mathbb{R}^{3N_{rbf}}\\times\\mathbb{R}^{N_{w}N_{rbf}N_{t}}\\to \\mathbb{R}^{3N_{rbf}N_{t}}\\]
from the initial volcanic source tagged \(SO_{2}\) RBF coordinates \(\mathbf{r}_{0}\) and the time series of the zonal wind coordinates \(\{\mathbf{w}_{k}\}_{k=0}^{N_{t}-1}\) to the time series of sulfate RBF coordinates, i.e. \(\{\hat{\mathbf{s}}_{k}\}_{k=1}^{N_{t}}=\mathcal{G}(\mathbf{r}_{0},\{\mathbf{w}_{k}\}_{k=0}^{N_{t}-1})\).
Figure 6: Illustration of repeated compositions of the time evolution operator to estimate the coordinates \\(\\hat{\\mathbf{r}}\\) and \\(\\hat{\\mathbf{s}}\\), for \\(SO_{2}\\) and sulfate, respectively. The zonal wind coordinates \\(\\mathbf{w}_{k}\\) are input at each time step. The flow map \\(\\mathcal{F}\\) evolves the \\(SO_{2}\\) and the operator \\(\\mathcal{E}\\) predicts the corresponding sulfate.
We assume that only AOD observations are available and hence must model the mapping from source tagged sulfate to source tagged AOD. In earth system models, the AOD is computed point-wise in space and time using a model of scattering and absorption of light via atmospheric particulates. We can learn this mapping from the source tagged sulfate and AOD training data
\[\{\beta_{v}^{i,j},\rho_{v}^{i,j}\}_{i=1,j=1}^{N_{e},N_{s}}.\]
Specifically, we represent sulfate and AOD in the RBF basis and learn a mapping \\(\\mathcal{C}:\\mathbb{R}^{3N_{rbf}}\\rightarrow\\mathbb{R}^{3N_{rbf}}\\) which inputs the source tagged sulfate RBF coordinates \\(\\mathbf{s}_{k}\\) and returns the source tagged AOD RBF coordinates \\(\\mathbf{q}_{k}\\).
The observable data (from satellite measurements) corresponds to point-wise evaluations of AOD at times \\(t_{k}\\), \\(k=1,2,\\ldots,N_{t}\\), and at \\(N_{\\text{obs}}\\) spatial locations in \\(\\Omega_{\\text{lon}}\\). We define the operator \\(\\mathcal{B}:\\mathbb{R}^{3N_{rbf}N_{t}}\\rightarrow\\mathbb{R}^{N_{\\text{obs}}N_ {t}}\\) which, for each time step \\(t_{k}\\), \\(k=1,2,\\ldots,N_{t}\\), maps from AOD RBF coordinates to physical space via (4) and evaluates the AOD at the \\(N_{\\text{obs}}\\) discrete observation points in space.
Composing all of these operators, we define
\\[\\mathcal{A}=\\mathcal{B}\\circ\\mathcal{C}\\circ\\mathcal{G}:\\mathbb{R}^{3N_{rbf}} \\times\\mathbb{R}^{N_{w}N_{rbf}N_{t}}\\rightarrow\\mathbb{R}^{N_{\\text{obs}}N_{ t}}\\]
which maps the initial \\(SO_{2}\\) RBF coordinates and reduced zonal wind coordinates to the observable AOD at \\(N_{\\text{obs}}N_{t}\\) locations in space-time. We seek to compare predictions of \\(\\mathcal{A}\\) with observed AOD data \\(\\mathbf{d}\\in\\mathbb{R}^{N_{\\text{obs}}N_{t}}\\) to estimate the initial \\(SO_{2}\\).
Uncertainty arises from a variety of processes: noise in AOD observations \(\mathbf{d}\), background AOD not modeled in \(\mathcal{A}\), and variability in zonal winds. We model noise in the AOD observations via a mean-zero Gaussian random vector \(\epsilon\in\mathbb{R}^{N_{\text{obs}}N_{t}}\) with covariance \(\boldsymbol{\Sigma}_{\epsilon}\). To incorporate the background AOD, we introduce a Gaussian random vector \(\mathbf{v}\in\mathbb{R}^{N_{\text{obs}}N_{t}}\) whose mean \(\mu_{\mathbf{v}}\) and covariance \(\boldsymbol{\Sigma}_{\mathbf{v}}\) are estimated from the AOD background data \(\{\rho_{b}^{i,j}\}_{i=1,j=1}^{N_{e},N_{s}}\).
Then for initial \\(SO_{2}\\) hyperparameters \\(\\mathbf{r}_{0}\\) and reduced zonal wind coordinates \\(\\mathbf{w}=(\\mathbf{w}_{0},\\mathbf{w}_{1},\\ldots,\\mathbf{w}_{N_{t}-1})\\), we express the observed noisy AOD, \\(\\mathbf{d}\\), as
\\[\\mathbf{d}=\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})+\\mathbf{v}+\\epsilon.\\]
Assuming that the AOD background and observation noise are independent, it follows that \(\mathbf{d}-\mathcal{A}(\mathbf{r}_{0},\mathbf{w})\) is normally distributed with mean \(\mu_{\mathbf{v}}\) and covariance \(\boldsymbol{\Sigma}_{\epsilon}+\boldsymbol{\Sigma}_{\mathbf{v}}\).
Given a prior for \\(\\mathbf{r}_{0}\\) and \\(\\mathbf{w}\\), we apply Bayes rule to arrive at a joint posterior for \\((\\mathbf{r}_{0},\\mathbf{w})\\). The zonal wind \\(\\mathbf{w}\\) is uncertain due to the earth system's internal variability, but is not a quantity of interest to estimate. Rather we seek to account for zonal wind uncertainty in our estimate of \\(\\mathbf{r}_{0}\\). Accordingly, we take a Bayesian approximation error (BAE) approach [17] by marginalizing over \\(\\mathbf{w}\\). Specifically, we consider
\\[\\mathbf{d} =\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})+\\mathbf{v}+\\epsilon\\] \\[=\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})]+ (\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})-\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A} (\\mathbf{r}_{0},\\mathbf{w})])+\\mathbf{v}+\\epsilon.\\]
We make the simplifying assumptions that \\(\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})-\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A}( \\mathbf{r}_{0},\\mathbf{w})]\\) follows a Gaussian distribution, and that its covariance does not depend on \\(\\mathbf{r}_{0}\\). The latter assumption is reasonable as the variability due to atmospheric winds is known to be chaotic and hence will have a weak dependence on the aerosol mass. The Gaussian assumption does not have theoretical justification. However, this assumption is common in the BAE literature and has been observed to be reasonable in many applications. It results in a convenient expression for the modified likelihood which is both computable and interpretable.
We compute samples of \\(\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})-\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A}( \\mathbf{r}_{0},\\mathbf{w})]\\) by drawing prior samples from \\((\\mathbf{r}_{0},\\mathbf{w})\\) and propagating them through \\(\\mathcal{A}\\). Ignoring its dependence on \\(\\mathbf{r}_{0}\\), we fit a Gaussian distribution to these samples. This yields a mean \\(\\mu_{\\text{BAE}}\\) and covariance \\(\\boldsymbol{\\Sigma}_{\\text{BAE}}\\) which models variability due to the zonal winds. Then we have that
\\[\\eta=(\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})-\\mathbb{E}_{\\mathbf{w}}[ \\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})])+\
u+\\epsilon\\]
is Gaussian with mean \\(\\mu=\\mu_{\
u}+\\mu_{\\text{BAE}}\\) and covariance \\(\\boldsymbol{\\Sigma}=\\boldsymbol{\\Sigma}_{\\epsilon}+\\boldsymbol{\\Sigma}_{\
u}+ \\boldsymbol{\\Sigma}_{\\text{BAE}}\\). Hence,
\\[\\mathbf{d}=\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})]+\\eta\\]
and we express the posterior PDF for \\(\\mathbf{r}_{0}\\) as
\\[\\pi_{\\text{post}}(\\mathbf{r}_{0}|\\mathbf{d})\\propto\\pi_{\\text{like}}( \\mathbf{d}|\\mathbf{r}_{0})\\pi_{\\text{prior}}(\\mathbf{r}_{0}) \\tag{12}\\]
where
\\[\\log(\\pi_{\\text{like}}(\\mathbf{d}|\\mathbf{r}_{0}))=-\\frac{1}{2}(\\mathbb{E}_{ \\mathbf{w}}[\\mathcal{A}(\\mathbf{r}_{0},\\mathbf{w})]+\\mu-\\mathbf{d})^{T} \\boldsymbol{\\Sigma}^{-1}(\\mathbb{E}_{\\mathbf{w}}[\\mathcal{A}(\\mathbf{r}_{0}, \\mathbf{w})]+\\mu-\\mathbf{d}) \\tag{13}\\]
and \\(\\pi_{\\text{prior}}\\) is the prior PDF for \\(\\mathbf{r}_{0}\\).
We emphasize the following points regarding (13). First, the background AOD is accounted for in \(\mu\) and \(\boldsymbol{\Sigma}\) due to the source tag separating the volcanic and background aerosols. This allows us to avoid modeling dynamics of the small scale processes such as industrial emissions or dust storms while still accounting for them in the inverse problem. Second, internal climate variability that manifests itself in wind uncertainty is accommodated in (13) through: (i) computing the average observable AOD by taking an expectation over the zonal winds \(\mathbf{w}\), and (ii) weighting the data misfit with a covariance \(\boldsymbol{\Sigma}\) whose magnitude depends on the magnitude of AOD variability due to wind uncertainty (measured in \(\boldsymbol{\Sigma}_{\text{BAE}}\)). Our assumptions about the statistics of \(\mathcal{A}(\mathbf{r}_{0},\mathbf{w})-\mathbb{E}_{\mathbf{w}}[\mathcal{A}(\mathbf{r}_{0},\mathbf{w})]\) enabled this convenient expression for the likelihood in terms of a bias correction \(\mu=\mu_{\mathbf{v}}+\mu_{\text{BAE}}\) and uncertainty weighting \(\boldsymbol{\Sigma}=\boldsymbol{\Sigma}_{\epsilon}+\boldsymbol{\Sigma}_{\mathbf{v}}+\boldsymbol{\Sigma}_{\text{BAE}}\).
Figure 7 summarizes how the models are combined in our inversion framework. The leftmost box in Figure 7 corresponds to the initial \\(SO_{2}\\) RBF coordinates which are being estimated. For a given initial \\(SO_{2}\\) plume, the figure shows how the models are composed to propagate \\(N_{w}\\) reduced wind samples to produce \\(N_{w}\\) time series of AOD predictions, which are averaged in the rightmost box of Figure 7. The inverse problem seeks an initial \\(SO_{2}\\) (leftmost in the figure) such that the average AOD prediction (rightmost in the figure) matches the observed AOD data, with a correction for background AOD and weighting to account for noise and variability.
## 7 Numerical Results
We demonstrate our proposed framework using the following datasets. The training data consists of \(N_{e}=7\) ensemble members. For each ensemble member, \(N_{s}=5\) simulations were generated which correspond to \(SO_{2}\) injection source magnitudes of \(3,5,7,13,\) and \(15\) teragrams (Tg). This gives a total of \(N_{e}N_{s}=35\) simulations used in the training set, where each simulation includes the variables in (2). Five simulations from an eighth ensemble member with source magnitudes of \(3,5,7,13,\) and \(15\) Tg are held out as a validation set. The test set consists of two simulations from a ninth and a tenth ensemble member with source magnitude 10 Tg, similar to the Mount Pinatubo eruption magnitude. This setup ensures that the numerical results are derived from ensembles and source magnitudes that were not included in the training set, as summarized in Table 1. Our results consider daily data (a time resolution of 24 hours) and a time horizon of 10 days, i.e. \(N_{t}=9\).
The observed data informing the inverse problem is the sum of volcanic and background AOD. Figure 8 highlights the benefit of using source tagged data for training the time evolution operator by comparing the volcanic source tagged AOD with the AOD due to background aerosols. Each panel in the figure displays 7 curves corresponding to the \(N_{e}=7\) training ensembles with a fixed source magnitude. The top row displays the source tagged AOD corresponding to the volcanic aerosols, \(\rho_{v}\), and the bottom row displays the total AOD (volcanic and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Injection Magnitude** & **Ens 1** & **Ens 2** & **Ens 3** & **Ens 4** & **Ens 5** & **Ens 6** & **Ens 7** & **Ens 8** & **Ens 9** & **Ens 10** \\ \hline
3 Tg & Train & Train & Train & Train & Train & Train & Train & Validation & & \\ \hline
5 Tg & Train & Train & Train & Train & Train & Train & Train & Validation & & \\ \hline
7 Tg & Train & Train & Train & Train & Train & Train & Train & Validation & & \\ \hline
10 Tg & & & & & & & & & Test & Test \\ \hline
13 Tg & Train & Train & Train & Train & Train & Train & Train & Validation & & \\ \hline
15 Tg & Train & Train & Train & Train & Train & Train & Train & Validation & & \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the simulations included in the training, validation, and test sets.
background), i.e. \\(\\rho_{v}+\\rho_{b}\\). Distinguishing the volcanic AOD from the background noise becomes harder over time due to both the variability across ensembles as demonstrated by comparing days \\(t_{0}\\) and \\(t_{9}\\) in the top row, and the dispersion of the plume over time which reduces the signal to noise ratio, as demonstrated by comparing days \\(t_{5}\\) and \\(t_{9}\\) in the bottom row.
In the subsections below, we present numerical results mirroring the progression of the approach in Sections 4, 5, and 6.
### Spatial Dimension Reduction
#### 7.1.1 Aerosol Dimension Reduction
With a time horizon of 10 days (\(N_{t}=9\)), it is sufficient to use a single radial basis function, i.e. \(N_{rbf}=1\), to represent the plume (see Figure 8). By day \(t_{9}\), the plume is asymmetric and heavy tailed, so there is noticeable error in using a single Gaussian basis function. However, we justify our choice of not including additional RBFs by noting the error is on a comparable order of magnitude as the background aerosol and hence is negligible in the inverse problem. By using this low-dimensional embedding of the spatial field into a three dimensional space, we ensure that the time evolution operator can be trained effectively even with a small dataset of only 35 time series.
Figure 8: Top: source tagged volcanic AOD; bottom: volcanic plus background AOD. From left to right: days \\(1,5,\\) and \\(9\\). Each panel displays 7 ensemble members with a fixed injection magnitude.
We fit the RBF using a block-coordinate-descent optimization algorithm which alternates between a gradient descent step to update the center and shape hyperparameters and a linear least squares solve to update the coefficient hyperparameter. Figure 9 shows the RBF fit for both the volcanic \(SO_{2}\) (left) and sulfate (right) variables. We observe the trend that the \(SO_{2}\) mass decreases as the sulfate mass increases (as functions of time). The maximum sulfate value initially increases and subsequently decreases as the plume diffuses in space. The \(SO_{2}\) and sulfate RBFs have nearly identical center and shape hyperparameters. The center is monotonically decreasing (moving right to left and then through the boundary at 0\({}^{\circ}\) longitude, the Greenwich prime meridian) and the shape hyperparameter is monotonically decreasing as the plume diffuses in space. The nearly identical shape and center hyperparameters motivate our flow map model, which evolves the \(SO_{2}\) and estimates the sulfate using the same center and shape hyperparameters as the \(SO_{2}\) RBF representation.
Due to the inability of the Gaussian RBF basis function to capture heavy tails in the plume, the conservation of mass equation (8) is not satisfied by the RBF approximation. Figure 10 shows the relative loss in total source tagged \\(SO_{2}\\) mass as a function of time in the raw data (left panel) and the RBF approximation (right panel). Specifically, letting \\(S:[0,T]\\rightarrow\\mathbb{R}\\) denote the total mass of source tagged volcanic \\(SO_{2}\\) in the atmosphere as a function of time, Figure 10 displays \\(S(t)/S(0)\\) for all simulations in the training set. The simulation curves in the left panel correspond to the constant function \\(S(t)/S(0)=1\\) (with some small numerical noise) since mass is conserved. In contrast, the curves in the right panel disperse since the RBF approximation does not preserve mass. We observe a loss in mass between 5\\(\\%\\) and 15\\(\\%\\) for the majority of the time series. This indicates that additional RBF basis functions may be required to improve the accuracy of the reduced order model we seek to learn. However, in practice we use a single RBF basis function as the subsequent results are sufficient in spite of this error in mass conservation. We posit that this lack of mass conservation is acceptable for two reasons. First, the network architecture enforces conservation of mass and its loss function incorporates both \\(SO_{2}\\) and sulfate data, so the network cannot overfit to the loss of RBF mass in the training data. Second, the inverse problem is most informed by the earlier time steps in which the loss of mass is smaller than the background aerosol noise.
Figure 9: RBF fit for volcanic \\(SO_{2}\\) (left) and sulfate (right) in gigagrams (Gg). The solid black lines show the raw data and the colored broken lines show the RBF fit, with a different color for each time step. Note that \\(t_{0}\\) corresponds to the blue line, and the plume advects westward.
#### 7.1.2 Wind Dimension Reduction
To embed the zonal wind data, we select threshold \\(\\tau_{SO_{2}}=100\\) grams to truncate the wind data to the region of the atmosphere where the mass of \\(SO_{2}\\) is greater than \\(\\tau_{SO_{2}}\\) as defined in (6). Weighting by the RBF basis function at each time step (7), we generate probability density functions (PDFs) corresponding to the distribution of the zonal winds localized about the plume at each time step. This gives a total of \\(N_{e}N_{s}N_{t}=(7)(5)(10)=350\\) PDFs corresponding to the zonal winds at each time step in each training set simulation. Snapshots of these zonal wind PDFs are shown in Figure 11 with the left, center, and right panels corresponding to days \\(1,5\\), and \\(9\\). Each panel has 35 PDFs corresponding to the \\(N_{e}N_{s}=35\\) training set simulations. We observe that the width of the PDFs, i.e. the level of uncertainty in the zonal wind, is increasing over time. This is a result of the ensembles drifting further apart as time evolves.
We evaluate these 350 PDFs on a grid of 1000 zonal wind points to form a 1000 \(\times\) 350 matrix. Principal component analysis is applied to this matrix after centering by subtracting the mean from each column. The left panel of Figure 12 displays the singular values of the centered
Figure 11: Time snapshots of zonal wind PDFs for days 1 (left), 5 (center), and 9 (right). Each panel has 35 PDFs corresponding to the training set simulations. This 1D representation of the zonal wind is computed using samples of the 3D wind field weighted by the magnitude of \\(SO_{2}\\) present in the atmosphere.
Figure 10: Relative loss in total mass of sulfur as a function of time in the raw data (left panel) and the RBF approximation (right panel). Each panel has 35 curves corresponding to the training set simulations.
data matrix. The right panel of the figure shows the four leading singular vectors, known as the principal components (PCs). We choose a truncation rank of \(N_{w}=4\) based upon the spectral characteristics and properties of the principal components displayed in Figure 12. Note that there is a nontrivial truncation as the 5\({}^{th}\) singular value is roughly \(1/4\) of the 1\({}^{st}\) singular value's magnitude. However, in our context, this loss of information is helpful as it corresponds to removing higher frequency wind variations so that the reduced wind coordinates capture the larger scale variations. In APPENDIX A, we further justify this choice by analyzing the validation set prediction error for models with various truncation ranks \(N_{w}\).
### Time Evolution Operator
After dimension reduction, we have 35 time series of \\(SO_{2}\\) RBF coordinates in \\(\\mathbb{R}^{3}\\) and zonal wind principal component coordinates in \\(\\mathbb{R}^{4}\\). We train a neural network time evolution operator \\(\\mathcal{N}:\\mathbb{R}^{3}\\times\\mathbb{R}^{4}\\rightarrow\\mathbb{R}^{3}\\) with the architecture
\[\mathcal{N}(x,a,c,\mathbf{w})=T^{-1}\biggl(\sigma\biggl(\mathbf{L}\biggl(\begin{bmatrix}T(x,a,c)\\ \mathbf{w}\end{bmatrix}\biggr)\biggr)\biggr) \tag{14}\]
where \\((x,a,c)\\in\\mathbb{R}^{3}\\) are the center, scale, and coefficient for the \\(SO_{2}\\) RBF approximation, \\(\\mathbf{w}_{k}\\in\\mathbb{R}^{4}\\) are the zonal wind principal component coordinates, \\(\\mathbf{L}\\) represents a single fully connected linear layer, \\(\\sigma:\\mathbb{R}^{3}\\rightarrow\\mathbb{R}^{3}\\) is the element-wise activation function \\(\\sigma_{i}(y)=\\text{min}\\{0,y\\}\\), \\(i=1,2,3\\), and \\(T:\\mathbb{R}^{3}\\rightarrow\\mathbb{R}^{3}\\) is the coordinate transformation
\\[T(x,a,c)=\\Big{(}x,a,\\frac{c}{a}\\Big{)}.\\]
The architecture defined in (14) ensures that the flow map predictions for unseen data preserve the known physical properties of the aerosol transport and chemistry. We accomplish this by imposing that the total mass of \(SO_{2}\) and the RBF center and shape hyperparameters are monotonically decreasing over time, as this corresponds to the physical processes of advection and diffusion. We impose monotonicity in the network architecture using the activation function \(\sigma\). The coordinate transformation \(T\) maps the RBF coefficient \(c\) to the quotient \(\frac{c}{a}\), a scalar multiple of the total mass of \(SO_{2}\) that arises from integration of the radial basis function. The inverse coordinate transformation, \(T^{-1}\), is applied to the network output.
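A TensorFlow sketch of (14) and the forward Euler step (9) follows; it is not the exact training code. Converting the increment in the mass coordinate back to a coefficient increment via the current shape value is one plausible reading of the inverse transform and is flagged as an assumption in the comments.

```python
import tensorflow as tf

class FlowIncrement(tf.keras.Model):
    """Sketch of the increment model N in (14): a single linear layer acting
    on transformed coordinates, with the nonpositive activation min(0, y)."""

    def __init__(self):
        super().__init__()
        self.linear = tf.keras.layers.Dense(3)

    def call(self, r, w):
        x, a, c = r[..., 0:1], r[..., 1:2], r[..., 2:3]
        t_r = tf.concat([x, a, c / a], axis=-1)          # transform T: (x, a, c/a)
        y = tf.minimum(self.linear(tf.concat([t_r, w], axis=-1)), 0.0)
        # Map the increment in the mass coordinate c/a back to a coefficient
        # increment using the current shape value; this linearized inverse
        # transform is an assumption of the sketch.
        return tf.concat([y[..., 0:1], y[..., 1:2], y[..., 2:3] * a], axis=-1)

def euler_step(model, r, w, dt=1.0):
    """Forward Euler flow map (9): r_{k+1} = r_k + dt * N(r_k, w_k)."""
    return r + dt * model(r, w)
```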
Figure 12: Left: singular values of the zonal wind PDF data matrix. Right: leading principal components (PCs) of the zonal wind PDF data matrix.
The choice of a single linear layer \(\mathbf{L}\) was made by an analysis wherein we consider architectures of the form (14), as well as deeper architectures with additional hidden layers. Instead of determining the optimal architecture from a validation set prediction error, we considered the prediction of the network on a larger set of inputs and measured how well it captures the \(SO_{2}\) to sulfate reaction rate. Rather than being confined to a small validation set, this metric explores the model predictions over the full range of inputs that may be seen in the inverse problem. This analysis, which led to our choice of a single linear layer, is detailed in APPENDIX B.
The dense feed forward neural network is trained using batch gradient descent, where each batch includes the training simulations from 2-3 ensembles over all source magnitudes. Figure 13 displays the validation set prediction of the volcanic \\(SO_{2}\\) (left) and sulfate (right) variables. The raw data is given by the solid black lines and the prediction is given by the colored broken lines, with the colors distinguishing the time steps.
Given the time evolution operator (14) to predict the \(SO_{2}\) trajectory, we compute the sulfate plume at each time step by setting the sulfate RBF center and shape hyperparameters equal to those of the \(SO_{2}\); the coefficient is computed via the conservation of mass equation (8).
To map the sulfate RBF coordinates to AOD RBF coordinates, a separate linear model \\(\\mathbf{L}_{\\text{AOD}}\\in\\mathbb{R}^{3\\times 3}\\) is fit to the training data using ordinary least squares. We were able to deploy a simple model for this step as the complexity of the problem was significantly reduced thanks to source tagging and the use of RBF hyperparameters rather than the full spatial dimensions.
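Fitting \(\mathbf{L}_{\text{AOD}}\) reduces to an ordinary least squares problem over the training pairs of RBF coordinates, sketched below; rows of S and Q are assumed to hold the sulfate and AOD coordinates for one time step each.

```python
import numpy as np

def fit_aod_map(S, Q):
    """Ordinary least squares fit of the 3x3 map from sulfate RBF coordinates
    to AOD RBF coordinates; rows of S and Q are per-time-step coordinates."""
    X, *_ = np.linalg.lstsq(S, Q, rcond=None)
    return X.T  # q ~= L_aod @ s for a single coordinate vector s
```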
### Inverse Problem Formulation
We specify a Gaussian prior, \\(\\pi_{\\text{prior}}(\\mathbf{r}_{0})\\), on the initial \\(SO_{2}\\) RBF coordinates. The left panel of Figure 14 displays samples from the prior mapped from the RBF coordinate space back onto the physical space. We intentionally chose a prior with large variance to evaluate how much is learned from the data. The negative values are a result of the Gaussian assumption and large variance which results in some prior samples being physically infeasible.
We assume a mean zero Gaussian noise model whose covariance is a multiple of the identity matrix, \(\sigma_{\text{noise}}^{2}\mathbf{I}\), where \(\sigma_{\text{noise}}=0.01\) is approximately the magnitude of AOD measurement noise. The background AOD statistics \(\mu_{\mathbf{v}}\) and \(\boldsymbol{\Sigma}_{\mathbf{v}}\) are computed from the 35 training set time series
Figure 13: Validation set prediction for volcanic \(SO_{2}\) (left) and sulfate (right) in gigagrams (Gg). The solid black lines show the raw data and the colored broken lines show the prediction, with a different color for each time step. Note that \(t_{0}\) corresponds to the blue line, and the plume advects westward.
for the background AOD variable. The BAE correction mean and covariance \\(\\mu_{\\text{BAE}}\\) and \\(\\mathbf{\\Sigma}_{\\text{BAE}}\\) are also determined using the 35 simulations in the training set. Specifically, using our learned time evolution operator, we forward propagate the \\(SO_{2}\\) prior mean using the 35 wind field samples and compute the empirical mean and covariance of the samples.
We construct observational data by extracting the AOD (volcanic and background) from the test set and contaminating the data with mean zero additive Gaussian noise whose standard deviation is 0.012 (chosen to be comparable but not equal to the noise model in the likelihood function). Numerical optimization is used to compute the maximum a posteriori probability (MAP) point of the posterior (12). To perform this optimization, we use efficient derivative-based optimization algorithms in the Rapid Optimization Library [46]. Derivatives are computed by implementing the time evolution operator (9) in TensorFlow and leveraging its algorithmic differentiation capability. The details of this approach are similar to those described in [14].
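As a simplified stand-in for the Rapid Optimization Library, the sketch below performs a derivative-based MAP search with TensorFlow's algorithmic differentiation and the Adam optimizer; the negative log posterior is assumed to be implemented as a differentiable function of the RBF coordinate vector.

```python
import tensorflow as tf

def map_estimate(neg_log_post, r0_init, steps=500, lr=1e-2):
    """Minimize the negative log posterior with Adam, relying on TensorFlow's
    algorithmic differentiation capability."""
    r0 = tf.Variable(r0_init, dtype=tf.float32)
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = neg_log_post(r0)
        grads = tape.gradient(loss, [r0])
        opt.apply_gradients(zip(grads, [r0]))
    return r0.numpy()
```

In practice this would be restarted from several random initializations, as described below, to mitigate the non-convexity of the objective.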
Approximate posterior samples are computed using a Laplace approximation of the posterior. That is, we take a Gaussian approximation of the posterior with mean given by the MAP point and covariance given by the inverse Hessian of the negative log posterior at the MAP point. The center and right panels of Figure 14 display the posterior MAP point for the two test simulations, approximate samples (given by the grey shading), and the ground truth sources from the test set. We note that the optimization problem to determine the MAP point is non-convex and hence possesses multiple local minima. We ran the optimization multiple times from different random initializations and chose the solution that attained the greatest likelihood value.
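A minimal sketch of drawing approximate posterior samples from the Laplace approximation, assuming the Hessian of the negative log posterior at the MAP point is available (e.g. via algorithmic differentiation):

```python
import numpy as np

def laplace_samples(r_map, hessian, n_samples=100, seed=0):
    """Approximate posterior samples: Gaussian centered at the MAP point with
    covariance equal to the inverse Hessian of the negative log posterior."""
    rng = np.random.default_rng(seed)
    cov = np.linalg.inv(hessian)
    return rng.multivariate_normal(r_map, cov, size=n_samples)
```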
Comparing the two test sets, we observe greater accuracy estimating the \(SO_{2}\) source for ensemble 9 (center panel of Figure 14) in comparison to ensemble 10 (right panel of Figure 14). The \(\ell_{2}\) relative error between the MAP point and test data is 18\(\%\) for ensemble 9 and 32\(\%\) for ensemble 10. Both test sets used the same \(SO_{2}\) injection but differ in their wind fields as a result of ensemble variability. To understand the wind variability, we compare the reduced wind coordinates \(\mathbf{w}\in\mathbb{R}^{40}\) from ensembles 9 and 10 with the reduced wind coordinates from the training set. Note that the test set wind data is never used in our analysis as we do not assume precise knowledge of the stratospheric winds in observations, but it is available for this error analysis since the observational data was synthesized from simulations. To measure the distance between the training and test set reduced winds, we compute the Mahalanobis distance [29] between the training set reduced wind coordinates \(\{\mathbf{w}^{i}\}_{i=1}^{35}\subset\mathbb{R}^{40}\) and each test set reduced wind coordinate, \(\mathbf{w}^{ens09},\mathbf{w}^{ens10}\in\mathbb{R}^{40}\). Specifically, we compute the empirical mean \(\mu_{\mathbf{w}}\) and covariance \(\boldsymbol{\Sigma}_{\mathbf{w}}\) from \(\{\mathbf{w}^{i}\}_{i=1}^{35}\) and the Mahalanobis distance
\\[d^{ensX}=\\sqrt{(\\mathbf{w}^{ensX}-\\mu_{\\mathbf{w}})\\mathbf{\\Sigma}_{\\mathbf{w}}^{ \\dagger}(\\mathbf{w}^{ensX}-\\mu_{\\mathbf{w}})},\\]
for \\(ens09\\) and \\(ens10\\), where the distance from \\(\\mu_{\\mathbf{w}}\\) is weighted by the pseudo-inverse of the empirical covariance since \\(\\mathbf{\\Sigma}_{\\mathbf{w}}\\) has rank 34 (one less than the number of training set simulations). We have \\(d^{ens09}=1530\\) and \\(d^{ens10}=6372\\). In relative terms, ensemble 10 is approximate 4 times further away from the training data then ensemble 9. This is consistent with the MAP point errors where our \\(SO_{2}\\) estimation error for ensemble 10 is nearly double the error for ensemble 9. This confirms the unsurprising fact that having a training set close to the actual stratospheric winds from the observational period is crucial to achieve accurate aerosol estimates.
Figure 15 displays the \(SO_{2}\) (top row), sulfate (middle row), and AOD (bottom row) predictions of the reduced model for Ensemble 9, alongside the test data which it is seeking to match. Three time snapshots, day 1 (left), day 5 (middle), and day 9 (right), are shown to illustrate the predictions' time dependence. In each plot, there are 35 grey curves corresponding to the model
prediction using the wind fields from the 35 training set simulations. By maximizing the likelihood (13), the average of the AOD predictions closely matches the observed data. We observe how uncertainty increases as a function of time. This is a result of the zonal wind variability increasing over time. The \\(SO_{2}\\) and sulfate fields are less noisy compared to the AOD since they correspond to volcanic source tagged data, while the AOD fields include the background aerosol and measurement noise.
## 8 Conclusion
Estimating stratospheric aerosol sources is challenging. Internal variability in the climate system makes it difficult to learn a model for plume evolution since the operator must account for both variations in the source injection and the atmospheric winds. This challenge is compounded by the presence of background aerosols which dilute or even mask the presence of an aerosol source of interest. Furthermore, computational complexity limits the number of simulations which can be performed and reduced order modeling is difficult due to the advective nature of the problem. Noise and variability in the climate system are traditionally managed by temporal and spatial averaging. However, such averaging makes it more difficult to distinguish the source from background and hence is not viable in our context. We have addressed the challenges of noise and variability through a combination of techniques. Limited variability ensembles reduce the uncertainty due to internal atmospheric variability so that an accurate time evolution operator may be learned from limited data. By using source tagging, our simulations disentangle the aerosol source of interest from other background sources, thus facilitating clean data to learn the aerosol plume evolution while capturing background aerosol statistics that may be incorporated in the inverse problem. Our use of localizing nonlinear dimension reduction techniques overcame the challenge of modeling advective phenomena, thus making it possible to learn a reduced order model efficiently. To facilitate model reliability when used within an inversion framework, we designed an operator architecture that strictly enforces first principles chemistry such as conservation of molar mass and irreversibility of the \(SO_{2}\) to sulfate reaction. We also developed chemistry-based validation metrics using reaction rate statistics over large sample sets to improve generalizability. Our Bayesian approximation error approach systematically accounts for both internal climate variability and background aerosol uncertainty to provide a reliable inversion framework which can accommodate observational data with unseen wind fields and background aerosols that are within the training data distribution. To the best of our knowledge, this combination of techniques is a first-of-its-kind approach to enable source inversion which is robust to
Figure 14: Inverse optimization results on two test ensemble simulations. Left: prior samples of the initial \\(SO_{2}\\) plume. Center (ensemble 9) and Right (ensemble 10): the MAP point and approximate posterior samples of the initial \\(SO_{2}\\) plume (in grey), with the test data overlaid to demonstrate accuracy.
both wind variability and background aerosol noise.
This article proposes a comprehensive framework to enable stratospheric aerosol source estimation with associated uncertainty. Although each aspect of our framework was designed based on the characteristics of stratospheric aerosol transport, there are potentially many other areas where it may be impactful. Our framework addresses challenges posed by internal variability and global spatial scales inherent in many problems arising from the earth sciences. Since we focused on AOD observational data, our approach is extensible to a variety of other chemical species which may be relevant in global atmospheric monitoring, climate attribution, or geo-engineering.
Our results used an observational dataset synthesized from a test set simulation of the eruption of Mount Pinatubo. Since this was a large eruption, the signal due to the volcanic aerosols rose significantly above the background aerosols. For eruptions of smaller magnitude, our framework is applicable, but a longer time horizon may be needed to attain sufficient information from the AOD measurements. To extend the time horizon, it is necessary to take more RBF basis functions and thus introduce additional complexity in the RBF fitting and time evolution operator learning. Similarly, extending from 1D longitudinal data to 2D spatial data will require
Figure 15: Predictions of \\(SO_{2}\\) (top row), sulfate (middle row), and AOD (bottom row), alongside the Ensemble 9 test data (given by red broken curves), at day 1 (left), day 5 (middle), and day 9 (right). There are 35 grey curves in each panel corresponding to the model prediction from the 35 training ensemble zonal winds.
additional RBF basis functions which have more hyperparameters. Our framework is extensible, and future research should explore finer RBF resolution and use in two spatial dimensions. To achieve this, the enforcement of additional constraints such as monotonicity of RBF hyperparameters (as a function of time) will be crucial to ensure identifiability.
Our uncertainty estimates are based on the Laplace approximation of the Bayesian posterior. Although this is pragmatic and common in practice, better uncertainty quantification is possible through the use of Markov Chain Monte Carlo methods. Future research should utilize the computational efficiency of the learned operators and availability of derivative information to enable more advanced sampling algorithms. This has the potential to realize full characterization of uncertainty rather than the Gaussian approximation used in this article.
###### Acknowledgements.
The authors thank Hailong Wang and Yang Yang for sharing their source tagging code which provided a foundation for the implementation used in this article. Data from the full E3SMv2-SPA simulation campaign including pre-industrial control, historical, and Mt. Pinatubo ensembles will be hosted at Sandia National Laboratories with location and download instructions announced on [https://www.sandia.gov/cldera/e3sm-simulations-data/](https://www.sandia.gov/cldera/e3sm-simulations-data/) when available. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science User Facility using NERSC award BER-ERCAP0026535. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This written work is authored by employees of NTESS. The employees, not NTESS, own the right, title, and interest in and to the written work and is responsible for its contents. Any subjective views or opinions that might be expressed in the written work do not necessarily represent the views of the U.S. Government. The publisher acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this written work or allow others to do so, for U.S. Government purposes. The DOE will provide public access to results of federally sponsored research in accordance with the DOE Public Access Plan. SAND2024-11582O.
## Appendix A Selection of Zonal Wind Rank
This section describes our analysis to optimize the number of PCA modes for the zonal wind dimension reduction. We preselected a set of candidate ranks \(\{2,4,5,7\}\) based on significant jumps in the singular value magnitude seen in Figure 12. We evaluate the number of PCA modes as a function of the time evolution operator prediction accuracy, measured using the relative \(\ell_{2}\) prediction error for both \(SO_{2}\) and sulfate over all time points in all 5 validation set simulations. We fixed the neural network to consist of a single linear layer as defined in (14). The results are shown in Figure A1. For each datapoint in the figure, we ran 5 instances of the neural network configuration with random initialization of the network weights, and display the average prediction error in the figure.
The results match our intuition: there is a tradeoff between accurate representation of the wind (2 modes appear insufficient) and removing higher frequency wind variations that have minimal impact on the plume transport (7 modes may not sufficiently filter the data). Based on the results, we truncated the reduced dimension zonal wind data to 4 modes.
## Appendix B Network Architecture Analysis
This appendix considers analysis of the network architecture in terms of network depth, network width, and the learning rate schedule. A deeper network is more expressive and consequently deep networks have become common in many applications. However, the depth must be commensurate with the available training data, as adding hidden layers results in additional model parameters which must be trained. If the training data is insufficient for a given depth and width, the resulting network will likely perform well on the training set, but not generalize well. We note that similar concerns regarding overparameterization motivated our choice of fully connected networks over larger, potentially more expressive operator networks.
Evaluation based upon a small validation set, as in this article where the training and validation data comes from E3SM simulations, may be insufficient to detect poor generalization. Since we are training the network in the service of constraining an inverse problem, it is crucial that the network predictions remain physically plausible for samples outside of the training set. Our validation set cannot cover the full range of inputs relevant for the inverse problem. Hence we will explore metrics based on physical principles which can be computed for a larger set of input samples.
Since the inverse problem seeks to estimate the initial \\(SO_{2}\\) using observations of the AOD evolution, the rate at which \\(SO_{2}\\) transforms into sulfate is crucial to inform the source magnitude estimation. This led us to use the \\(SO_{2}\\) depletion rate as a metric to assess model quality and guide our choice of the number of hidden layers. To measure the depletion rate from daily data, we consider the classical linear model for a chemical reaction
\\[\\frac{d\\upalpha}{dt}(t)=\\uplambda\\upalpha(t), \\tag{1}\\]
where \\(\\upalpha(t)\\) is the mass of the chemical species and \\(\\uplambda\\in\\mathbb{R}\\) is the reaction rate. Since our data is at a daily time resolution, we approximate the time derivative with the difference of \\(\\upalpha\\) evaluatedat successive days. Letting \\(\\alpha_{n}\\in\\mathbb{R}\\) denote the total mass of \\(SO_{2}\\) at day \\(t_{n}\\), approximating \\(\\frac{d\\alpha}{dt}(t)=(\\alpha_{n+1}-\\alpha_{n})/1\\), and solving for the reaction rate at time \\(t_{n}\\), we have
\\[\\lambda_{n}=\\frac{\\alpha_{n+1}-\\alpha_{n}}{\\alpha_{n}}. \\tag{10}\\]
The linear reaction model (B1) fails to fully capture the nonlinear evolution of \(SO_{2}\). However, (B2) provides a measure of the reaction rate locally at a given time step. Computing (B2) at each time step gives an estimate of the reaction rate which, as demonstrated below, is sufficient to identify nonphysical behavior of models which generalize poorly 4.
Footnote 4: We note that the assumption of a fairly constant linear reaction rate is reasonable for the particular E3SM model used in this article that is without full chemistry and assumes infinite hydroxyl radicals.
Computing the reaction rate at each time step (excluding the final since we cannot look ahead to estimate the time derivative) in the training set gives a baseline estimate of the range of reaction rates which are physically plausible. To assess the quality of a trained network, we generate 1000 initial \(SO_{2}\) RBF coordinates and time series of zonal wind coordinates. The sampling distribution for the \(SO_{2}\) RBF coordinates was defined by computing the range of initial \(SO_{2}\) RBF coordinates over the 5 validation set simulations (which had eruption magnitudes ranging from 3 Tg to 15 Tg) and defining a uniform distribution over an interval whose endpoints correspond to widening the validation set range by 10\(\%\) on both the minimum and maximum values. We sample the time series of zonal winds using a uniform distribution defined over the range of the training set zonal winds. We use the set of training zonal winds rather than the validation set because, in the inverse problem, the training data zonal winds define the samples used in the likelihood evaluation. For a given model, we generate 1000 trajectories of \(SO_{2}\) and compute the reaction rate (B2) for each time step and each sample. This yields 1000 time series of reaction rates based on data predicted by the network. We then compare the range of reaction rates in the training data (which represents a physically plausible baseline) with the range of reaction rates computed from the network predictions.
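A sketch of this rate diagnostic: compute (B2) along each predicted mass trajectory and form the min/max envelope for comparison against the training set range. Array shapes and function names are illustrative.

```python
import numpy as np

def reaction_rates(so2_mass):
    """Per-step reaction rate (B2) from a time series of total SO2 mass."""
    a = np.asarray(so2_mass, dtype=float)
    return (a[1:] - a[:-1]) / a[:-1]

def rate_envelope(trajectories):
    """Min/max reaction-rate envelope over a set of predicted trajectories,
    for comparison with the physically plausible training-set range."""
    rates = np.stack([reaction_rates(traj) for traj in trajectories])
    return rates.min(axis=0), rates.max(axis=0)
```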
We considered networks with 0, 1
summarize the spread of reaction rates over the sample set by shading the region bound by their minimum and maximum values at each time step. The 0 hidden layer networks have a wider band compared to the training set and a slightly shifted mean reaction rate. The 1 and 2 hidden layer networks have much larger bands, particularly at the initial and final time steps.
The analysis presented in this appendix led to our choice of a 0 hidden layer network with the minimum reaction rate variability, as was presented in Section 7. We note that while the network itself is linear, the full model is autoregressive with added nonlinear layers, as described in Section 7.
## References
* [1] Bhattacharya, K., Hosseini, B., Kovachki, N.B., and Stuart, A.M., Model Reduction and Neural Networks for Parametric PDEs, _Journal of Computational Mathematics_, vol. **7**, pp. 121-157, 2021.
* [2] L. Biegler, G. Biros, O. Ghattas, M. Heinkenschloss, D. Keyes, B. Mallick, Y. Marzouk, L. Tenorio, B. van Bloemen Waanders, and K. Willcox, Eds., _Large-Scale Inverse Problems and Quantification of Uncertainty_, John Wiley and Sons, 2011.
* [3] Brown, H.Y., Wagman, B., Bull, D., Peterson, K., Hillman, B., Liu, X., Ke, Z., and Lin, L., Validating a Microphysical Prognostic Stratospheric Aerosol Implementation in E3SMv2 using the Mount Pinatubo Eruption, _EGUsphere_, vol. **2024**, pp. 1-46, 2024. URL [https://egusphere.copernicus.org/preprints/2024/egusphere-2023-3041/](https://egusphere.copernicus.org/preprints/2024/egusphere-2023-3041/)
* [4] Brunton, S.L. and Kutz, J.N., _Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control_, Cambridge University Press, 2022.
* [5] Bui-Thanh, T., Ghattas, O., Martin, J., and Stadler, G., A Computational Framework for Infinite-Dimensional Bayesian Inverse Problems. Part I: The Linearized Case, with Applications to Global Seismic Inversion, _SIAM Journal on Scientific Computing_, vol. **35**, no. 6, pp. A2494-A2523, 2013.
Figure 10: Spread of reaction rates in the time series corresponding to the training data and network predictions with varying network depths.
* [6] Cai, S., Wang, Z., Lu, L., Zaki, T.A., and Karniadakis, G.E., DEEPM&MNET: Inferring the Electroconvection Multiphysics Fields based on Operator Approximation by Neural Networks, _Journal of Computational Physics_, vol. **436**, p. 110296, 2021.
* [7] Cui, T., Martin, J., Marzouk, Y.M., Solonen, A., and Spantini, A., Likelihood-Informed Dimension Reduction for Nonlinear Inverse Problems, _Inverse Problems_, vol. **30**, no. 114015, pp. 1-28, 2014.
* [8] Deng, Z., Ciais, P., Tzompa-Sosa, Z.A., Saunois, M., Qiu, C., Tan, C., Sun, T., Ke, P., Cui, Y., Tanaka, K., Lin, X., Thompson, R.L., Tian, H., Yao, Y., Huang, Y., Lauerwald, R., Jain, A.K., Xu, X., Bastos, A., Sitch, S., Palmer, P.I., Lauvaux, T., d'Aspremont, A., Giron, C., Benoit, A., Poulter, B., Chang, J., Petrescu, A.M.R., Davis, S.J., Liu, Z., Grassi, G., Albergel, C., Tubiello, F.N., Perugini, L., Peters, W., and Chevallier, F., Comparing National Greenhouse Gas Budgets Reported in UNFCCC Inventories against Atmospheric Inversions, _Earth System Science Data_, vol. **14**, no. 4, pp. 1639-1675, 2022.
* [9] Ehrmann, T., Wagman, B., Bull, D., Brown, H., Hillman, B., Peterson, K., Swiler, L., Watkins, J., and Hart, J., Identifying Northern Hemisphere Temperature Responses to the Mt. Pinatubo Eruption through Limited Variability Ensembles, _in preparation_, 2024.
* [10] Enting, I.G., _Inverse Problems in Atmospheric Constituent Transport_, Cambridge University Press, 2002.
* [11] Gao, X., Huang, A., Trask, N., and Reza, S., Physics-Informed Graph Neural Network for Circuit Compact Model Development, _2020 International Conference on Simulation of Semiconductor Processes and Devices (SISPAD)_, IEEE, pp. 359-362, 2020.
* [12] Giovanis, D. and Shields, M., Data-Driven Surrogates for High Dimensional Models using Gaussian Process Regression on the Grassmann Manifold, _Computer Methods in Applied Mechanics and Engineering_, vol. **370**, p. 113269, 2020. URL [https://www.sciencedirect.com/science/article/pii/S0045782520304540](https://www.sciencedirect.com/science/article/pii/S0045782520304540)
* [13] Golaz, J.C., Van Roekel, L.P., Zheng, X., Roberts, A.F., Wolfe, J.D., Lin, W., Bradley, A.M., Tang, Q., Maltrud, M.E., Forsyth, R.M.,, The DOE E3SM Model Version 2: Overview of the Physical Model and Initial Model Evaluation, _Journal of Advances in Modeling Earth Systems_, vol. **14**, no. 12, p. e2022MS003156, 2022.
* [14] Hart, J., Gulian, M., Manickam, I., and Swiler, L.P., Solving High-Dimensional Inverse Problems with Auxiliary Uncertainty via Operator Learning with Limited Data, _Journal of Machine Learning for Modeling and Computing_, vol. **4**, no. 2, pp. 105-133, 2023.
* [15] Hesthaven, J. and Ubbiali, S., Non-Intrusive Reduced Order Modeling of Nonlinear Problems using Neural Networks, _Journal of Computational Physics_, vol. **363**, no. 15, pp. 55-78, 2018.
* [16] Isaac, T., Petra, N., Stadler, G., and Ghattas, O., Scalable and Efficient Algorithms for the Propagation of Uncertainty from Data through Inference to Prediction for Large-Scale Problems, with Application to Flow of the Antarctic Ice Sheet, _Journal of Computational Physics_, vol. **296**, pp. 348-368, 2015.
* [17] Kaipio, J. and Kolehmainen, V., Approximate Marginalization over Modeling Errors and Uncertainties in Inverse Problems, _Bayesian Theory and Applications_, pp. 644-672, 2013.
* [18] Kontolati, K., Loukrezis, D., dos Santos, K.R.M., Giovanis, D.G., and Shields, M.D., Manifold Learning-Based Polynomial Chaos Expansions For High-Dimensional Surrogate Models, _International Journal for Uncertainty Quantification_, vol. **12**, no. 4, pp. 39-64, 2022.
* [19] Kontolati, K., Loukrezis, D., Giovanis, D.G., Vandanapu, L., and Shields, M.D., A Survey of Unsupervised Learning Methods for High-Dimensional Uncertainty Quantification in Black-Box-Type Problems, _Journal of Computational Physics_, vol. **464**, p. 111313, 2022. URL [https://www.sciencedirect.com/science/article/pii/S0021999122003758](https://www.sciencedirect.com/science/article/pii/S0021999122003758)
* [20] Kremser, S., Thomason, L.W., von Hobe, M., Hermann, M., Deshler, T., Timmreck, C., Toohey, M., Stenke, A., Schwarz, J.P., Weigel, R.,, Stratospheric Aerosol Observations, Processes, and Impact on Climate, _Reviews of Geophysics_, vol. **54**, no. 2, pp. 278-335, 2016.
* [21] Laird, C.D., Biegler, L.T., van Bloemen Waanders, B., and Bartlett, R.A., Contamination Source Determination for Water Networks, _Journal of Water Resources Planning and Management_, vol. **131**, no. 2, pp. 125-134, 2005.
* [22] Lee, K. and Carlberg, K.T., Model Reduction of Dynamical Systems on Nonlinear Manifolds using Deep Convolutional Autoencoders, _Journal of Computational Physics_, vol. **404**, p. 108973, 2020. URL [https://www.sciencedirect.com/science/article/pii/S0021999119306783](https://www.sciencedirect.com/science/article/pii/S0021999119306783)
* [23] Li, J., Duan, Q., Wang, Y.P., Gong, W., Gan, Y., and Wang, C., Parameter Optimization for Carbon and Water Fluxes in Two Global Land Surface Models based on Surrogate Modelling, _International Journal of Climatology_, vol. **38**, pp. e1016-e1031, 2018.
* [24] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., and Anandkumar, A., Fourier Neural Operator for Parametric Partial Differential Equations, _arXiv:2010.08895_, 2020.
* [25] Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., and Anandkumar, A., Neural Operator: Graph Kernel Network for Partial Differential Equations, _NeurIPS_, pp. 1-21, 2020.
* [26] Long, Z., Lu, Y., Ma, X., and Dong, B., PDE-Net: Learning PDEs from Data, _Proceedings of the 35th International Conference on Machine Learning_, J. Dy and A. Krause, Eds., Vol. 80 of _Proceedings of Machine Learning Research_, PMLR, pp. 3208-3216, 2018.
* [27] Lu, L., Jin, P., and Karniadakis, G.E., Learning Nonlinear Operators via DeepONet based on the Universal Approximation Theorem of Operators, _Nature Machine Intelligence_, vol. **3**, pp. 218-229, 2021.
* [28] Marshall, L., Johnson, J.S., Mann, G.W., Lee, L., Dhomse, S.S., Regayre, L., Yoshioka, M., Carslaw, K.S., and Schmidt, A., Exploring How Eruption Source Parameters Affect Volcanic Radiative Forcing using Statistical Emulation, _Journal of Geophysical Research: Atmospheres_, vol. **124**, no. 2, pp. 964-985, 2019.
* [29] Maupin, K.A., Swiler, L.P., and Porter, N.W., Validation Metrics for Deterministic and Probabilistic Data, _Journal of Verification, Validation and Uncertainty Quantification_, vol. **3**, no. 3, 2018.
* [30] McQuarrie, S.A., Huang, C., and Willcox, K.E., Data-Driven Reduced-Order Models via Regularised Operator Inference for a Single-Injector Combustion Process, _Journal of the Royal Society of New Zealand_, vol. **51**, no. 2, pp. 194-211, 2021.
* [31] Patel, R.G. and Desjardins, O., Nonlinear Integro-Differential Operator Regression with Neural Networks, _arXiv preprint arXiv:1810.08552_, 2018.
* [32] Patel, R.G., Trask, N.A., Wood, M.A., and Cyr, E.C., A Physics-Informed Operator Regression Framework for Extracting Data-Driven Continuum Models, _Computer Methods in Applied Mechanics and Engineering_, vol. **373**, p. 113500, 2021.
* [33] Peherstorfer, B. and Willcox, K., Data-Driven Operator Inference for Nonintrusive Projection-Based Model Reduction, _Computer Methods in Applied Mechanics and Engineering_, vol. **306**, pp. 196-215, 2016.
* [34] Petra, N., Martin, J., Stadler, G., and Ghattas, O., A Computational Framework for Infinite-Dimensional Bayesian Inverse Problems. Part II: Stochastic Newton MCMC with Application to Ice Sheet Flow Inverse Problems, _SIAM Journal on Scientific Computing_, vol. **36**, no. 4, pp. A1525-A1555, 2014.
* [35] Qin, T., Chen, Z., Jakeman, J.D., and Xiu, D., Data-Driven Learning of Nonautonomous Systems, _SIAM Journal on Scientific Computing_, vol. **43**, no. 3, pp. A1607-A1624, 2021.
* [36] Qin, T., Wu, K., and Xiu, D., Data Driven Governing Equations Approximation using Deep Neural Networks, _Journal of Computational Physics_, vol. **395**, pp. 620-635, 2019.
* [37] Ramachandran, S., Ramaswamy, V., Stenchikov, G.L., and Robock, A., Radiative Impact of the Mount Pinatubo Volcanic Eruption: Lower Stratospheric Response, _Journal of Geophysical Research: Atmospheres_, vol. **105**, no. D19, pp. 24409-24429, 2000.
* [38] Ratnaswamy, V., Stadler, G., and Gurnis, M., Adjoint-Based Estimation of Plate Coupling in a Non-Linear Mantle Flow Model: Theory and Examples, _Geophysical Journal International_, vol. **202**, no. 2, pp. 768-786, 2015.
* [39] Ray, J., Hou, Z., Huang, M., Sargsyan, K., and Swiler, L., Bayesian Calibration of the Community Land Model using Surrogates, _SIAM/ASA Journal on Uncertainty Quantification_, vol. **3**, no. 1, pp. 199-233, 2015.
* [40] Ray, J., Yadav, V., Michalak, A., van Bloemen Waanders, B., and McKenna, S., A Multiresolution Spatial Parameterization for the Estimation of Fossil-Fuel Carbon Dioxide Emissions via Atmospheric Inversions, _Geoscientific Model Development_, vol. **7**, pp. 1901-1918, 2014.
* [41] Ricciuto, D., Sargsyan, K., and Thornton, P., The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model, _Journal of Advances in Modeling Earth Systems_, vol. **10**, no. 2, pp. 297-319, 2018.
* [42] Robock, A., Volcanic Eruptions and Climate, _Reviews of geophysics_, vol. **38**, no. 2, pp. 191-219, 2000.
* [43] Shiogama, H., Tatebe, H., Hayashi, M., Abe, M., Arai, M., Koyama, H., Imada, Y., Kosaka, Y., Ogura, T., and Watanabe, M., MIROC6 Large Ensemble (MIROC6-LE): Experimental Design and Initial Analyses, _Earth System Dynamics_, vol. **14**, no. 6, pp. 1107-1124, 2023. URL [https://esd.copernicus.org/articles/14/1107/2023/](https://esd.copernicus.org/articles/14/1107/2023/)
* [44] Shukla, K., Xu, M., Trask, N., and Karniadakis, G.E., Scalable Algorithms for Physics-Informed Neural and Graph Networks, _Data-Centric Engineering_, vol. **3**, p. e24, 2022.
* [45] Stenchikov, G.L., Kirchner, I., Robock, A., Graf, H.F., Antuna, J.C., Grainger, R.G., Lambert, A., and Thomason, L., Radiative Forcing from the 1991 Mount Pinatubo Volcanic Eruption, _Journal of Geophysical Research: Atmospheres_, vol. **103**, no. D12, pp. 13837-13857, 1998.
* [46] The ROL Project Team, 2024. The Rapid Optimization Library (ROL) Project Website. Accessed September 10, 2024. URL [https://www.sandia.gov/ccr/software/rapid-optimization-library-rol/](https://www.sandia.gov/ccr/software/rapid-optimization-library-rol/)
* [47] Timmreck, C., Modeling the Climatic Effects of Large Explosive Volcanic Eruptions, _Wiley Interdisciplinary Reviews: Climate Change_, vol. **3**, no. 6, pp. 545-564, 2012.
* [48] Trask, N., Huang, A., and Hu, X., Enforcing Exact Physics in Scientific Machine Learning: a Data-Driven Exterior Calculus on Graphs, _Journal of Computational Physics_, vol. **456**, p. 110969, 2022.
* [49] Trask, N., Patel, R.G., Gross, B.J., and Atzberger, P.J., GMLS-Nets: A Framework for Learning from Unstructured Data, _AAAI 2020 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences_, J. Lee, E.F. Darve, P.K. Kitanidis, M.W. Farthing, and T. Hesser, Eds., pp. 1-12, 2020.
* [50] Virieux, J., Asnaashari, A., Brossier, R., Metivier, L., Ribodetti, A., and Zhou, W., 2017. An introduction to full waveform inversion. _Encyclopedia of Exploration Geophysics_. Society of Exploration Geophysicists, pp. R1-1.
* [51] Xiao, J., Periodized Radial Basis Functions (RBFs) and RBF-Vortex Method for the Barotropic Vorticity Equation, PhD thesis, The University of Michigan, 2014.
* [52] Yang, Y., Mou, S., Wang, H., Wang, P., Li, B., and Liao, H., Global Source Apportionment of Aerosols into Major Emission Regions and Sectors over 1850-2017, _Atmos. Chem. Phys._, vol. **24**, pp. 6509-6523, 2024.
* [53] Yarger, D., Wagman, B.M., Chowdhary, K., and Shand, L., Autocalibration of the E3SM Version 2 Atmosphere Model using a PCA-Based Surrogate for Spatial Fields, _Journal of Advances in Modeling Earth Systems_, vol. **16**, no. 4, p. e2023MS003961, 2024.
* [54] You, H., Yu, Y., Trask, N., Gulian, M., and D'Elia, M., Data-Driven Learning of Nonlocal Physics from High-Fidelity Synthetic Data, _Computer Methods in Applied Mechanics and Engineering_, vol. **374**, p. 113553, 2021.
* [55] Zanchettin, D., Timmreck, C., Khodri, M., Schmidt, A., Toohey, M., Abe, M., Bekki, S., Cole, J., Fang, S.W., Feng, W., Hegerl, G., Johnson, B., Lebas, N., LeGrande, A.N., Mann, G.W., Marshall, L., Rieger, L., Robock, A., Rubinetti, S., Tsigaridis, K., and Weierbach, H., Effects of Forcing Differences and Initial Conditions on Inter-Model Agreement in the VolMIPvolc-Pinatubo-Full Experiment, _Geoscientific Model Development_, vol. **15**, no. 5, pp. 2265-2292, 2022. | Stratospheric aerosols play an important role in the earth system and can affect the climate on timescales of months to years. However, estimating the characteristics of partially observed aerosol injections, such as those from volcanic eruptions, is fraught with uncertainties. This article presents a framework for stratospheric aerosol source inversion which accounts for background aerosol noise and earth system internal variability via a Bayesian approximation error approach. We leverage specially designed earth system model simulations using the Energy Exascale Earth System Model (E3SM). A comprehensive framework for data generation, data processing, dimension reduction, operator learning, and Bayesian inversion is presented where each component of the framework is designed to address particular challenges in stratospheric modeling on the global scale. We present numerical results using synthesized observational data to rigorously assess the ability of our approach to estimate aerosol sources and associate uncertainty with those estimates.
Bayesian inverse problem, source identification, operator learning, Bayesian approximation error, surrogate modeling | Condense the content of the following passage. | 195 |
copernicus/c449742c_ddbc_4b7e_b708_a5b4a16cf17d.md | "Atmos. Chem. Phys., 24, 9031-9044, 2024\n\n[https://doi.org/10.5194/acp-24-9031-2024](https://doi.org/10.5194/acp-24-9031-2024)\n\n(c) Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.\n\n(d)\n\nThe effect of different climate and air quality policies in China on in situ ozone production in Beijing\n\n### Sensitivity of in situ O\\({}_{3}\\) production to concentration changes of specific AVOCs and ASA\n\nTo further investigate which chemical species are driving the largest changes in O\\({}_{3}\\) production rate, SAPRC07 AVOC groups were incrementally increased by 5 %, with all other groups remaining at 2017 observed levels. The resulting change in O\\({}_{3}\\) production rate for each incremental change is presented in Table 3 and compared to the group-determined maximum incremental reactivities (MIRs) in high NO\\({}_{x}\\) conditions and the subsequently calculated O\\({}_{3}\\) formation potentials (OFPs) (see Sect. 2.3).\n\nLocal in situ O\\({}_{3}\\) production rates in 2017 were most sensitive to changes in the OLE2 group (\\(\\Delta\\)_P_(O\\({}_{3}\\)) = +1.12 %), which includes highly reactive C\\({}_{4}\\)-C\\({}_{5}\\) alkenes such as but-2-enes and \\(trans\\)-pent-2-ene (Table 3). During APHH-Beijing 2017, alkene concentrations were reported to be much higher than a comparable field campaign in London, with mean alkene concentrations more than double those observed during the Clearflo summer campaign in 2012 (Whalley et al., 2021). Higher concentrations observed during the 2017\n\nFigure 4: Projected percentage change in O\\({}_{3}\\) production rate since 2017 observations, when VOC and NO\\({}_{x}\\) observations are scaled using the DPEC emissions inventory.\n\nFigure 3: Projected absolute change in mixing ratio of key AVOC sub-groups (those observed during APHH-Beijing 2017) and NO\\({}_{x}\\) for the six DPEC air pollution and climate policy scenarios (Table 1) every 5 years between 2025 and 2060, in comparison to the APHH-Beijing 2017 campaign. AVOCs include all VOCs observed during APHH-Beijing 2017, excluding isoprene, \\(\\alpha\\)-pinene and limonene (Table 2). Note that the \\(y\\) axis scale is different in each sub-plot.\n\ncampaign, combined with their fast reactivity (\\(k_{\\rm OH}>7\\times 10^{4}\\) ppm\\({}^{-1}\\) min\\({}^{-1}\\)) of these alkenes, result in a high sensitivity of O\\({}_{3}\\) production toward this group. After the OLE2 group, O\\({}_{3}\\) production was most sensitive to changes in the ALK3 and ALK4 groups, which include the C\\({}_{4}\\)-C\\({}_{7}\\) alkanes and ethanol, followed by the OLE1 group which includes the less reactive alkenes (\\(k_{\\rm OH}<7\\times 10^{4}\\) ppm\\({}^{-1}\\) min\\({}^{-1}\\), excluding ethene) such as propene and but-1-ene.\n\nThe observed changes in \\(P\\)(O\\({}_{3}\\)) in increasing selected species by 5 % are generally in agreement with the maximum incremental reactivities (MIRs) of each species, determined by Carter (2010). However, the total O\\({}_{3}\\) formation potential (\\(\\Sigma\\)OFP) of the ALK3 group is almost double that of the OLE2 group, despite \\(\\Delta P\\)(O\\({}_{3}\\)) being 3 times more sensitive to a 5 % increase in OLE2 than for ALK3. The large OFP attributed to ALK3 is explained by the very high concentrations of ethanol observed during the APHH-Beijing 2017 campaign. 
However, modelled \\(P\\)(O\\({}_{3}\\)) is less sensitive to small changes in ethanol in the APHH-Beijing 2017 model, where the full chemistry, bespoke to the observational data, is accounted for. Although the MIRs derived by Carter et al. (2010) and subsequently calculated \\(\\Sigma\\)OFP are a good guide for determining the key contributors to O\\({}_{3}\\) formation, the detailed chemical model provides a more bespoke tool for assessing the key drivers of in situ O\\({}_{3}\\) formation at this particular location.\n\nThe sensitivities of modelled O\\({}_{3}\\) production to these VOCs can be combined with the trends in projected concentrations presented in Fig. 4 to explain why O\\({}_{3}\\) production is not reduced the most under the Ambitious Pollution 1.5D Goals between 2030-2045. Ethane (ALK1) emissions are projected to increase compared to 2017 levels under the Ambitious Pollution 1.5D Goals until 2040 and are a large proportion of AVOCs by mixing ratio (Fig. 3). Although increasing ethane mixing ratios by 5 % only leads to an increase in O\\({}_{3}\\) production of 0.02 %, the observed increase in ethane of ca. 25 % up to 2040 could have a more pronounced impact on increasing O\\({}_{3}\\) production than smaller incremental changes in more reactive species such as OLE2 that are present in much smaller concentrations by 2025 (\\(<0.1\\) ppbv compared to ca. 4 ppbv of ethane). Similarly, although this study shows that modelled O\\({}_{3}\\) production rate is not very sensitive to changes in the ALK2 group, substantial concentrations of this VOC group are observed. As a result, the overall trend in AVOCs (Table 2, excluding isoprene, \\(\\alpha\\)-pinene and limonene) shows a weaker decline for the Ambitious Pollution 1.5D scenario, which is reflected in the modelled O\\({}_{3}\\) production trend (Fig. 4).\n\n\\begin{table}\n\\begin{tabular}{l l r r r} \\hline \\hline & & \\(\\Delta\\)\\(P\\)(O\\({}_{3}\\)) & MIR & \\(\\Sigma\\)(OFP) \\\\ Group & Volatile organic compound & (\\%) & (mean) & (\\(\\mu\\)g m\\({}^{-3}\\)) \\\\ \\hline OLE2 & \\(cis\\)-but-2-ene, \\(trans\\)-but-2-ene, methylpropene, & \\(+1.12\\) & 12.46 & 12.44 \\\\ & \\(trans\\)-pent-2-ene, 1,3-butadiene & & & \\\\ \\hline ALK3 & \\(n\\)-butane, \\(i\\)-butane, ethanol & \\(+0.35\\) & 1.30 & 21.96 \\\\ \\hline ALK4 & \\(n\\)-pentane, \\(i\\)-pentane, 2- and 3-methylpentane, & \\(+0.32\\) & 1.40 & 11.08 \\\\ & \\(n\\)-hexane, \\(n\\)-heptane & & & \\\\ \\hline OLE1 & propene, but-1-ene, pent-1-ene & \\(+0.29\\) & 9.53 & 10.87 \\\\ \\hline ETHE & ethene & \\(+0.24\\) & 9.00 & 15.62 \\\\ \\hline ARO2 & 1,2,3-trimethylbenzene, 1,2,4-trimethylbenzene, & \\(+0.23\\) & 9.31 & 20.26 \\\\ & 1,3,5-trimethylbenzene, (\\(m\\),\\(p\\))-xylene, \\(o\\)-xylene & & & \\\\ \\hline MEOH & methanol & \\(+0.17\\) & 0.67 & 14.39 \\\\ \\hline ARO1 & toluene, \\(i\\)-propylbenzene, \\(n\\)-propylbenzene & \\(+0.15\\) & 2.85 & 14.59 \\\\ \\hline ALK2 & propane & \\(+0.08\\) & 0.49 & 3.15 \\\\ \\hline ALK1 & ethane & \\(+0.02\\) & 0.28 & 1.20 \\\\ \\hline BENZENE & benzene & \\(+0.01\\) & 0.72 & 1.09 \\\\ \\hline ALK5 & \\(n\\)-octane & \\(+0.01\\) & 0.90 & 0.18 \\\\ \\hline ACYE & acetylene & \\(<+0.01\\) & 0.95 & 1.31 \\\\ \\hline ASA & aerosol surface area & \\(-0.12\\) & & \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 3: Change in O\\({}_{3}\\) production rate, \\(P\\)(O\\({}_{3}\\)), when incrementally increasing the observed concentrations of each SAPRC07 AVOC grouping by \\(+5\\) %. 
Changes in \\(P\\)(O\\({}_{3}\\)) are listed in descending order from the largest increase in \\(P\\)(O\\({}_{3}\\)). These are compared to MIRs determined by Carter (2010) and the calculated sum of OFPs for each group (Liu et al., 2023).\n\nIt is worth noting that projections in HONO mixing ratios are not included in the DPEC inventory at the time of this study. As a result, mixing ratios of HONO are kept constant under all model scenarios. How HONO might change under different future scenarios is highly uncertain. Although it is generally expected that HONO mixing ratios correlate with changes in NO\\({}_{x}\\), its formation is dependent on multiple factors including photolysis rates, heterogeneous reactions and meteorological conditions (Sander and Peterson, 1984; Lee et al., 2016). However, a recent study found that reductions in NO\\({}_{x}\\) during the COVID-19 lockdowns in the Chinese megacity of Zhengzhou did not lead to comparable reductions in HONO during the day (Wang et al., 2024). When HONO is increased by 5 % in the model, we observe a 1.9 % increase in O\\({}_{3}\\) production. This shows that O\\({}_{3}\\) production is highly sensitive to changes in HONO and emphasises the importance of improving our understanding of how HONO might be expected to change under different socioeconomic, climate and carbon neutrality goals.\n\n### The effect of changes in ASA on in situ O\\({}_{3}\\) production\n\nThe observed aerosol surface area during the APHH-Beijing 2017 campaign was also varied using DPEC projections for changes in PM\\({}_{2.5}\\) and PM\\({}_{10}\\). A more detailed description of how aerosol surface area (ASA) is incorporated into the chemical mechanism can be found in Sect. 2.2. The future scenarios were modelled with and without DPEC-derived variations in ASA (scenarios 2 and 3, Sect. 2.4). DPEC projected ASA decreases with time under all scenarios, except for the Baseline scenario. Under Ambitious Pollution 1.5D, 2D and Neutral Goals, ASA is projected to increase up to 2025 before decreasing by ca. 20 %-40 % of 2017 levels by 2030 and then further decreasing to 50 %-70 % of 2017 values by 2060. ASA is projected to increase under \"Ambitious Pollution National Determined Contribution (NDC) Goals\", Current Goals and Baseline scenarios by 2030 (64 %, 71 % and 250 % respectively) and by 2060 (21 %, 55 % and 221 % respectively) relative to 2017 levels. Despite large percentage changes in ASA, very small changes in O\\({}_{3}\\) production rate are estimated when DPEC ASA estimates are applied in the model (Fig. 5).\n\nOn comparing O\\({}_{3}\\) production rate with or without the inclusion of the DPEC-derived ASAs, the largest percentage difference in O\\({}_{3}\\) production was under the Baseline scenario, where O\\({}_{3}\\) production was 5.3 % lower in 2055 and 2060 when large increases in ASA were applied from 2017 on (221 %). This can also be attributed to the large concentrations of AVOCs projected from the MEIC emissions estimates, leading to a larger supply of HO\\({}_{2}\\) radicals to be taken up by the ASA enhancement. 
Under the only other two scenarios where ASA was estimated to increase less steeply (Current Goals and NDC Goals by 55 % and 22 % by 2060 respectively), including DPEC ASA values had very little impact (1 %-2 % difference in O\\({}_{3}\\) production), and even smaller changes were found for scenarios where DPEC ASA was projected to decrease (Ambitious Pollution 1.5D, 2D and Neutral Goals by 69 %, 59 % and 64 % by 2060 respectively), with percentage differences \\(<\\) 1 %. This suggests changes in ASA will not appreciably impact O\\({}_{3}\\) production under the future DPEC scenarios. A study by Whalley et al. (2021) evaluating O\\({}_{3}\\) formation sensitivity to the APHH-Beijing 2017 observations found that reductions in ASA only enhanced HO\\({}_{2}\\) concentrations under very low NO\\({}_{x}\\) (\\(<\\) 0.3 ppb) at observed VOC levels. This is also consistent with previous studies, which suggest there is no evidence that aerosol chemistry has significant impact on ozone production in the North China Plain and that aerosol light extinction may cancel out the impacts of aerosols on ozone production in South China (Tan et al., 2020, 2022). This suggests that under the DPEC scenarios presented here, co-reductions in VOCs alongside NO\\({}_{x}\\) sufficiently reduce HO\\({}_{2}\\) to reduce the impact of ASA on resultant O\\({}_{3}\\) production rates.\n\n### The effect of biogenic compounds on in situ O\\({}_{3}\\) production rate\n\nWhilst DPEC projections can be used to project changes in anthropogenic emissions of VOCs and NO\\({}_{x}\\), it is less clear how local biogenic emissions will change up to 2060. As compounds such as isoprene and the monoterpens are primarily from biogenic sources, changes in their mixing ratios have not yet been accounted for in the anthropogenic VOC subset (AVOCs) used in this modelling study. However, with increasing global temperatures and urban greening, it is estimated that biogenic VOC (BVOC) emissions in Beijing will increase over time, with recent studies estimating a 25 % increase in biogenic emissions in China in the 2050s (Liu et al., 2019; Xie et al., 2017).\n\nTo investigate the sensitivity of projected O\\({}_{3}\\) production rate to increasing BVOC emissions under the six different scenarios, the models were re-run for 2060, with isoprene,\n\nFigure 5: Modelled changes in O\\({}_{3}\\) production rate since 2017 levels under six different DPEC future scenarios between 2025 and 2060. Both the modelled O\\({}_{3}\\) production fixed at observed 2017 ASA levels (dashed line) and modelled O\\({}_{3}\\) production rate with ASA varying according to DPEC-derived estimates (solid line) are shown.\n\n\\(\\alpha\\)-pinene and limonene multiplied by a scaling factor of between 1 and 2 at 0.1 increments (scenario 5, Sect. 2.4). Figure 6 shows how the change in O\\({}_{3}\\) production rate since 2017 varies for the six scenarios with increasing isoprene, \\(\\alpha\\)-pinene and limonene in 2060.\n\nO\\({}_{3}\\) production rates calculated using the less ambitious Baseline and Current Goals scenarios were found to be most sensitive to increasing biogenic concentrations in 2060. For the Baseline scenario, doubling biogenic concentrations led to a further 18 % increase in O\\({}_{3}\\) production rates since 2017 compared to APHH-Beijing 2017 concentrations (scaling factor = 1). 
In the Current Goals scenario, a switch in the O\\({}_{3}\\) production rate from being a reduction to an increase since 2017 was found when biogenic concentrations were increased by ca. 40 %. In all ambitious scenarios, O\\({}_{3}\\) production rates are found to decrease since 2017, even when biogenic concentrations are doubled. However, the Ambitious Pollution 2D Goals and Ambitious Pollution NDC Goals are more sensitive to increasing biogenic concentrations (7 % and 8 % increase in percentage change in O\\({}_{3}\\) production rates on doubling respectively) compared to the Ambitious Pollution 1.5D Goals and Ambitious Pollution NDC Goals (4 % and 5 % respectively). In all cases presented here, \\(<\\) 2 % of the sensitivity is attributed to limonene and \\(\\alpha\\)-pinene, and almost all the sensitivity observed here can be attributed to changes in isoprene alone. However, the impact of BVOC emissions on O\\({}_{3}\\) production rates is highly uncertain, and it is likely that there are many more fast-reacting terpen present in Beijing, whose reactivity is not accounted for in this study. In addition, from a pollution abatement perspective, increasing BVOCs will be much harder to controls than AVOCs. This sensitivity study highlights the importance of understanding the biogenic speciation and how biogenic compounds are expected to vary in the Beijing region, as these compounds are likely to have important implications for in situ O\\({}_{3}\\) production in this urban environment.\n\n## 4 Conclusions\n\nFuture in situ O\\({}_{3}\\) production rates have been investigated for Beijing using detailed measurements of precursor species taken during the APHH-Beijing 2017 summer campaign alongside future air quality and climate policy emission projections for the Beijing region. A chemical isopleth indicated a currently VOC-limited regime (in 2017), which would switch to a NO\\({}_{x}\\)-limited regime if NO\\({}_{x}\\) alone were reduced by \\(\\sim\\) 75 %. Based on this, and on estimated reductions in total VOCs and NO\\({}_{x}\\) under the DPEC scenarios, O\\({}_{3}\\) production rates were projected to increase under all four Ambitious Pollution scenarios. However, when speciated AVOCs were varied in the model rather than total DPEC VOCs, reductions in O\\({}_{3}\\) production rate were observed. This highlighted the need to consider the detailed individual VOC speciation when estimating in situ O\\({}_{3}\\) production rate effects. O\\({}_{3}\\) production rate was found to be most sensitive to the OLE2 VOC group, which includes reactive C\\({}_{4}\\)-C\\({}_{5}\\) alkenes such as but-2-enes and pent-2-ene. This sub-group is forecast to be reduced considerably by 2025 (ca. 95 %) under the Ambitious Pollution scenarios and is likely to strongly influence the reductions in observed in situ O\\({}_{3}\\) production. Between 2030-2045, the most ambitious scenario, Ambitious Pollution 1.5D Goals, did not lead to the largest reductions in O\\({}_{3}\\) production rate. This can be attributed to reductions in less reactive species that are present in large amounts in Beijing, such as the smaller-chain alkanes (ALK1 and ALK2). Aerosol surface area (ASA) was found to have a minimal effect on O\\({}_{3}\\) production rates, with a 69 % decrease in ASA leading to a change in O\\({}_{3}\\) production rate of \\(<\\) 1 %. O\\({}_{3}\\) production was considerably impacted by possible climate-induced changes in BVOC emissions, almost entirely driven by changes in isoprene. 
Doubling the mixing ratios of isoprene, \\(\\alpha\\)-pinene and limonene led to the largest increases in O\\({}_{3}\\) production under the Baseline scenario, increasing O\\({}_{3}\\) production by 18 % in 2060 compared to O\\({}_{3}\\) production projections using changes to anthropogenic VOCs alone. However, it is important to note that the future scenarios presented here are highly uncertain due to their socioeconomic and political nature and can only be used as a guide. The focus of this study is the impacts of the changing VOC emissions scenarios on photochemical O\\({}_{3}\\) formation. However, there are several other important factors that will evolve in a changing climate that will likely affect the formation and concentrations of O\\({}_{3}\\), such as meteorology and extreme temperature and biomass burning events impacting urban areas such as Beijing. Heterogeneous sources of HONO, an important source of OH radical in urban environments (Lee et al., 2016), are also likely to change, impacting urban oxidising capacity and hence O\\({}_{3}\\) formation. However, how these factors are likely to change is highly uncertain and should be looked at further in future studies. Although estimates for in situ O\\({}_{3}\\) production have been presented in this study, percentage changes in O\\({}_{3}\\) production cannot be applied to O\\({}_{3}\\) concentrations. This is due\n\nFigure 6: Percentage change in O\\({}_{3}\\) in 2060 since 2017 for the six different DPEC future scenarios (Table 1). Coloured bars show the range of percentage change when BVOC concentrations (isoprene, \\(\\alpha\\)-pinene and limonene) are multiplied by a scaling factor of between 1–2. Values to the left and right of the dashed line indicate decreasing and increasing O\\({}_{3}\\) production rates respectively.\n\nto the nature of the chemical modelling used, as only instantaneous O3 production can be reproduced, and it does not account for background O3 or O3 transported into and out of the measurement site in Beijing. To fully understand how O3 concentrations may vary in future scenarios, further analysis using regional transport models may be required. However, this study provides important insights into how the in situ chemical processing leading to additional O3 production and destruction in Beijing may vary in the future and highlights the key need to further understand how resultant concentrations from BVOC emissions are expected to change in future years.\n\nData availability.Data are available at [http://catalogue.ceda.ac.uk/uuid/?ed9d8a288814b8b85433b0d3fec0300](http://catalogue.ceda.ac.uk/uuid/?ed9d8a288814b8b85433b0d3fec0300) (Harrison et al., 2018).\n\nSupplement.The supplement related to this article is available online at: [https://doi.org/10.5194/acp-24-9031-2024-supplement](https://doi.org/10.5194/acp-24-9031-2024-supplement).\n\nAuthor contributions.BSN prepared the manuscript with contributions from all authors. FAS, MS and JRH provided measurements and data processing of pollutants used in this study. JFH, ZL, ARR, ACL, JDL and ZS contributed to scientific discussion.\n\nCompeting interests.The contact author has declared that none of the authors has any competing interests.\n\nDisclaimer.Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. 
While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.\n\nAcknowledgements.This project was undertaken on the Viking Cluster, a high-performance computing facility provided by the University of York, and supported by the University of York Research Computing team. The authors would like to thank and acknowledge Alfred Mayhew for additional modelling support.\n\nFinancial support.This research has been supported by the Natural Environment Research Council (grant no. 2021GRIP02COP-AQ).\n\nReview statement.This paper was edited by Jayanarayanan Kuttippurath and reviewed by two anonymous referees.\n\n## References\n\n* Ainsworth (2017) Ainsworth, E. A.: Understanding and improving global crop response to ozone pollution, The Plant J., 90, pp. 886-897, [https://doi.org/10.1111/tpj.13298](https://doi.org/10.1111/tpj.13298), 2017.\n* Carter (2010) Carter, W. P. L.: Development of the SAPRC-07 chemical mechanism, Atmos. Environ., 44, pp. 5324-5335, [https://doi.org/10.1016/j.atmosenv.2010.01.026](https://doi.org/10.1016/j.atmosenv.2010.01.026), 2010.\n* Cheng et al. (2021) Cheng, J., Tong, D., Zhang, Q., Liu, Y., Lei, Y., Yan, G., Yan, L., Yu, S., Cui, R. Y., Clarke, L., Geng, G., Zheng, B., Zhang, X., Davis, S. J., and He, K.: Pathways of China's PM2.5 air quality 2015-2060 in the context of carbon neutrality, Natl. Sci. Rev., 8, pp. 10. Note: [https://doi.org/10.1093/nsr/nsr/nswab078](https://doi.org/10.1093/nsr/nsr/nswab078), 2021.\n* Cryer (2024) Cryer, D. R.: Measurements of hydroxyl radical reactivity and formaldehyde in the atmosphere, PhD Thesis, University of Leeds, [https://etheses.whierose.ac.uk/16834](https://etheses.whierose.ac.uk/16834) (last access: July 2024), 2016.\n* Fu & Tian (2019) Fu, T. M. and Tian, H.: Climate Change Penalty to Ozone Air Quality: Review of Current Understandings and Knowledge Gaps, Curr. Pollution Rep., 5, pp. 159-171, [https://doi.org/10.1007/s40726-019-00115-6](https://doi.org/10.1007/s40726-019-00115-6), 2019.\n* Harrison et al. (2018) Harrison, R., Sokhi, R., Kelly, F. J., Nemitz, E., Bloss, W., Loh, M., and Lewis, A. C.: Atmospheric Pollution & Human Health in a Developing Megacity (APHH), CEDA Archive [data set], [https://catalogue.ceda.ac.uk/uuid/?ed9d8a288814b8b85433b0d3fe0c300/](https://catalogue.ceda.ac.uk/uuid/?ed9d8a288814b8b85433b0d3fe0c300/) (last access: November 2022), 2018.\n* Hopkins et al. (2011) Hopkins, J. R., Jones, C. E., and Lewis, A. C.: A dual channel gas chromatograph for atmospheric analysis of volatile organic compounds including oxygenated and monoteprene compounds, J. Environ. Monitor., 13, 2268, [https://doi.org/10.1039/c1em10050e](https://doi.org/10.1039/c1em10050e), 2011.\n* Huang et al. (2018) Huang, J., Pan, X., Guo, X., and Li, G.: Health impact of China's Air Pollution Prevention and Control Action Plan: an analysis of national air quality monitoring and mortality data, Lancet Planet. Health., 2, e313-e323, [https://doi.org/10.1016/S2542-5196](https://doi.org/10.1016/S2542-5196)(18)30141-4, 2018.\n* Huang et al. (2016) Huang, Z., Zhang, Y., Yan, Q., Zhang, Z., and Wang, X.: Real-time monitoring of respiratory absorption factors of volatile organic compounds in ambient air by proton transfer reaction time-of-flight mass spectrometry, J. Hazard. Mater., 320, pp. 
547-555, [https://doi.org/10.1016/j.jhazmat.2016.08.064](https://doi.org/10.1016/j.jhazmat.2016.08.064), 2016.\n* IPCC Core Writing Team (2014) IPCC Core Writing Team: Climate Change 2014: Synthesis Report, Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Pachuari, R. K. and Meyer, L. A., Geneva, Switzerland, 151, [https://www.ipcc.ch/report/ar/5/syr/](https://www.ipcc.ch/report/ar/5/syr/) (last access: November 2023), 2014.\n* Ivatt et al. (2022) Ivatt, P. D., Evans, M. J., and Lewis, A. C.: Suppression of surface ozone by an aerosol-inhibited photochemical ozone regime, Nat. Geosci., 15, 536-540, [https://doi.org/10.1038/s41561-022-00972-9](https://doi.org/10.1038/s41561-022-00972-9), 2022.\n* Jacob (2000) Jacob, D. J.: Heterogeneous chemistry and tropospheric ozone, Atmos. Environ., 34, 2131-2159, [https://doi.org/10.1016/S1352-2310](https://doi.org/10.1016/S1352-2310)(09)900462-8, 2000.\n* Jenkin et al. (2015) Jenkin, M. E., Young, J. C., and Rickard, A. R.: The MCM v3.3.1 degradation scheme for isoprene, Atmos. Chem. Phys., 15, 11433-11459, [https://doi.org/10.5194/acp-15-11433-2015](https://doi.org/10.5194/acp-15-11433-2015), 2015.\n\n* [11] Lee, J. D., Whalley, L. K., Heard, D. E., Stone, D., Dunmore, R. E., Hamilton, J. F., Young, D. E., Allan, J. D., Laufs, S., and Klueffmann, J.: Detailed budget analysis of HONO in central London reveals a missing daytime source, Atmos. Chem. Phys., 16, 2747-2764, [https://doi.org/10.5194/acp-16-2747-2016](https://doi.org/10.5194/acp-16-2747-2016), 2016.\n* [12] Lee, J. D., Drysdale, W. S., Finch, D. P., Wilde, S. E., and Palmer, P. I.: UK surface NO\\({}_{2}\\) levels dropped by 42 % during the COVID-19 lockdown: impact on surface O\\({}_{3}\\), Atmos. Chem. Phys., 20, 15743-15759, [https://doi.org/10.5194/acp-20-15743-2020](https://doi.org/10.5194/acp-20-15743-2020), 2020.\n* [13] Li, M., Zhang, Q., Streets, D. G., He, K. B., Cheng, Y. F., Emmons, L. K., Huo, H., Kang, S. C., Lu, Z., Shao, M., Su, H., Yu, X., and Zhang, Y.: Mapping Asian anthropogenic emissions of non-methane volatile organic compounds to multiple chemical mechanisms, Atmos. Chem. Phys., 14, 5617-5638, [https://doi.org/10.5194/acp-14-5617-2014](https://doi.org/10.5194/acp-14-5617-2014), 2014.\n* [14] Li, M., Liu, H., Geng, G., Hong, C., Liu, F., Song, Y., Tong, D., Zheng, B., Cui, H., Man, H., Zhang, Q., and He, K.: Anthropogenic emission inventories in China: A review, Natl. Sci. Rev., 4, 834-866, [https://doi.org/10.1093/nsrnv150](https://doi.org/10.1093/nsrnv150), 2017.\n* [15] Li, M., Zhang, Q., Zheng, B., Tong, D., Lei, Y., Liu, F., Hong, C., Kang, S., Yan, L., Zhang, Y., Bo, Y., Su, H., Cheng, Y., and He, K.: Persistent growth of anthropogenic non-methane volatile organic compound (NMVOC) emissions in China during 1990-2017: drivers, speciation and ozone formation potential, Atmos. Chem. Phys., 19, 8897-8913, [https://doi.org/10.5194/acp-19897-2019](https://doi.org/10.5194/acp-19897-2019), 2019.\n* [16] Li, Q., Su, G., Li, C., Liu, P., Zhao, X., Zhang, C., Sun, X., Mu, Y., Wu, M., Wang, Q., and Sun, B.: An investigation into the role of VOCs in SOA and ozone production in Beijing, China, Sci. Total. Environ., 720, 137536, [https://doi.org/10.1016/j.scitotenv.2020.137536](https://doi.org/10.1016/j.scitotenv.2020.137536), 2020.\n* [17] Liu, S., Xing, J., Zhang, H., Ding, D., Zhang, F., Zhao, B., Sahu, S. 
K., and Wang, S.: Climate-driven trends of biogenic volatile organic compound emissions and their impacts on summertime ozone and secondary organic aerosol in China in the 2050s, Atmos. Environ., 218, 117020, [https://doi.org/10.1016/j.atmosenv.2019.117020](https://doi.org/10.1016/j.atmosenv.2019.117020), 2019.\n* [18] Liu, Z., Wang, B., Wang, C., Sun, Y., Zhu, C., Sun, L., Yang, N., Fan, G., Sun, X., Xia, Z., Pan, G., Zhu, C., Gai, Y., Wang, X., Xiao, Y., Yan, G., and Xu, C.: Characterization of photochemical losses of volatile organic compounds and their implications for ozone formation potential and source apportionment during summer in suburban Jinan, China, Environ. Res., 238, 117158, [https://doi.org/10.1016/j.envres.2023.117158](https://doi.org/10.1016/j.envres.2023.117158), 2023.\n* [19] Mills, G., Sharps, K., Simpson, D., Pleijel, H., Broberg, M., Uddling, J., Jaramillo, F., Davies, W. J., Dentener, F., Van den Berg, M., Agrawal, M., Agrawal, S. B., Ainsworth, E. A., Buker, P., Emberson, L., Feng, Z., Harrems, H., Hayes, F., Kobayashi, K., Paoletti, E., Van Dingreen, R.: Ozone pollution will compromise efforts to increase global wheat production, Glob. Change Biol., 24, 3560-3574, [https://doi.org/10.1111/gcb.14157](https://doi.org/10.1111/gcb.14157), 2018.\n* [20] Nelson, B. S., Stewart, G. J., Drysdale, W. S., Newland, M. J., Vaughan, A. R., Dunmore, R. E., Edwards, P. M., Lewis, A. C., Hamilton, J. F., Acton, W. J., Hewitt, C. N., Crillley, L. R., Alam, M. S., Sahin, U. A., Beddows, D. C. S., Bloss, W. J., Slater, E., Whalley, L. K., Heard, D. E., Cash, J. M., Langford, B., Nemitz, E., Sommariva, R., Cox, S., Shivani, Gadi, R., Gurjar, B. R., Hopkins, J. R., Rickard, A. R., and Lee, J. D.: In situ ozone production is highly sensitive to volatile organic compounds in Delhi, India, Atmos. Chem. Phys., 21, 13609-13630, [https://doi.org/10.5194/acp-21-13609-2021](https://doi.org/10.5194/acp-21-13609-2021), 2021.\n* [21] Ren, J., Guo, F., and Xie, S.: Diagnosing ozone-NO\\({}_{\\text{X}}\\)-VOC sensitivity and revealing causes of ozone increases in China based on 2013\\(-\\)2021 satellite retrievals, Atmos. Chem. Phys., 22, 15035-15047, [https://doi.org/10.5194/acp-22-15035-2022](https://doi.org/10.5194/acp-22-15035-2022), 2022.\n* [22] Sander, S. P. and Peterson, M. E.: Kinetics of the Reaction HO\\({}_{2}\\)+NO\\({}_{2}\\)+M=HO\\({}_{2}\\)NO\\({}_{2}\\)+M, J. Phys. Chem., 88, 1566-1571, [https://doi.org/10.1021/j150652a025](https://doi.org/10.1021/j150652a025), 1984.\n* [23] Saunders, S. M., Jenkin, M. E., Derwent, R. G., and Pilling, M. J.: Protocol for the development of the Master Chemical Mechanism, MCM v3 (Part A): tropospheric degradation of non-aromatic volatile organic compounds, Atmos. Chem. Phys., 3, 161-180, [https://doi.org/10.5194/acp-3-161-2003](https://doi.org/10.5194/acp-3-161-2003), 2003.\n* [24] Shi, Z., Vu, T., Kotthaus, S., Harrison, R. M., Grimmond, S., Yue, S., Zhu, T., Lee, J., Han, Y., Demuzere, M., Dunmore, R. E., Ren, L., Liu, D., Wang, Y., Wild, O., Allan, J., Acton, W. J., Barlow, J., Barratt, B., Beddows, D., Bloss, W. J., Calzolai, G., Carrathers, D., Carslaw, D. C., Chan, Q., Chatzidikova, L., Chen, Y., Cilley, L., Coe, H., Dai, T., Doherty, R., Duan, F., Fu, P., Ge, B., Ge, M., Guan, D., Hamilton, J. F., He, K., Heal, M., Heard, D., Hewitt, C. N., Hollaway, M., Hu, M., Ji, D., Jiang, X., Jones, R., Kalberer, M., Kelly, F. J., Kramer, L., Langford, B., Lin, C., Lewis, A. 
C., Li, J., Li, W., Liu, H., Liu, J., Loh, M., Lu, K., Lucarelli, F., Mann, G., McFigans, G., Miller, M. R., Mills, G., Monk, P., Nemitz, E., O'Connor, F., Ouyang, B., Palmer, P. I., Percival, C., Popoola, O., Reeves, C., Rickard, A. R., Shao, L., Shi, G., Spracklen, D., Stevenson, D., Sun, Y., Sun, Z., Tao, S., Tong, S., Wang, Q., Wang, W., Wang, X., Wang, Z., Wei, L., Whalley, L., Wu, X., Wu, Z., Xie, P., Yang, F., Zhang, Q., Zhang, Y., Zhang, Y., and Zheng, M.: Introduction to the special issue \"In-depth study of air pollution sources and processes within Beijing and its surrounding region (APHH-Beijing)\", Atmos. Chem. Phys., 19, 7519-7546, [https://doi.org/10.5194/acp-19-7519-2019](https://doi.org/10.5194/acp-19-7519-2019), 2019.\n* [25] Shindell, D. and Smith, C. J.: Climate and air-quality benefits of a realistic phase-out of fossil fuels. Nature, 573, 408-411, [https://doi.org/10.1038/s41586-019-1554-z](https://doi.org/10.1038/s41586-019-1554-z), 2019.\n* [26] Sicard, P., De Marco, A., Agathokleous, E., Feng, Z., Xu, X., Paoletti, E., Rodriguez, J. D., and Calatayud, V.: Amplified ozone pollution in cities during the COVID-19 lockdown, Sci. Total Environ., 735, 139542, [https://doi.org/10.1016/j.scitotenv.2020.139542](https://doi.org/10.1016/j.scitotenv.2020.139542), 2020.\n* [27] Sommariva, R., Cox, S., Martin, C., Boronska, K., Young, J., Jimack, P. K., Pilling, M. J., Matthaios, V. N., Nelson, B. S., Newland, M. J., Panagi, M., Bloss, W. J., Monks, P. S., and Rickard, A. R.: AIChem (version 1), an open-source box model for the Master Chemical Mechanism, Geosci. Model Dev., 13, 169-183, [https://doi.org/10.5194/gmd-13-169-2020](https://doi.org/10.5194/gmd-13-169-2020), 2020.\n* [28] Tan, Z., Hofzumahaus, A., Lu, K., Brown, S. S., Holland, F., Huey, L. G., Kiedler-Scharr, A., Li, X., Liu, X., Ma, N., Min, K-E., Rohrer, F., Shao, M., Wahner, A., Wang, Y., Wedenschorler, A., Wu, Y., Wu, Z., Zeng, L., Zhang, Y., and Fuchs, H.: No Evidence for a Significant Impact of Heterogeneous Chemistry on Radical Concentrations in the North China Plain in Summer 2014, Environ. Sci. Technol., 54, 5973-5979, [https://doi.org/10.1021/acs.est.0c0525](https://doi.org/10.1021/acs.est.0c0525), 2020.\n\nTan, Z., Lu, K., Ma, X., Chen, S., He, L., Huang, X., Li, X., Lin, X., Tang, M., Yu, D., Wahner, A., and Zhang, Y.: Multiple Impacts of Aerosols on O\\({}_{3}\\) Production are Largely Compensated: A Case Study Shenzhen, China, Environ. Sci. Technol., 56, 24, 17569-17580, [https://doi.org/10.1021/acs.est.2c06217](https://doi.org/10.1021/acs.est.2c06217), 2022.\n* Wang et al. (2024) Wang, M., Wang, S., Zhang, R., Yuan, M., Xu, Y., Shang, L., Song, X., Zhang, X. and Zhang, Y.: Exploring the HONO source during the COVID-19 pandemic in a megacity in China, J. Environ. Sci., 149, 616-627, [https://doi.org/10.1016/j.jes.2023.12.021](https://doi.org/10.1016/j.jes.2023.12.021), 2024.\n* Whalley et al. (2018) Whalley, L. K., Stone, D., Dunnmore, R., Hamilton, J., Hopkins, J. R., Lee, J. D., Lewis, A. C., Williams, P., Kleffmann, J., Laufs, S., Woodward-Massey, R., and Heard, D. E.: Understanding in situ ozone production in the summertime through radical observations and modelling studies during the Clean air for London project (ClearIfLo), Atmos. Chem. Phys., 18, 2547-2571, [https://doi.org/10.5194/acp-18-2547-2018](https://doi.org/10.5194/acp-18-2547-2018), 2018.\n* Whalley et al. (2021) Whalley, L. K., Slater, E. J., Woodward-Massey, R., Ye, C., Lee, J. D., Squires, F., Hopkins, J. R., Dunnmore, R. 
E., Shaw, M., Hamilton, J. F., Lewis, A. C., Mehra, A., Worrall, S. D., Bacak, A., Bannan, T. J., Coe, H., Percival, C. J., Ouyang, B., Jones, R. L., Crilley, L. R., Kramer, L. J., Bloss, W. J., Vu, T., Kothaus, S., Grimmond, S., Sun, Y., Xu, W., Yue, S., Ren, L., Acton, W. J. F., Hewitt, C. N., Wang, X., Fu, P., and Heard, D. E.: Evaluating the sensitivity of radical chemistry and ozone formation to ambient VOCs and NO\\({}_{x}\\) in Beijing, Atmos. Chem. Phys., 21, 2125-2147, [https://doi.org/10.5194/acp-21-2125-2021](https://doi.org/10.5194/acp-21-2125-2021), 2021.\n* REVIHAAP Project: Technical Report [Internet], Copenhagen: WHO Regional Office for Europe, [https://www.ncbi.nlm.nih.gov/books/NBK361805/](https://www.ncbi.nlm.nih.gov/books/NBK361805/) (last access: November 2023), 2013.\n* Xie et al. (2017) Xie, M., Shu, L., Wang, T., Liu, Q., Gao, D., Li, S., Zhuang, B., Han, Y., Li, M., and Chen, P.: Natural emissions under future climate condition and their effects on surface ozone in the Yangtze River Delta region, China, Atmos. Environ., 150, 162-180, [https://doi.org/10.1016/j.atmosenv.2016.11.053](https://doi.org/10.1016/j.atmosenv.2016.11.053), 2017.\n* Zheng et al. (2018) Zheng, B., Tong, D., Li, M., Liu, F., Hong, C., Geng, G., Li, H., Li, X., Peng, L., Qi, J., Yan, L., Zhang, Y., Zhao, H., Zheng, Y., He, K., and Zhang, Q.: Trends in China's anthropogenic emissions since 2010 as the consequence of clean air actions, Atmos. Chem. Phys., 18, 14095-14111, [https://doi.org/10.5194/acp-18-14095-2018](https://doi.org/10.5194/acp-18-14095-2018), 2018." | using the DPEC inventory.\n\nemissions might impact O\\({}_{3}\\) production rates in future scenarios. When NO\\({}_{x}\\) and VOCs are varied in bulk, the only scenario projected to reduce O\\({}_{3}\\) production is the Baseline scenario. However, the opposite is true when speculated AVOCs are varied by projections defined by the SAPRC07 AVOC groupings in the DPEC inventory. The different scenarios result in different emission reductions in different AVOCs, with different propensities leading to O\\({}_{3}\\) production.\n\n | Provide a brief summary of the text. | 125 |
mdpi/03b23aed_b020_4801_9dc2_237ce08c7e8f.md | # Comparing Offshore Ferry Lidar Measurements in the Southern Baltic Sea with ASCAT, FINO2 and WRF
Daniel Hatfield 1, Charlotte Bay Hasager 1 and Ioanna Karagali 2

1 Department of Wind Energy, Technical University of Denmark, Frederiksborgvej 399, 4000 Roskilde, Denmark; [email protected]

2 Danish Meteorological Institute, Lyngbyvej 100, 2100 Copenhagen, Denmark; [email protected]
## 1 Introduction
Offshore wind energy capacity is expected to grow from 12 GW in 2020 to 30 GW in 2030 [1]. Since 1991, when the first offshore wind turbine was installed at Vindeby, Denmark [2], the scale of and demand for offshore wind are at their highest. With the increasing size of offshore wind farms, the average water depth for installation has increased, furthering the need for more robust planning and development. Subsequently, a solid basis of wind information is necessary to estimate the future energy yield, expected return, and design optimization of the planned investment. The offshore wind resource has become an area of interest, with increasing installation of wind farms in coastal areas and with the move to greater depths (\\(>\\)50 m) such as the Hywind project [3]. Traditional ground-fixed meteorological masts (met masts) have been used offshore to study the wind climate near wind farms, such as the German Forschungsplattformen In Nord- und Ostsee (FINO) masts in the North and Baltic Seas [4]; however, for far-offshore locations they become extremely costly and complicated to install, so remote sensing alternatives are favored.
The main alternative to offshore met masts has been lidar systems [5]; more specifically, floating lidar systems (FLS) [6]. FLS research has thus increased dramatically during the last few years, with the first buoy-lidar system installed and tested in 2009 [7]. The superior flexibility and lower cost compared to the traditional met mast is an incentive; however, the reliability needs to be addressed before it becomes an industrial standard [8]. The first ship-based lidar measurements were performed by Fraunhofer IWES in [9], where the motion correction algorithm was adapted for ship-based floating systems from [10] while being tested in [9; 11]. The sea-induced movement correction depends on the FLS itself: either employing motion compensation (i.e., from a stability platform) or motion detection using suitable sensors. The verification of FLS motion correction is typically studied in two ways: comparison with mesoscale models [12] or with in situ data [9; 13]. This means that for ship-borne lidar in deep-water locales, the main validation technique at multiple heights, so far, has been comparison with mesoscale models.
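For the detection-based approach, the core of the correction is removing the ship's own velocity from the lidar-measured wind vector. The following is a minimal sketch of that step, assuming the ship speed and course over ground come from the satellite compass; it deliberately neglects the roll-, pitch-, and heave-induced corrections that a full algorithm such as that of [10] applies.

```python
import numpy as np

def correct_ship_motion(ws_rel, wd_rel, ship_speed, ship_course):
    """Remove the ship's horizontal velocity from a lidar-measured wind.

    ws_rel      : wind speed measured relative to the moving ship (m/s)
    wd_rel      : measured meteorological wind direction (deg, from north)
    ship_speed  : ship speed over ground (m/s)
    ship_course : ship course over ground (deg, direction of travel)

    Returns (ws_true, wd_true); attitude (roll/pitch) effects are ignored.
    """
    # Meteorological convention: the direction is where the wind comes FROM,
    # so the wind vector points opposite to it.
    wd = np.deg2rad(wd_rel)
    u_rel = -ws_rel * np.sin(wd)   # eastward component
    v_rel = -ws_rel * np.cos(wd)   # northward component

    crs = np.deg2rad(ship_course)
    u_ship = ship_speed * np.sin(crs)
    v_ship = ship_speed * np.cos(crs)

    # Ground-relative wind = ship-relative wind + ship velocity
    u = u_rel + u_ship
    v = v_rel + v_ship

    ws_true = np.hypot(u, v)
    wd_true = np.rad2deg(np.arctan2(-u, -v)) % 360.0
    return ws_true, wd_true
```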
The New European Wind Atlas (NEWA) [14] provides a publicly available wind resource dataset for Europe. It is based on 30 years of mesoscale simulations using the Weather Research and Forecasting (WRF) model [15], available every 30 min with a 3 km \\(\\times\\) 3 km grid resolution. As mesoscale models are not specifically developed for wind energy applications, the main objective for the NEWA project was to gather the best practices and to create a unified modeling methodology [14]. Although WRF has been shown to compare well with measurements at relatively shallow-water offshore locations [16], deep sea sites have not been assessed, and site-specific measurements are still vital to study the wind climate.
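To relate such gridded model output to point measurements, the time series at the model cell nearest each observation point is typically extracted. The sketch below illustrates this for a WRF-type netCDF file; the variable name WS100, the dimension names, and the file name are assumptions for illustration only, not the actual NEWA data layout.

```python
import numpy as np
import xarray as xr

def nearest_wrf_series(ds, lat, lon, var="WS100"):
    """Time series of `var` at the WRF grid cell nearest to (lat, lon).

    WRF uses 2-D curvilinear coordinates (XLAT, XLONG), so .sel() with
    method='nearest' does not apply; the nearest cell is found by brute force.
    """
    # Squared distance with a crude longitude shrink factor; good enough
    # for picking one cell on a 3 km grid.
    d2 = (ds["XLAT"].values - lat) ** 2 + \
         ((ds["XLONG"].values - lon) * np.cos(np.deg2rad(lat))) ** 2
    j, i = np.unravel_index(np.argmin(d2), d2.shape)
    # Dimension names south_north/west_east are the WRF defaults (assumed).
    return ds[var].isel(south_north=j, west_east=i)

# ds = xr.open_dataset("newa_wrf_baltic.nc")     # hypothetical file name
# ws = nearest_wrf_series(ds, 55.007, 13.154)    # approximate FINO2 position
```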
Satellite wind retrievals can provide observations of the ocean surface at improved spatial resolution compared to those of mesoscale models [17]. Surface-derived winds from scatterometer instruments such as the Advanced Scatterometer (ASCAT) are obtained at a height of 10 m and have been used for offshore wind energy application studies [18]. While not directly applicable to the wind observations needed at wind farm hub heights (100+ m), they do provide daily global wind field measurements at the ocean surface. Vertical extrapolation of satellite data has been performed by [19; 20; 21], bringing surface winds to hub heights using the long-term stability correction from [22]. These methods require longer time scales and accurate stability information, and vertically extrapolating co-located ASCAT points to the moving instantaneous wind profiles of the ferry lidar adds another level of complexity. However, it is possible to extrapolate lidar or offshore met mast profiles down to the 10 m height using surface layer theory in order to make a direct comparison with ASCAT. The same process can be performed with the nearby German meteorological mast, FINO2, whose much larger, stationary dataset can serve as a standard for comparison.
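Such surface layer extrapolation rests on Monin-Obukhov similarity theory, i.e., the diabatic logarithmic profile \\(u(z)=(u_{*}/\\kappa)\\left[\\ln(z/z_{0})-\\psi_{m}(z/L)\\right]\\). Below is a minimal sketch using the common Businger-Dyer stability functions; it assumes the roughness length \\(z_{0}\\) and the Obukhov length \\(L\\) are already known, whereas in practice \\(z_{0}\\) would follow from, e.g., Charnock's relation and \\(L\\) from bulk air-sea measurements.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def psi_m(z, L):
    """Integrated stability function for momentum (Businger-Dyer forms)."""
    if L == 0 or np.isinf(L):
        return 0.0                        # neutral
    zeta = z / L
    if zeta > 0:                          # stable
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25       # unstable
    return (2.0 * np.log((1.0 + x) / 2.0)
            + np.log((1.0 + x ** 2) / 2.0)
            - 2.0 * np.arctan(x) + np.pi / 2.0)

def extrapolate(ws1, z1, z2, z0, L):
    """Scale a wind speed from height z1 to z2 with the diabatic log law."""
    ustar = KAPPA * ws1 / (np.log(z1 / z0) - psi_m(z1, L))
    return (ustar / KAPPA) * (np.log(z2 / z0) - psi_m(z2, L))

# e.g., bring the lowest ferry lidar level (65 m) down to the ASCAT 10 m
# height under slightly unstable conditions (illustrative values):
ws10 = extrapolate(ws1=9.1, z1=65.0, z2=10.0, z0=2e-4, L=-150.0)
```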
This region of the Baltic Sea was studied extensively in [23] in the different methodologies of offshore wind resource assessment and in [24; 25] for low-level jet activity. Ref. [26] looked at mesoscale processes over the area, incorporating lidar measurements. Refs. [20; 27; 28] studied satellite synthetic aperture radar (SAR) and ASCAT measurements over the area of interest with [19], using the FINO2 location as reference for the long-term stability correction used in profile extrapolation. A study of the NEWA WRF model production runs used in this study with that of the FINO met masts is detailed and presented in [14].
The purpose of this work is to assess the strengths and limitations of the aforementioned ferry lidar experiment using the available datasets from ASCAT and FINO2 in the remote offshore locations of the southern Baltic Sea. This study also highlights strengths and weaknesses of the various sources of data, working towards combining them to obtain optimal estimates of the wind power potential. These techniques alongside mesoscale model simulations such as the NEWA WRF dataset can then be used to study potential regions for entirely floating wind farms in deep water locales.
The various datasets and filtering processes used throughout this study are described in Section 2, as well as the methods of data co-location and stability calculations. Section 3 presents the results of the ferry lidar with ASCAT, FINO2, and WRF as well as an intercomparison of the previously mentioned datasets. This section also introduces the low-level jet results from the ferry lidar experiment. Section 4 discusses the advantages and disadvantages of using a ferry-based compared to buoy-based lidar, as well as difficulties of interpreting results from ferry lidar measurements as compared with other data sources investigated in this work. Finally, conclusions and outlook are outlined in Section 5.
## 2 Measurements and Data Processing
The interest area of the southern Baltic Sea spans from 53\\({}^{\\circ}\\)N to 56\\({}^{\\circ}\\)N and from 10\\({}^{\\circ}\\)E to 21\\({}^{\\circ}\\)E with an average water depth of 55 m. There are 11 wind farms located within the aforementioned area of interest, with four having potential influence on the wind in the path of the ferry lidar (Nysted, Redsand II, EnBW Baltic I & II).
In this study, multiple data sources were used; NEWA ferry lidar wind measurements and meteorological sensor data, Operational Sea Surface Temperature and Ice Analysis (OSTIA) daily satellite sea-surface temperature (SST) data, FINO2 wind and meteorological data, ASCAT wind product data, and NEWA WRF production-run mesoscale model data. These datasets will be explained in more detailed with the time period in which they were used in the following sections. All datasets were recorded in Coordinated Universal Time (UTC).
### Kiel Ferry Lidar
The NEWA ferry lidar campaign was conducted in two phases: the first one as a preparatory campaign in the North Sea from Bremerhaven to Helgoland [29] for 2 months in 2016. The main campaign was a four-month campaign from 7 February 2017 to 6 June 2017 in the Baltic Sea from Kiel, Germany to Klaipeda, Lithuania outlined in [12] and was the only offshore measurement campaign in the NEWA project.
A WindCube v2 lidar was installed on top of the vessel Victoria Seaways, which belongs to the DFDS Seaways Group, where it ran almost continuously (20 h at sea, 4 h in the harbor). Due to the length of the trip, measurements were recorded on average in the same location every second day with small deviations occurring further away from the harbor. Wind measurements were performed at twelve different altitudes from 65 m to 275 m above sea level (accounting for the 25 m to the deck height). Meteorological parameters (temperature, humidity, air pressure, and precipitation) were also recorded along with a weather station, motion sensors, and a satellite compass installed near the lidar position.
The processed dataset used comprised motion-corrected horizontal wind speed and directions, carrier-to-noise ratios (CNR), and ship positions and time stamps sampled at 0.7 s. Similar to the original campaign, CNR values below \\(-\\)29 dB were omitted as per the recommendation from the manufacturer. Data availability for 10 min averages in the four-month period was well above 90% below 150 m, with less availability at the higher heights down to 40% at 275 m. See [12] for the full data report.
### Sea Surface Temperature
The ship lidar system had no sea surface temperature (SST) system aside from the ship SST measurements which were deemed untrustworthy [12], so daily satellite SST data were used from Global Ocean OSTIA Sea Surface Temperature for the Baltic Sea area. This product has a spatial resolution of \\(1/20^{\\circ}\\) and a temporal resolution of 1 day. OSTIA SST has zero mean bias and an accuracy of 0.57 K compared to in situ measurements [30], and our comparison with the FINO2 SST at 2 m data yielded a correlation of \\(R^{2}=0.99\\) (not shown).
### Fino2
The FINO project began in the early 2000s and now consists of two offshore research platforms in the North Sea and one in the Baltic Sea, all of which have 100 m meteorological towers (see [4]). Meteorological parameters are recorded at frequencies of 1-10 Hz, and data are averaged in intervals ranging from 10 to 30 min. FINO2 is the met mast in the Baltic Sea located at 55.0069\\({}^{\\circ}\\)N-13.1542\\({}^{\\circ}\\)E, situated 33 km north of the German island Rugen between the borders of Denmark, Germany, and Sweden. FINO2 is installed three kilometers north of the EnBW Baltic 2 wind farm, where the water depth is 35 m. The observations used were taken from the period where SST measurements began, 1 April 2013, to 30 November 2017. The relevant quantities were wind speeds at heights (32 m, 42 m, 52 m, 62 m, 72 m, 82 m, 92 m, 102 m) and wind directions at (31 m, 51 m, 71 m, 91 m) as well as meteorologicaldata of air pressure (30 m), air temperature (30 m), sea surface temperature (2 m depth), and relative humidity (30 m).
The minimum distance of the ferry lidar track to FINO2 is 24 km; therefore, a range of 30 km was used to have a suitable number of collocated observations to directly compare wind speeds at 62 m, 72 m, 92, and 102 m, as well as wind directions at 71 m and 91 m. The collocations occurred between 07:00-08:00 during the eastward path of the ferry and between 23:00-00:00 on the westward path. Due to the separation distance, different time scales were used for the comparison at 10 min, 30 min, and 1 h, where the 1 h timescale consistently yielded the best results. Different wind speed thresholds, time lags, wind direction dependencies, and distance ranges were tested. The 1 h direct comparison showed the most stable performance.
### Ascat
The Advanced Scatterometer is an instrument launched by the European Space Agency (ESA) flown on the Meteorological Operational (Metop) satellites developed by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) [31]. It was first launched on Metop-A in October 2006, launched as payload on Metop-B in September 2012, and also on Metop-C in November 2018. The instrument has an effective swath width of 512.5 km with a nadir gap of 700 km, leading to 1-3 passes daily from both the ascending and descending passes, depending on the time period.
The EUMETSAT ASCAT wind products provide wind speed and direction measurements at 10 m above the sea surface. The data used in this study have a grid spacing of 12.5 km and a spatial resolution of 25 km [31]. This dataset is processed and distributed by EUMETSAT Ocean and Sea Ice (OSI) Satellite Application facility (SAF) as well as Advanced Retransmission Service (EARS) ground system. Both of these are implemented at the Koninklijk Nederlands Meteorologisch Instituut (KNMI) and are freely available worldwide (see knmi.nl). For the present study, data were obtained through the Copernicus Marine Service (last accessed 25 February 2022) [https://resources.marine.copernicus.eu/](https://resources.marine.copernicus.eu/), using product WIND_GLO_WIND_L3_NRT_OBSERVATIONS_012_002 from 2016 and onwards, while from 2007-2015 product WIND_GLO_WIND_L3_REP_OBSERVATIONS_012_005 was used.
A scatterometer is an active radar that measures the backscatter power from transmitted pulses. This backscatter depends on the surface roughness; a completely smooth surface will result in reflection with no backscatter back to the instrument. As the roughness increases, the backscatter component increases simultaneously [32]. The wind stress creates small-scale (cm) capillary waves resulting from the surface tension. Large-scale waves or tidal influences have lower resulting backscatter influence due to the wavelength (C band). As the wind speed increases, so too does the surface tension and therefore the amplitude of the resulting capillary waves. In effect, this increases the surface roughness and thereby allows for more backscatter. The measurements of the backscatter power from the observed area can be used to estimate the normalized radar cross section (NRCS, \\(\\sigma_{0}\\)) [33]. The NRCS is the relation between the received and transmitted power which depends on the radar settings, the atmospheric attenuation, and the ocean surface characteristics [34]. The computation of ocean surface winds from the NRCS comes from an empirically derived geophysical model function (GMF) [35].
The physical definition of the ASCAT winds are horizontal stress-equivalent (SE) winds obtained using the CMOD7 GMF [35; 36]. SE winds are estimated from the sea surface roughness to the 10 m height independent of atmospheric conditions. In order to obtain SE winds from measured 10 m winds, one must first compute the equivalent neutral winds at 10 m and then multiply by a density correction factor. Since the atmospheric boundary is, on average, weakly unstable, the SE wind will be, on average, slightly stronger than real wind by approximately 0.1 to 0.2 m s\\({}^{-1}\\), with a standard deviation of 1.7 m s\\({}^{-1}\\) for the 12.5 km product and a bias of 0.02 m s\\({}^{-1}\\)[31].
### Collocation Procedure
During the four-month ferry lidar campaign, 126 wind vector cell pairs (WVCs) from ASCAT were collocated with the lidar profile near its center, where 119 had a fully available wind profile. For any given collocation, the ferry lidar wind profiles were then averaged about the central collocation point for a total period of 1 h which, with the ship's speed and direction, spatially covered the two ASCAT cells in the ship's path (see Figure 1). Note that some of the ferry lidar averaged data will fall outside of the ASCAT grid cells in the open sea where the ship had a higher overall speed compared to locations closer to the two harbors. The position of each collocated central lidar profile on the ferry is shown in Figure 2. Due to the sun-synchronous orbit of the satellite and the ferry schedule, the points clustered closer to shore during the nighttime (typically between 19:00-21:00) in the western part of the transect, whereas the points in the open water near FINO2 and outside of the harbor in Klaipeda were measured during the day (between 8:00-11:00) in the eastern part of the transect.
The same process of co-location was carried out for the FINO2 data when SST information was recorded (2013-2017), where only one ASCAT grid cell enveloping the FINO2 position was used. This accounts for 2688 1 h complete collocations.
### Atmospheric Stability Calculation
As meteorological measurements of atmospheric stability are uncommon in deep sea locations, we apply the bulk Richardson method from profile measurements [37] to obtain the stability correction for the logarithmic wind profile. For both the FINO2 met mast location and the moving ferry lidar, the wind speed at the lowest height \\(v_{h}\\), the temperature at that height \\(T_{h}\\), the difference in height from the sea-surface \\(z_{h}\\), and the differences in the virtual potential temperatures at sea level \\(\\Delta\\Theta_{\\varphi}=\\Theta_{\\varphi,h}-\\Theta_{v,SST}\\) were used to derive the dimensionless bulk Richardson number following [38]:
\\[Ri_{b}=\\frac{g}{\\Theta_{\\varphi,h}}\\frac{z_{h}\\Delta\\Theta_{v}}{v_{h}^{2}} \\tag{1}\\]
Figure 1: **Left**: Position of point in Europe. **Right**: ASCAT grid cell pair and corresponding 1 h of the ferry lidar path on 2 September 2017.
Figure 2: Collocation of ASCAT grid cells with Kiel ferry lidar path with Kiel on the west and Klaipeda on the east side of the route. The \\(\\mathsf{x}\\)’s correspond to the central lidar position in the hourly average.
where \\(g\\) is the acceleration due to gravity. The dimensionless stability parameter is thus as follows for stable and unstable cases:
\\[\\zeta=\\begin{cases}\\frac{10R_{i_{b}}}{1-5R_{i_{b}}},&R_{i_{b}}>0\\\\ 10R_{i_{b}},&R_{i_{b}}\\leq 0\\end{cases} \\tag{2}\\]
To extrapolate the profiles of the lidar and FINO2 down to the 10 m height, the logarithmic wind profile was used following [38]:
\\[u(z)=\\frac{u_{*}}{\\kappa}\\ln\\left(\\frac{z}{z_{0}}-\\Psi_{m}(z/L)\\right) \\tag{3}\\]
where \\(u_{*}\\) is the frictional velocity, \\(z_{0}\\) is the roughness length, \\(\\kappa\\) is the von Karman constant (\\(\\kappa=0.4\\)), and \\(\\Psi_{m}(z/L)\\) is the stability correction function introduced by [39] and [40]. The Obukhov length \\(L\\) is obtained from the stability parameter \\(\\zeta=z_{h}/L\\), while \\(z_{0}\\) and \\(u_{*}\\) were calculated for each individual wind profile by using least-squares fitting of the lower portion of the wind profile where an interval is set to \\(1.5\\)\\({}^{-}5.3\\)\\({}^{-}3\\), restricting the \\(z_{0}\\) value based on the ocean surface roughness lengths in the work of [41]. The same stability classification scheme is used as in [42], which is shown in Table 1. A resultant extrapolated collocated profile can be seen in Figure 3. The full derivation of the stability correction can be found in Appendix A.
\\begin{table}
\\begin{tabular}{c c} \\hline \\hline
**Stability Classification** & **Range** \\\\ \\hline Very stable & \\(0.6<\\zeta<2.0\\) \\\\ Stable & \\(0.2<\\zeta<0.6\\) \\\\ Weakly stable & \\(0.02<\\zeta<0.2\\) \\\\ Neutral & \\(-0.02<\\zeta<0.02\\) \\\\ Weakly unstable & \\(-0.2<\\zeta<-0.02\\) \\\\ Unstable & \\(-0.6<\\zeta<-0.2\\) \\\\ Very Unstable & \\(-2.0<\\zeta<-0.6\\) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Atmospheric stability classification scheme as suggested in [42].
Figure 3: Wind profile extrapolation of co-located ASCAT and lidar measurements on 9 February 2017.
### Mesoscale Model Simulations
The New European Wind Atlas uses the Weather Research and Forecasting (WRF) mesoscale model [15]. Mesoscale models cover a limited area and require boundary conditions from a global simulation system. To produce the best representation of the state of the atmosphere, data assimilation is used every 3 to 6 h with horizontal resolutions of tens of kilometers. The NEWA WRF modeling was carried out in a nested domain of high spatial resolution for a period of 4 years, where long-term wind statistics using the NCAR-NCEP (National Center for Atmospheric Research/National Centers for Environmental Prediction) reanalysis data were performed during 30 years to provide basis for a long-term adjustment of the results [15].
The NEWA WRF 30 min data was extracted in two different areas within the southern Baltic domain of the WRF production runs: the FINO2 location for the entire FINO2/ASCAT co-location period (N = 31,256) and the entire ferry lidar track at each 30 min measurement (N = 1181). Due to the limitation of the ASCAT co-locations, there were N = 2395 recorded WRF points at the FINO2 location. This is all summarized in Table 2.
Similar to that presented in [12], the lidar/WRF co-located data were filtered for harbor effects where only data with a longitude coordinate between 10.4\\({}^{\\circ}\\) and 20.0\\({}^{\\circ}\\) were considered.
## 3 Results
### Ferry Lidar vs. ASCAT
The co-located and extrapolated 10 m lidar data show very good coincidence with ASCAT 10 m wind speed and direction, where the entire evaluation period is shown in Figure 4. The scatter plots of the wind speeds from all data (left) and data filtered for near-neutral stability conditions (right, \\(-0.2\\leq\\zeta\\leq 0.2\\)) are shown in Figure 5. A large scatter and poor correlation is seen when looking at direct comparisons of the entire dataset, whereas filtering the data for near-neutral stability cases improves the correlation and reduces the root-mean-square-error (RMSE) to well within the acceptable OSI SAF product expected performance of ASCAT of 1.7 m s\\({}^{-1}\\) (see [31]). This is, however, reducing the already smaller dataset down to a fifth of the original observations with N = 23. The wind directions were not extrapolated down to the 10 m height so the comparisons with ASCAT are taken directly for the lowest possible height (i.e., 65 m). All of the biases were calculated with respect to that of the lidar, i.e., \\((\\overline{U}_{ASCAT}-\\overline{U}_{lidar})/\\overline{U}_{lidar}\\). The overall statistical information of the wind speed and directions from the ferry lidar compared to ASCAT co-locations are summarized in Tables 3 and 4, respectively.
\\begin{table}
\\begin{tabular}{c c c} \\hline & **Period of Collocation** & **Number of Samples** \\\\ \\hline FINO2 & 16 April 2013 to 30 November 2017 & 31,256 \\\\ ASCAT & 2 January 2007 to 29 December 2017 & 2395 \\\\ Ferry lidar & 13 February 2017 to 6 June 2017 & 1181 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: NEWA WRF 30 min collocation period and number of samples with each of the measurements.
The wind direction comparisons in Table 4 have a higher overall error with those of ASCAT due to the fact that the data were compared at different heights. The wind directional data was not extrapolated such as the wind speed so the large error is mainly attributed to this expected veer in wind direction with height. Despite these large errors, the time series data in Figure 4 show a very similar behavior.
Figure 4: ASCAT-lidar co-location time series data: (**a**) 10 m wind speed extrapolation; (**b**) wind directions at different heights.
Figure 5: Scatter of extrapolated 10 m ferry lidar wind speed measurements with ASCAT 10 m. (**left**) All co-located extrapolated values (**right**) are filtered for near neutral stability (\\(-0.2\\leq\\zeta\\leq 0.2\\)). The blue line is \\(y=x\\) and the red line is the resultant linear regression.
### Ferry Lidar vs. FINO2
The overall statistical information for both the wind speed and direction of the ferry lidar with the other datasets are presented in Tables 3 and 4, respectively. The FINO2 dataset was directly compared to the ferry lidar at the closest available height, i.e., the 62 m wind speed at FINO2 was compared with 65 m from the ferry lidar, 72 m with 75 m, and 92 m with 90 m, up to 102 m and 100 m. Direct comparisons were performed at a maximum distance range of 30 km using 1 h measurements. Different timescales for direct comparisons were assessed (not shown), with the best results obtained when using 1 h averages. The 30 km range was chosen as it resulted in the highest number of collocated samples (N = 100) while only being 6 km away from the minimum distance (\\(\\sim\\)24 km). Different methods of comparisons were also studied (i.e., time delay, wind direction dependency, ship position filtering); however, the direct comparison at 1 h intervals yielded the best results.
The ferry lidar/FINO2 results consistently have a high correlation at all heights of \\(R^{2}=0.87\\) or greater; nevertheless, due to the distance range of comparison with FINO2, we see a lower overall RMSE with that of the ferry lidar (1.70 m s\\({}^{-1}\\)) than compared to the ASCAT-lidar comparison RMSE (1.29 m s\\({}^{-1}\\)). Using the major wind directions at FINO2 during the day between 180\\({}^{\\circ}\\) and 270\\({}^{\\circ}\\), the results yield the lower RMSE compared to all other sources as well as low bias and the best correlation. The wind directions at FINO2 change diurnally with a stronger westerly wind during the night. As the lidar passed by FINO2 during the day, the dominant wind directions of this time period were used.
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline & **Lidar Height (m)** & **Height (m)** & **R\\({}^{2}\\)** & **RMSE (\\({}^{\\circ}\\))** & **Bias (\\({}^{\\circ}\\))** & **N** \\\\ \\hline Ferry lidar vs. ASCAT (collocated) & 65 & 10 & 0.83 & 34.8 & 0.06 & 119 \\\\ \\hline Ferry lidar vs. WRF (collocated) & 65 & 10 & 0.81 & 53.8 & 0.04 & 119 \\\\ & 65 & 50 & 0.79 & 26.3 & 0.01 & 3671 \\\\ & 75 & 75 & 0.78 & 25.6 & 0.01 & 3671 \\\\ & 100 & 100 & 0.76 & 25.4 & 0.01 & 3671 \\\\ & 200 & 200 & 0.73 & 28.1 & \\(-\\)0.01 & 3671 \\\\ & 250 & 250 & 0.75 & 27.2 & \\(-\\)0.00 & 3671 \\\\ \\hline Ferry lidar vs. FINO2 (30 km distance) & 75 & 71 & 0.86 & 32.8 & 0.04 & 134 \\\\ & 90 & 91 & 0.97 & 14.5 & 0.02 & 134 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Ferry lidar validation statistics (bias, RMSE, correlation) of wind direction with ASCAT, WRF, and FINO2.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline & **Lidar Height (m)** & **Height (m)** & **R\\({}^{2}\\)** & **RMSE (m s\\({}^{-1}\\))** & **Bias (m s\\({}^{-1}\\))** & **N** \\\\ \\hline Ferry lidar vs. ASCAT (collocated) & 10 & 10 & 0.85 & **1.29** & 0.02 & 23 \\\\ \\hline Ferry lidar vs. WRF (collocated) & 10 & 10 & 0.55 & 2.41 & 0.01 & 23 \\\\ & 65 & 50 & 0.76 & 1.90 & \\(-\\)0.01 & 3671 \\\\ & 75 & 75 & 0.76 & 1.93 & \\(-\\)0.02 & 3671 \\\\ & 100 & 100 & 0.77 & 2.04 & \\(-\\)0.00 & 3671 \\\\ & 200 & 200 & 0.79 & 2.27 & \\(-\\)0.02
### ASCAT vs. FINO2
The scatter plots of the extrapolated FINO2 10 m wind speeds and ASCAT, both for all data and those filtered for near-neutral stabilities, are shown in Figure 6. When all available data are considered, independent of stability, there is a much stronger correlation between ASCAT and FINO2 than between ASCAT and the ferry lidar measurements. The RMSE is 1.51 m s\\({}^{-1}\\), significantly lower compared to 3.03 m s\\({}^{-1}\\) that was found for the ASCAT-ferry lidar comparison. When data were filtered for near-neutral atmospheric conditions, RMSE was reduced by 0.38 m s\\({}^{-1}\\) down to 1.13 m s\\({}^{-1}\\), and the correlation R\\({}^{2}\\) increased to 0.9, based on 923 samples. These values are comparable to what was reported under near-neutral cases between ASCAT and the ferry lidar comparisons, although these were based on a much smaller sample size.
### Wrf
WRF has higher overall RMSE values and lower R\\({}^{2}\\) compared to the intercomparisons of the other measured datasets, all summarized in Table 5. FINO2 yielded the highest overall error at both heights which is consistent with [14; 43]. This is seen in both the 30 min direct comparison and the 1 h comparison similar to that in [14]. ASCAT has the lowest overall error compared with the other direct comparisons; however, falls outside the expected performance of ASCAT (RMSE of 1.87 m s\\({}^{-1}\\)).
Wind direction comparisons in Table 6 show better results than those for wind speed. ASCAT shows the smallest error, whereas the ferry lidar has a consistently higher source of error at all heights. The WRF-FINO2 comparison has a surprisingly high RMSE above 40\\({}^{\\circ}\\) at both available heights on comparison. This is not as evident when looking at the time series of the WRF and ASCAT 10 m height wind directions alongside that of the 31 m FINO2 measurements in Figure 7.
Figure 6: Scatter of extrapolated 10 m FINO2 wind speed measurements with ASCAT 10 m. (**left**) All co-located extrapolated values (**right**) filtered for near neutral stabilities (\\(-0.2\\leq\\zeta\\leq 0.2\\)). The blue line is \\(y=x\\) and the red line is the resultant linear regression.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline & **WRF Height (m)** & **Height (m)** & **R\\({}^{2}\\)** & **RMSE (m s\\({}^{-1}\\))** & **Bias (m s\\({}^{-1}\\))** & **N** \\\\ \\hline WRF vs. ASCAT & 10 & 10 & 0.78 & 1.87 & 0.03 & 129 \\\\ \\hline WRF vs. FINO2 & 50 & 52 & 0.67 & 2.45 & 0.00 & 2867 \\\\ & 100 & 102 & 0.70 & 2.61 & \\(-0.03\\) & 2867 \\\\ \\hline WRF vs. Ferry Lidar & 50 & 65 & 0.76 & 1.90 & \\(-0.01\\) & 3671 \\\\ & 100 & 100 & 0.77 & 2.04 & \\(-0.00\\) & 3671 \\\\ & 200 & 200 & 0.79 & 2.27 & \\(-0.02\\) & 3671 \\\\ & 250 & 250 & 0.80 & 2.32 & \\(-0.03\\) & 3671 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: RMSE results of the NEWA WRF data compared to the other data sources for the wind speed.
### Low-Level Jet Case Study
The Baltic Sea is a semi-enclosed basin, meaning that the coastline-sea discontinuity has a strong influence on the wind conditions. During the spring months when the atmospheric stratification is, on average, stable, the temperature differences between land and sea have been shown to be 20 K or greater [24]. When this flow causes an air-sea temperature difference, an area of reduced turbulent mixing can persist for hundreds of kilometers. This increases the influence of inertial oscillations and thus increases the frequency of low-level jet (LLJ) occurrences, especially in coastal areas. Using the definition of [44], an LLJ is defined at the lowest maximum of the wind profile that is at least 2 m s\\({}^{-1}\\) and 25% faster than the next minimum.
Taking this into consideration, in around 30% of the 119 ASCAT-lidar collocations, we saw LLJ wind profiles such as in Figure 8. In this example, the discrepancy of the ASCAT data point and the extrapolated lidar measurements is around 5 m s\\({}^{-1}\\), which is a significant contributor to the large error seen in the original extrapolated data comparison. The majority of these LLJ events that affect the wind profile fitting process are filtered out when using the definition for near-neutral stratification. We see a larger overall occurrence of LLJs in the time period of the ferry lidar experiment mainly due to two reasons; the first being that, on average, the spring months have stable atmospheric stratification. The other contributor is the fact that a ship was docked at each port for 4 h during the night, where a larger portion of the stable atmospheric conditions occur during the day.
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline & **WRF Height (m)** & **Height (m)** & **R2** & **RMSE (°)** & **Bias (°)** & **N** \\\\ \\hline WRF vs. ASCAT & 10 & 10 & 0.95 & 17.6 & \\(-\\)0.01 & 129 \\\\ \\hline WRF vs. FINO2 & 50 & 51 & 0.58 & 41.8 & 0.02 & 2867 \\\\ & 100 & 91 & 0.58 & 42.6 & 0.02 & 2867 \\\\ \\hline WRF vs. Ferry Lidar & 50 & 65 & 0.79 & 26.3 & 0.01 & 3671 \\\\ & 100 & 100 & 0.76 & 25.4 & 0.01 & 3671 \\\\ & 200 & 200 & 0.73 & 28.1 & \\(-\\)0.01 & 3671 \\\\ & 250 & 250 & 0.75 & 27.2 & \\(-\\)0.00 & 3671 \\\\ \\hline \\end{tabular}
\\end{table}
Table 6: RMSE results of the NEWA WRF data compared to the other data sources for the wind direction.
Figure 7: Wind direction timeseries of FINO2, ASCAT, and WRF at the FINO2 location from 2013 to 2017.
## 4 Discussion
The present study has highlighted the complexity of intercomparing different types of wind speed and direction information. Each type of dataset, either from direct and indirect measurements or simulations, showed advantages and disadvantages. The FINO2 met mast represents the most accurate and direct measurement technique at a specific location and thus is lacking spatial representation. Furthermore, its distance to the ferry track rendered direct comparison with the lidar measurements more uncertain, and even more so, due to the spatial averaging of the lidar measurements themselves. It would be preferable to have a met mast located closer to the ferry track but this is not always possible, especially with the limited number of offshore met masts.
ASCAT was representative of much larger spatial scales with wide coverage over large parts of the study area; however, for the short campaign duration the number of overpasses available for direct comparison with the lidar measurements was limited along with the single reference height of 10 m. WRF covers spatial scales at multiple heights, but even the direct wind speed comparisons indicated that it was not able to reproduce the measured quantities. FINO2 shows very high error compared to WRF in both wind speed and direction, which is consistent with the results in both [43] and [14]. This may be due to the proximity of FINO2 to the Baltic II wind farm and the coast. However, we do not see this same level of error in the FINO2/lidar comparison at a larger distance scale and with the wind farm lying in-between the two sources with some of the lidar positions. Overall, WRF yielded the worst comparisons with all datasets at multiple heights.
One of the main challenges facing buoy-based FLS is the need for a reliable, autonomous, and robust setup that can withstand the rough offshore conditions. Ferry lidar systems offer a more reliable solution to this challenge with a more stable platform and a simpler means of installing and maintaining the lidar. This comes at the cost of sacrificing the flexibility that comes with buoys. Buoys can be relocated with relative ease and it is much easier and faster to obtain permissions than other meteorological measurement techniques. Ferry lidars offer the possibility to be installed on already-existent ferry routes, especially ones that correspond to areas of interest. Ship-based lidars are still in their infancy and need a more robust validation process for more widespread use, especially to become a tool for wind resource assessment.
Unlike the buoy-based lidar counterpart, ship-based lidars do not have the same scrutiny or recommended practices already established. The IEA Wind Task 32 [8] gives
Figure 8: Wind profile extrapolation of co-located ASCAT and lidar measurements on 30 March 2017 with an LLJ.
detailed guidance into the recommended practices of FLS. This includes verifying the FLS with a reference no more than 500 m away, using 10 min average wind speeds and recommending a six-month campaign duration for adequate data coverage and met-ocean conditions. All of these conditions are not met within this campaign, with the exception of using the 10 min mean wind speed. However, with the spatial coverage the ferry envelopes, using 10 min averages does not cover the WRF or ASCAT spatial domains and covers a larger area when compared to FINO2. Zentek et al. [13] use radiosondes in conjunction with the ship lidar as means of verification but this results in instantaneous comparisons. In the validation campaign from Bremenhaven to Helgoland (see [29]) from 2 June 2016 to 22 August 2016, the ferry lidar was validated against met mast data in the two ports at distances of 1.14 km and 500 m, respectively. In both of these cases, the ferry was close to, or within range of, the recommended practices; however, it was approaching port and therefore not at the top speed, such as in open water. This is one of the main challenges in ferry-based lidar measurements; it is very difficult to study/validate the corrected wind speed data in open water or at top speeds. This means that ferry lidar systems cannot follow the same recommended practices of FLS and are more difficult to achieve a \"pre-commercial\" maturity level.
A future outlook is the use of machine learning to extrapolate the satellite data to lidar heights for a direct comparison. This is foreseen through training models on low-level atmospheric data and predicting wind speeds at multiple heights similar to that of [45]. Having more lidar systems mounted on existing ferry routes would help bridge the gap of data scarcity offshore and provide more datasets to train extrapolation models. WRF provides similar spatial information desired as well as values at multiple heights, but has been shown to produce the largest overall error where the extrapolated satellite measurements could bridge this gap. In [12], where the ferry lidar experiment was first introduced, the RMSE results compared to their WRF simulations had a similar error to those of the NEWA WRF dataset used in this work. We record an RMSE in wind speed of 2.04 m s\\({}^{-1}\\) for all data, whereas that study recorded a value of 1.91 m s\\({}^{-1}\\) using model setup for initial verification of ferry lidar results before this experiment. This is not a significant difference and we see the same results in the wind directions; our results show an RMSE of 25.4\\({}^{\\circ}\\), whereas this group shows 29.2\\({}^{\\circ}\\). Even using a WRF model parameterized to the Baltic Sea for the period when the ferry-lidar experiment was conducted, the error was still greater than that what is found using FINO2 and ASCAT presented in this work.
ASCAT has shown to be an invaluable source of data offshore, with relatively high frequency of measurements compared to other satellite sources as well as lower overall standard deviations and biases. Even with the difficulty to directly compare the results with those of hub height levels, the results compared to the extrapolated wind profiles proved significant. All ASCAT comparison results fall within the acceptable OSI SAF product statistical range with the exception that the lidar extrapolated 10 m wind speeds when all atmospheric conditions were included. This yields an error of 3.0 m s\\({}^{-1}\\), twice that of the FINO2 equivalent comparison. This discrepancy is mainly due to the period of measurement and measurement heights. From February to June, the atmosphere is, on average, stably stratified; as was found from calculations in this study, stable profiles appeared over 50% of the time. The lowest lidar measuring height from sea level was at 65 m, which means we were not able to properly fit to the lower portion of the wind profile during stable conditions. As turbulent fluxes decrease in magnitude and near-surface winds decouple from the wind aloft [46; 47], the surface layer is not represented at a height of 65 m, and thus is not properly representing the wind at 10 m when extrapolated downwards.
The lower surface layer experiencing stable stratification and the high frequency of low-level jets in the Baltic Sea [25] cause the wind profiles in some cases to invert at lower heights where the profiles are being fit, which is not accounted for in the logarithmic wind profile. Thus, filtering for only near-neutral atmospheric stratification reduces errors from 3.0 m s\\({}^{-1}\\) to a level similar to what was found for the FINO2 comparison; however, it results in a much smaller dataset size (N = 23) that may be too small to deem credible or even to take into consideration when comparing to the other measured quantities. The amount of scatter, represented in Figure 6, between ASCAT and FINO2 is smaller than what was found between ASCAT and the lidar measurements, although the former comparison extends over 4 years and has many more collocated pairs. This is mainly attributed to the lower measurement heights of FINO2, i.e., lower portion of the wind profile consistently representing the surface layer.
## 5 Conclusions
This study presents intercomparisons of the NEWA ferry lidar campaign in the southern Baltic Sea, from February to June 2017, with wind retrievals from ASCAT, in situ measurements from the FINO2 mast, and simulated winds using the WRF mesoscale model. Furthermore, intercomparisons between FINO2, WRF, and ASCAT were performed from 2013 to 2017. Challenges in intercomparing with the moving lidar measurements were associated with collocation practices, e.g., comparing spatially distant measurements, spatially averaged measurements, or measurements at varying heights. This resulted in varying RMSE values for the wind speed between the ferry lidar measurements and other sources, with an overall lowest RMSE of 1.29 m s\\({}^{-1}\\) between ASCAT and the ferry lidar at a height of 10 m.
FINO2, as the only true in situ source of wind measurements, could be considered the reference for the comparisons, yet its significant distance from the ferry lidar track was assumed to be the cause of the larger-than-expected errors in wind speed around 1.70 m s\\({}^{-1}\\). Nonetheless, the lowest error in wind directions was found for the FINO2 comparisons with the ferry lidar. The highest errors for the WRF-simulated winds were found when compared to the FINO2 measurements, consistent with the previous studies in this area.
Overall, the ferry lidar is a valuable tool using lidar technology to cover wind-energy-relevant scales (temporal and spatial). It is, however, difficult to validate and has yet to receive recommended practices. Further study at top speeds is recommended, using a concurrence of radiosondes, stabilized lidar devices, or satellite winds extrapolated with machine learning.
D.H. prepared the original draft, as well as acquired, developed, and performed the data analysis and produced the results. C.B.H. and I.K. contributed in numerous discussions, provided suggestions, and supported the interpretation of the results. All authors reviewed and edited the manuscript until it reached the final stage. All authors have read and agreed to the published version of the manuscript.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement number 860879.
Not applicable.
Not applicable.
Not applicable.
The New European Wind Atlas is published at [https://map.neweuropeanwindatlas.eu/](https://map.neweuropeanwindatlas.eu/) (last access: 15 February 2021; NEWA, 2021). The OSTIA dataset can be obtained from [http://marine.copernicus.eu/](http://marine.copernicus.eu/) (last access: 25 February 2022; Copernicus marine service, 2022). The ASCAT data was taken from [https://marine.copernicus.eu](https://marine.copernicus.eu) (last access: 14 March 2022) The FINO2 data can be obtained from [http://fino.bsh.de](http://fino.bsh.de) (last access: 12 November 2021, FINO2, 2021).
We acknowledge the NEWA consortium for providing access to the New European Wind Atlas. We acknowledge Fraunhofer IWES and Julia Gottschall for providing the ferry lidar data set.
The authors declare no conflicts of interest.
## Appendix A Stability Correction through Bulk Richardson Number
The following derivation is adapted from [38; 48; 49].
Assuming a polytropic atmosphere, the air temperature gradient, \\(\\gamma\\) is \\[\\gamma=\\frac{T_{SST}-T_{h}}{z_{h}}\\]
where \\(h\\) is the height used, in the case of the FINO2, this is \\(h=30\\) m, and temperatures are in Kelvin.
The air pressure at sea level is estimated to be
\\[p_{0}=p_{h}\\bigg{(}\\frac{T_{SST}-\\gamma z_{h}}{T_{SST}}\\bigg{)}^{\\frac{-g}{ \\gamma R_{d}}}\\]
where the acceleration due to gravity is \\(g=9.81\\,\\mathrm{ms}^{-2}\\) and the specific gas constant of water vapor is \\(R_{d}=287\\,\\mathrm{JK}^{-1}\\mathrm{kg}^{-1}\\). The saturation vapor pressure is dependent on the temperature through the Magnus equation:
\\[e_{\\mathrm{s}}(T)=100\\cdot 6.1\\cdot 10^{\\frac{7.48(T_{h}-273.15)}{1_{h}-38.15 }}\\]
Here, the partial pressure of water vapor in the air is dependent on the relative humidity, RH:
\\[e=\\mathrm{RH}\\cdot e_{\\mathrm{s}}/100\\]
where the mixing ratio is
\\[r_{\\mathrm{\\tiny{p}}}=\\frac{R_{d}}{R_{\\mathrm{\\tiny{p}}}}\\cdot\\bigg{(}\\frac{e }{p_{h}-e}\\bigg{)}\\]
where the specific gas constant of water vapor is \\(R_{\\mathrm{\\tiny{p}}}=461\\,\\mathrm{JK}^{-1}\\mathrm{kg}^{-1}\\). The potential temperature, \\(\\Theta\\), is calculated to remove the temperature variation caused by changes in pressure as
\\[\\Theta=T_{h}\\bigg{(}\\frac{p_{0}}{p_{h}}\\bigg{)}^{\\kappa_{\\mathrm{\\tiny{p}}}}\\]
with \\(\\kappa_{\\mathrm{\\tiny{p}}}=0.286\\) being the Poisson constant assuming dry air. We can write the specific humidity as
\\[q=\\frac{r_{\\mathrm{\\tiny{p}}}}{1+r_{\\mathrm{\\tiny{p}}}}\\]
Now, the virtual potential temperature is calculated to account for density in buoyancy calculations and in turbulence transport which includes vertical air movement.
\\[\\Theta_{\\mathrm{\\tiny{p}}}=\\Theta\\cdot(1.0+0.61q)\\]
Finally, the bulk Richardson number can be calculated as
\\[\\mathrm{Ri_{b}}=\\frac{g\\Delta\\Theta_{\\mathrm{\\tiny{p}}}z_{h}}{\\Theta_{\\mathrm{ \\tiny{p}}}u_{h}^{2}}\\] (A1)
The bulk Richardson number is a dimensionless ratio of the consumption of turbulence and the production by wind shear of turbulence. It is then further used to show the dynamic stability through estimating the dimensionless stability parameter:
\\[\\zeta=\\begin{cases}\\frac{10R_{i_{b}}}{1-5R_{i_{b}}},&R_{i_{b}}>0\\\\ 10R_{i_{b}},&R_{i_{b}}\\leq 0\\end{cases}\\] (A2)
In unstable stratifications (\\(\\zeta<0\\)), the stability correction in M-O theory can then be estimated by
\\[\\Psi_{m}=2\\ln\\bigg{(}\\frac{1+x}{2}\\bigg{)}+\\ln\\bigg{(}\\frac{1+x^{2}}{2}\\bigg{)} -2\\mathrm{arctan}(x)+\\frac{\\pi}{2}\\] (A3)with \\(x=(1-b\\,z/L_{*})^{1/4}\\) where \\(z/L_{*}=\\zeta\\) (from [39; 40]).
Stable stratifications are broken down into two cases:
\\[\\Psi=\\begin{cases}-C\\zeta,&0\\leq\\zeta\\leq 0.5\\\\ A\\zeta+B(\\zeta-(C/D))\\exp(-D\\zeta)+B(C/D),&0.5\\leq\\zeta\\leq 7.0\\end{cases} \\tag{10}\\]
where \\(A=1\\), \\(B=2/3\\), \\(C=5\\), and \\(D=0.35\\) (from [50; 51; 52]; summarized in [38]).
## References
* (1) European Commission. _An EU Strategy to Harness the Potential of Offshore Renewable Energy for a Climate Neutral Future_; European Commission: Brussels, Belgium, 2020.
* (2) MacAskill, A.; Mitchell, P. Offshore wind--An overview. _WIREs Energy Environ._**2013**, \\(2\\), 374-383. [CrossRef]
* (3) Skaare, B.; Nielsen, F.G.; Hanson, T.D.; Yttervik, R.; Havmoller, O.; Rekdal, A. Analysis of measurements and simulations from the Hywind Demo floating wind turbine. _Wind Energy_**2015**, _18_, 1105-1122. [CrossRef]
* (4) Leiding, T.; Tinz, B.; Gates, L.; Rosenhagen, G.; Herklotz, K.; Senet, C.; Outzen, O.; Lindenthal, A.; Neumann, T.; Fruhman, R.; et al. _Standardistarius und Vergleichte Analyse der Meteorologischen FINO-Messdaten (FINO123)_; Technical Report; Deutscher Wetterdienst: Offenbach, Germany, 2016.
* (5) Clifton, A.; Boquet, M.; Burin Des Roziers, E.; Westerhellweg, A.; Hofsass, M.; Klaas, T.; Vogstad, K.; Clive, P.; Harris, M.; Wylie, S.; et al. _Remote Sensing of Complex Flows by Doppler Wind Lidar: Issues and Preliminary Recommendations_; National Renewable Energy Lab. (NREL): Golden, CO, USA, 2015. [CrossRef]
* (6) Gottschall, J.; Gribben, B.; Stein, D.; Wurth, I. Floating lidar as an advanced offshore wind speed measurement technique: Current technology status and gap analysis in regard to full maturity. _WIREs Energy Environ._**2017**, \\(6\\), e250. [CrossRef]
* (7) Smith, M. An insight into lidars for offshore wind measurements. In _Deepwind 2012_; Natural Power: Trondheim, Norway, 2012; p. 35.
* (8) Bischoff, O.; Wurth, I.; Gottschall, J.; Gribben, B.; Hughes, J.; Stein, D.; Verhoef, H. _IEA Wind Annex 32 Work Package 1.5 Expert Group Report on Recommended Practices: 18. Floating Lidar Systems_; Issue 1.0; University of Stuttgart: Stuttgart, Germany, 2017.
* (9) Wolken-Mohlmann, G.; Gottschall, J.; Lange, B. First Verification Test and Wake Measurement Results Using a SHIP-ILDAR System. _Energy Procedia_**2014**, _53_, 146-155. [CrossRef]
* (10) Edson, J.B.; Hinton, A.A.; Prada, K.E.; Hare, J.E.; Fairall, C.W. Direct Covariance Flux Estimates from Mobile Platforms at Sea. _J. Atmos. Ocean. Technol._**1998**, _15_, 547-562. [CrossRef]
* (11) Strobach, E.J.; Sparling, L.C. The Impact of Coastal Terrain on Offshore Wind and Implications for Wind Energy. Ph.D. Thesis, University of Maryland, Baltimore County, MD, USA, 2017.
* (12) Gottschall, J.; Catalano, E.; Dorenkamper, M.; Witha, B. The NEWA Ferry Lidar Experiment: Measuring mesoscalewinds in the Southern Baltic Sea. _Remote Sens._**2018**, _10_, 1620. [CrossRef]
* (13) Zentek, R.; Kohonemann, S.H.E.; Heinemann, G. Analysis of the performance of a ship-borne scanning wind lidar in the Arctic and Antarctic. _Atmos. Mes. Tech._**2018**, _11_, 5781-5795. [CrossRef]
* (14) Witha, B.; Hahnmann, A.; Sile, T.; Dorenkamper, M.; Ezer, Y.; Garcia-Bustamante, E.; Gonzalez-Rouco, J.F.; Leroy, G.; Navarro, J. _WRF Model Sensitivity Studies and Specifications for the NEWA Message Wind Atlas Production Runs_; Zenodo: Geneva, Switzerland, 2019. [CrossRef]
* (15) Skamarock, W.C.; Klemp, J.B. A time-split nonhydrostatic atmospheric model for weather research and forecasting applications. _J. Comput. Phys._**2008**, _227_, 3465-3485. [CrossRef]
* (16) Gonzalez-Rouco, J.; Bustamante, E.; Hahnmann, A.; Karagili, I.; Navarro, J.; Olsen, B.; Sile, T.; Witha, B. _NEWA Report on Uncertainty Quantification Deliverable D.4.4_; NEWA--New European Wind Atlas; Zenodo: Geneva, Switzerland, 2019. [CrossRef]
* (17) Belmonte Rivas, M.; Stoffelen, A. Characterizing ERA-Interim and ERA5 surface wind biases using ASCAT. _Ocean Sci._**2019**, _15_, 831-852. [CrossRef]
* (18) Karagali, I.; Badger, M.; Hasager, C. Spaceborne Earth Observation for Offshore Wind Energy Applications. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11-16 July 2021; pp. 172-175. [CrossRef]
* (19) Badger, M.; Pena, A.; Hahnmann, A.N.; Mouche, A.A.; Hasager, C.B. Extrapolating satellite winds to turbine operating heights. _J. Appl. Meteorol. Climatol._**2016**, _55_, 975-991. [CrossRef]
* (20) Karagali, I.; Hahnmann, A.N.; Badger, M.; Hasager, C.; Mann, J. Offshore new European wind atlas. _J. Phys. Conf. Ser._**2018**, _1037_, 052007. [CrossRef]
* (21) Hasager, C.B.; Hahnmann, A.N.; Ahsbahs, T.; Karagali, I.; Sile, T.; Badger, M.; Mann, J. Europe's offshore winds assessed with synthetic aperture radar, ASCAT and WRF. _Wind Energy Sci._**2020**, \\(5\\), 375-390. [CrossRef]
* (22) Kelly, M.; Gryning, S.E. Long-Term Mean Wind Profiles Based on Similarity Theory. _Bound.-Layer Meteorol._**2010**, _136_, 377-390. [CrossRef]
* (23) Sempreviva, A.; Barthelmie, R.; Pryor, S. Review of Methodologies for Offshore Wind Resource Assessment in European Seas. _Surv. Geophys._**2008**, _29_, 471-497. [CrossRef]* _Smedman et al. (1997)_ Smedman, A.S.; Bergstrom, H.; Grisogono, B. Evolution of stable internal boundary layers over a cold sea. _J. Geophys. Res. Ocean._**1997**, _102_, 1091-1099. [CrossRef]
* _Dorenkamper et al. (2015)_ Dorenkamper, M.; Optis, M.; Monahan, A.; Steinfeld, G. On the Offshore Advection of Boundary-Layer Structures and the Influence on Offshore Wind Conditions. _Bound.-Layer Meteorol._**2015**, _155_, 459-482. [CrossRef]
* _Svensson (2018)_ Svensson, N. Mesoscale Processes over the Baltic Sea. Ph.D. Thesis, Uppsala Universitet, Uppsala, Sweden, 2018.
* _Hasager et al. (2011)_ Hassager, C.B.; Badger, M.; Pena, A.; Larsen, X.G.; Bingol, F. SAR-Based Wind Resource Statistics in the Baltic Sea. _Remote Sens._**2011**, \\(3\\), 117-144. [CrossRef]
* _Sluzenikina and Mznnik (2011)_ Sluzenikina, J.; Mznnik, A. A comparison of ASCAT wind measurements and the HIRLAM model over the Baltic Sea. _Oceanologia_**2011**, _53_, 229-244. doi: 10.5697/oc.53-1-TI.229. [CrossRef]
* Catalano (2017) Catalano, E. Assessment Of Offshore Wind Resources through Measurements from a Ship-Based LiDAR System. Master's Thesis, University of Genoa, Genoa, Italy, 2017.
* _Worsfold et al. (2021)_ Worsfold, M.; Good, S.; Martin, M.; Mclaren, A.; Fiedler, E. _Global Ocean OSTIA Sea Surface Temperature Reprocessing, SST-GLO-SST-L4-REP-OBSERVATIONS-010-011;_ Technical Report 1.3; Copernicus Marine Service: Southampton, UK, 2021.
* Verhoef and Stoffelen (2019) Verhoef, A.; Stoffelen, A. _EUMETSAT Advanced Retransmission Service ASCAT Wind Product User Manual;_ Technical Report October; EUMETSAT: Darmstadt, Germany, 2019.
* Stoffelen (1996) Stoffelen, A. _Error Modelling of Scattrenometer In-Situ, and ECMWF Model Winds; A Calibration Refinement;_ Technical Report; Konitklijk Nederlands Meteorologist Institut (KNMI): De Bilt, The Netherlands, 1996.
* Martin (2014) Martin, S. _An Introduction to Ocean Remote Sensing_, 2nd ed.; Cambridge University Press: Cambridge, UK, 2014.
* Chelton et al. (2001) Chelton, D.B.; Ries, J.C.; Haines, B.J.; Fu, L.L.; Callahan, P.S. Chapter 1 Satellite Altimetry. _Int. Geophys._**2001**, _69_, 1-183. [CrossRef]
* Stoffelen et al. (2017) Stoffelen, A.; Verspeek, J.A.; Vogelzang, J.; Verhoef, A. The CMDD7 Geophysical Model Function for ASCAT and ERS Wind Retrievals. _IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens._**2017**, _10_, 2123-2134. [CrossRef]
* Verhoef and Stoffelen (2019) Verhoef, A.; Stoffelen, A. _Algorithm Theoretical Basis Document for the OSI SAF Wind Products_; Technical Report; EUMETSAT: Darmstadt, Germany 2019.
* Grachev and Fairall (1997) Grachev, A.A.; Fairall, C.W. Dependence of the Monin-Obukhov Stability Parameter on the Bulk Richardson Number over the Ocean. _J. Appl. Meteorol._**1997**, _36_, 406-414. [CrossRef]
* Emeis (2018) Emeis, S. _Wind Energy Meteorology: Atmospheric Physics for Wind Power Generation_, 2nd ed.; Green Energy and Technology; Springer: Berlin/Heidelberg, Germany, 2018; p. 276.
* Paulson (1970) Paulson, C.A. The mathematical representation of wind speed and temperature profiles in the unstable atmospheric surface layer. _J. Appl. Meteorol. Climatol._**1970**, \\(9\\), 857-861. [CrossRef]
* Hogstrom (1988) Hogstrom, U. Non-dimensional wind and temperature profiles in the atmospheric surface layer: A re-evaluation. In _Topics in Micrometeorology. A Festschrift for Arch Dyer_; Springer: Berlin/Heidelberg, Germany, 1988; pp. 55-78.
* Karagali et al. (2014) Karagali, I.; Pena, A.; Badger, M.; Hassager, C.B. Wind characteristics in the North and Baltic Seas from the QuikSCAT satellite. _Wind Energy_**2014**, _17_, 123-140. [CrossRef]
* Sorbjan and Grachev (2010) Sorbjan, Z.; Grachev, A.A. An evaluation of the flux-gradient relationship in the stable boundary layer. _Bound.-Layer Meteorol._**2010**, _135_, 385-405. [CrossRef]
* Pena et al. (2009) Pena, A.; Hahmann, A.; Hassager, C.; Bingol, F.; Karagali, I.; Badger, J. _South Baltic Wind Atlas: South Baltic Offshore Wind Energy Regions Project_; Technical Report; Rise-Report: Roskilde, Denmark, 2011.
* Baas et al. (2009) Baas, P.; Bosveld, F.; Klein, B.H.; Holtslag, B. A Climatology of Nocturnal Low-Level Jets at Cabauw. _J. Appl. Meteorol. Climatol._**2009**, _48_, 1627-1642. [CrossRef]
* Optis et al. (2021) Optis, M.; Bodini, N.; Debnath, M.; Doubrawa, P. New methods to improve the vertical extrapolation of near-surface offshore wind speeds. _Wind Energy Sci._**2021**, \\(6\\), 935-948. [CrossRef]
* Optis et al. (2014) Optis, M.; Monahan, A.; Bosveld, F.C. Moving Beyond Monin-Obukhov Similarity Theory in Modelling Wind-Speed Profiles in the Lower Atmospheric Boundary Layer under Stable Stratification. _Bound.-Layer Meteorol._**2014**, _153_, 497-514. [CrossRef]
* Optis et al. (2016) Optis, M.; Monahan, A. The Extrapolation of Near-Surface Wind Speeds under Stable Stratification Using an Equilibrium-Based Single-Column Model Approach. _J. Appl. Meteorol. Climatol._**2016**, _55_, 923-943. [CrossRef]
* Schneemann et al. (2020) Schneemann, J.; Rott, A.; Dorenkamper, M.; Steinfeld, G.; Kuhn, M. Cluster wakes impact on a far-distant offshore wind farm's power. _Wind Energy Sci._**2020**, \\(5\\), 29-49. [CrossRef]
* Stull (1988) Stull, R. _An Introduction to Boundary Layer Meteorology_, 1st ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988; p. 670.
* Businger et al. (1971) Businger, J.A.; Wyngaard, J.C.; Izumi, Y.; Bradley, E.F. Flux-Profile Relationships in the Atmospheric Surface Layer. _J. Atmos. Sci._**1971**, _28_, 181-189. [CrossRef]
* Dyer (1974) Dyer, A. A review of flux-profile relationships. _Bound.-Layer Meteorol._**1974**, \\(7\\), 363-372. [CrossRef]
* Holtslag and Bruin (1988) Holtslag, A.A.M.; Bruin, H.A.R.D. Applied Modeling of the Nighttime Surface Energy Balance over Land. _J. Appl. Meteorol. Climatol._**1988**, _27_, 689-704. [CrossRef] | This article highlights the inter-comparisons of the wind measurement techniques available in deep water areas working towards combining them to obtain optimal estimates of the wind power potential. More specifically, this article presents comparisons of the Ferry Lidar Experiment wind data with those of the Advanced Scatterometer (ASCAT), the FINO2 meteorological mast, and the New European Wind Atlas (NEWA) simulations performed using the Weather Research, and Forecasting (WRF) mesoscale model. To be comparable to ASCAT surface winds, which are referenced at 10 m, the ferry lidar and FINO2 wind profile measurements were extrapolated down to 10 m using atmospheric stability information derived from the bulk Richardson number formulation. ASCAT had the lowest associated error compared with that of the ferry lidar in near-neutral atmospheric stratifications, whereas FINO2, despite a distance range of 30 km and a moving ferry lidar target, had the highest correlation and lowest RMSE in all atmospheric conditions. Due to the high frequency of low-level jets caused by the proximity to land from all directions as well as typically stable atmospheric conditions, the extrapolated ferry lidar measurements underpredicted the ASCAT 10 m wind speeds. WRF consistently underperformed compared to the other measurement methods, even with the ability to directly compare results with all other sources at all heights.
offshore wind energy; ASCAT; satellite winds; wind lidar; ferry lidar; WRF; FINO2; NEWAR 2022141427. [https://doi.org/10.3390/ny14061427](https://doi.org/10.3390/ny14061427) | Summarize the following text. | 338 |
Guangming Cao\\({}^{1}\\)1, Xuehui Yu\\({}^{1}\\)1, Wenwen Yu\\({}^{1}\\), Xumeng Han\\({}^{1}\\), Xue Yang\\({}^{2}\\)
**Guorong Li\\({}^{1}\\)**, **Jianbin Jiao\\({}^{1}\\)**, **Zhenjun Han\\({}^{1}\\)2 \\({}^{1}\\)3 \\({}^{1}\\)**University of Chinese Academy of Sciences \\({}^{2}\\)Shanghai AI Laboratory
[email protected]
Equal contribution.Corresponding author.
Footnote 1: footnotemark:
Footnote 2: footnotemark:
## 1 Introduction
Aerial object detection focuses on identifying objects of interest, such as vehicles and airplanes, on the ground within aerial images and determining their categories. With the increasing availability of aerial imagery, this field has become a specific yet highly active area within computer vision [1; 2; 3; 4; 5]. Nevertheless, obtaining high-quality bounding box annotations demands significant human resources. Weakly supervised object detection [6; 7; 8; 9; 10; 11; 12; 13] has emerged as a solution, replacing bounding box annotations with more affordable image-level annotations.
However, due to the absence of precise location information and challenges in distinguishing densely packed objects, image-level supervised methods [14] exhibit limited performance in complex scenarios. Recently, the field of weakly supervised remote sensing has aspired to achieve a balance between performance and annotation cost by introducing slightly more annotation information, such as horizontal bounding box-based [15; 16] and single point-based [17; 18] oriented object detection.
Horizontal bounding box-based methods have made significant progress, while point-supervised methods still exhibit a substantial performance gap compared to rotated bounding box supervision.
SAM [19] is currently a strong visual foundation model with robust zero-shot capabilities, and it supports points as prompt inputs. A natural idea is thus to feed the annotated points into SAM to generate mask proposals, which are then converted into rotated bounding box annotations. This approach introduces a new challenge: selecting the most appropriate mask among the many proposals generated by SAM. Although SAM assigns an initial score to each mask upon generation, our experiments indicate that this score frequently fails to accurately reflect mask quality due to the lack of intra-class homogeneity, as demonstrated in Fig. 1. Consequently, this paper introduces the P2RBox network, which utilizes the semantic and spatial information of the annotation points to develop a novel criterion for assessing mask quality.
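The following minimal sketch illustrates this naive baseline, assuming the official `segment_anything` package; the checkpoint filename is a placeholder, and `point_to_rbox_baseline` is an illustrative helper of ours, not part of any released code.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; the SAM weights are released by Meta AI.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def point_to_rbox_baseline(image, point_xy):
    """One annotated point -> SAM mask proposals -> rotated box.

    Keeps the mask that SAM itself scores highest -- the very criterion
    that often fails on aerial objects, motivating P2RBox.
    """
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(
        point_coords=np.array([point_xy], dtype=np.float32),
        point_labels=np.array([1]),  # 1 marks a foreground point
        multimask_output=True,       # SAM returns several proposals
    )
    best = masks[np.argmax(scores)]  # select by SAM's own confidence
    ys, xs = np.nonzero(best)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    return cv2.minAreaRect(pts)      # ((cx, cy), (w, h), angle)
```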
In the P2RBox framework, we design a boundary-aware mask-guided MIL module and a centrality-guided module, through which the network learns semantic and positional information, respectively. These modules also allow mask quality to be evaluated from the learned information. Additionally, to address the orientation ambiguity inherent in remote sensing datasets, particularly the mismatch between the symmetry axis of common targets such as airplanes and the orientation of their minimum bounding rectangle, P2RBox embeds an angle prediction branch in the centrality-guided module. The predicted angle is used to convert the generated mask proposals into oriented bounding rectangles. Masks selected under this new criterion improve significantly in quality, and high-quality pseudo annotations are generated as a result. Our main contributions are as follows:
**1)** Proposing the P2RBox Network: We introduce the P2RBox network, the first method based on point annotations and SAM for achieving point-supervised rotated object detection. Combined with Oriented R-CNN, P2RBox achieves 62.43% on the DOTA-v1.0 test dataset, narrowing the gap between point-supervised and fully-supervised methods.
**2)** High-Quality Mask Selection: By employing the boundary-aware mask-guided MIL module, we introduce a semantic score for the masks. This score, combined with the spatial offset score from the centrality-guided module, forms a comprehensive filtering approach. This approach enhances the quality of the selected mask proposals from the mask generator, ultimately improving the quality of the rotated bounding box annotations generated through the predicted angle.
Figure 1: Visual comparison of the highest-confidence mask and its corresponding rotated box annotation generated by the mask proposal generator (SAM) and by P2RBox. The second row displays the results of the baseline method, while the rotated boxes in the last row are generated by the Symmetry Axis Estimation (SAE) module. (Best viewed in color).
## 2 Related work
**Weakly-supervised oriented object detection.** In the field of remote sensing, detectors utilizing rotated bounding box annotations have demonstrated robust performance and widespread applications [2; 3; 4; 5; 20; 21; 22]. Concurrently, the use of weakly supervised annotations as a cost-effective method for reducing labeling expenses is gaining increasing attention. OAOD [23] is proposed for weakly-supervised oriented object detection, but it in fact uses an HBox along with an object angle as annotation, which is only "slightly weaker" than RBox supervision. H2RBox [15] directly detects RBoxes using HBox annotations, leveraging geometric constraints to enhance detection efficiency. H2RBox-v2 [24] further introduces a reverse angle sub-supervision module, which significantly improves performance and is comparable to fully supervised detectors. PointOBB [18] is the first method using single-point supervision for oriented object detection, employing self-supervised learning for object scale and angle with innovative loss strategies. Point2RBox [17] introduces a lightweight, end-to-end model that uses synthetic patterns and self-supervision to predict rotated bounding boxes from point annotations efficiently.
**Semi-supervised oriented object detection.** Semi-supervised learning sits between supervised and unsupervised learning, utilizing a small amount of labeled data in conjunction with a large volume of unlabeled data for training [25; 26; 27; 28]. SOOD [26] designed a semi-supervised oriented object detection method for remote sensing images. This method is based on a pseudo-label framework and introduces two novel loss functions: Rotation Aware Weighted Adaptation (RAW) loss and Global Consistency (GC) loss, to optimize the detection of oriented targets. Zhou et al. [27] proposed an end-to-end semi-supervised SAR ship detection framework with Interference Consistency Learning (ICL) and Pseudolabel Calibration Network (PLC) to improve accuracy and robustness against complex backgrounds and interferences in SAR imagery.
**Enhancements and Applications of SAM.** In the rapidly evolving field of computer vision, the Segment Anything Model (SAM) [19] has emerged as a versatile tool, capable of generating object masks from simple user inputs [29; 30; 31; 32; 33], such as clicks. SAPNet [34] integrates SAM with point prompts to tackle the challenge of semantic ambiguity, utilizing Multiple Instance Learning (MIL) and Point Distance Guidance (PDG) to refine SAM-generated masks, ensuring high-quality, category-specific segmentation. UV-SAM [35] aims to identify urban village boundaries in cities by adapting SAM; it feeds hybrid cues generated by a semantic segmentation model into SAM to achieve fine-grained boundary recognition. ASAM [36] enhances SAM through adversarial tuning, using natural adversarial examples to improve segmentation tasks significantly. This method requires no extra data or changes to the architecture, setting new performance benchmarks. Dual-SAM [37] enhances SAM for marine animal segmentation by introducing a dual structure to improve feature learning. It incorporates a Multi-level Coupled Prompt (MCP) strategy for better underwater prior information and a Criss-Cross Connectivity Prediction (C3P) method to enhance inter-pixel connectivity.
Although this paper uses point annotations to generate oriented bounding boxes, unlike traditional networks, it employs SAM to generate high-quality candidate masks. Therefore, this paper not only utilizes point annotations but also extracts valuable information from them to optimize the selection of candidate masks.
## 3 Method
### Framework
The generative network structure allows for the conversion of annotations and the replacement of the strongly supervised detector to obtain different models with varying focuses (e.g., performance or detection speed). This maintains a degree of flexibility while achieving good performance. As shown in Fig. 2, this work designs the P2RBox network, which inputs annotation points into the SAM model. By establishing an evaluation mechanism, the filtered masks are retained and converted into rotated box annotations. This process is used to train a strongly supervised rotated object detector, accomplishing the transition from point annotations to rotated object detection.
The evaluation mechanism established in this paper is the key focus of our work. We designed two modules to separately mine the feature information of annotation points and RoIs, and developed evaluation criteria. The first module is the boundary-aware mask-guided MIL module. In the neighborhood of the annotation points, it constructs positive point bags, negative point bags, and negative points to guide the model in learning the class information of the points on the feature map. The second module is the centrality guided module. Based on the regions of interest corresponding to the candidate masks, it extracts features and uses them to predict the spatial offsets of the masks.
In the inference part of P2RBox, the outputs of these two modules are used to build a comprehensive mask evaluation metric. In addition, to convert masks into rotated boxes, this paper designs an angle prediction branch based on RoI features to guide the generation of oriented bounding rectangles from the masks. During training, only the class information is known and can be directly used to calculate the loss; the other two branches, spatial offset and angle prediction, are both supervised with values calculated by the algorithms designed in this paper. During inference, the trained network predictions are used.
### Boundary-Aware Mask-Guided MIL Module
The information contained in point annotations is merely the class of a single point. After generating candidate masks using the SAM model with the annotation point as prompt, different methods for generating point bags based on these candidate masks are designed in this paper. Given a candidate mask, a positive point set corresponding to it is constructed by randomly sampling pixel points within the mask. This sampling not only utilizes the information from the mask but also, with a sufficient number of samples, can represent this type of mask.
\[\mathcal{B}_{pos}=\big\{p_{i}\mid p_{i}\in mask,\;p_{i}\ \text{is randomly selected}\big\}, \tag{1}\]
Figure 2: The overview of the training process of P2RBox, consisting of the boundary-aware mask-guided MIL module and the centrality-guided module. Initially, mask proposals are generated by a generator (SAM). Four point sets are constructed according to the highest-initial-score mask to train the boundary-aware mask-guided MIL module, which pursues dataset-wide category consistency. The centrality-guided module generates an offset penalty to enhance alignment between points and masks. The trained network is then used to assess mask quality, selecting the best proposals to supervise detector training. (Best viewed in color)

Background points located far from the object are easier for the network to learn. Conversely, negative points along the object's boundary are more valuable samples, as they often reside on the critical boundary between foreground and background. Hence, a negative point bag is designed to train the model to discern boundary features. By selecting several points among the margin pixels of the mask, the negative point bag can be defined as:
\[\mathcal{B}_{neg}=\big\{p_{i}\mid p_{i}\in mask_{margin},\;p_{i}\ \text{is randomly selected}\big\}, \tag{2}\]
With the annotated point \(a\) as a naturally positive reference, we introduce negative points to instruct the model in discerning background features. To obtain the negative points for a given mask, we follow a three-step process. Firstly, determine the circumscribed bounding box of the mask, denoted as \((x,\,y,\,w,\,h,\,\alpha)\). Secondly, enlarge both the height \(h\) and width \(w\) as \(\hat{h}=(1+\delta)\cdot h\) and \(\hat{w}=(1+\delta)\cdot w\), where \(\delta\) is set to \(0.05\) in this paper. Lastly, the set of negative points, denoted as \(\mathcal{N}\), comprises the four vertices along with the midpoints of the four edges, _i.e._,
\[\begin{split} n_{ij}&=\Big(x+\frac{\hat{w}}{2}\cos\alpha\cdot i-\frac{\hat{h}}{2}\sin\alpha\cdot j,\;\;y+\frac{\hat{w}}{2}\sin\alpha\cdot i+\frac{\hat{h}}{2}\cos\alpha\cdot j\Big),\\ \mathcal{N}&=\big\{n_{ij}\mid i,j\in\{-1,0,1\},\,(i,j)\neq(0,0)\big\}.\end{split} \tag{3}\]
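For concreteness, the construction of \(\mathcal{N}\) in Eq. (3) can be sketched in a few lines of Python; the function name and the NumPy usage are ours, not taken from the authors' code.

```python
import numpy as np

def negative_points(x, y, w, h, alpha, delta=0.05):
    """Eq. (3): the eight negative points of a mask, given its
    circumscribed oriented box (x, y, w, h, alpha), after enlarging
    the box by the factor (1 + delta)."""
    w_hat, h_hat = (1 + delta) * w, (1 + delta) * h
    pts = []
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            if (i, j) == (0, 0):
                continue  # the box centre is excluded
            px = x + w_hat / 2 * np.cos(alpha) * i - h_hat / 2 * np.sin(alpha) * j
            py = y + w_hat / 2 * np.sin(alpha) * i + h_hat / 2 * np.cos(alpha) * j
            pts.append((px, py))
    return pts  # four vertices plus four edge midpoints
```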
To facilitate P2RBox in determining whether the points in \\(\\mathcal{B}\\) belong to the same category as annotated point, we treat the points in bag \\(\\mathcal{B}\\) as positive instances. We extract the feature vectors \\(\\{\\mathbf{F}_{p}|p\\in\\mathcal{B}\\}\\). For each \\(p\\in\\mathcal{B}\\), two separate fully connected layers with independent weights generate the classification score and instance score, denoted as \\([S_{\\mathcal{B}}^{ins}]_{p}\\) and \\([S_{\\mathcal{B}}^{cls}]_{p}\\).
To obtain the final classification score for \\(\\mathcal{B}\\), we calculate it by summing the element-wise products of \\([S_{\\mathcal{B}}^{ins}]_{p}\\) and \\([S_{\\mathcal{B}}^{cls}]_{p}\\) as follows:
\\[S_{\\mathcal{B}}=\\sum_{p\\in\\mathcal{B}}[S_{\\mathcal{B}}^{ins}]_{p}\\cdot[S_{ \\mathcal{B}}^{cls}]_{p},\\in\\mathbb{R}^{K}. \\tag{4}\\]
For annotation points or negative points, the prediction results from the classification score branch are used directly. In the inference stage, the positive point bag and the negative point bag are used together to evaluate the mask.
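A minimal PyTorch sketch of the two-branch bag score in Eq. (4) is given below. The paper specifies only the two independent fully connected layers; the softmax normalizations (over classes for the classification branch, over the bag for the instance branch) follow common MIL practice and are an assumption here.

```python
import torch
import torch.nn as nn

class BagMILHead(nn.Module):
    """Sketch of the bag score in Eq. (4) for a bag of point features."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.fc_cls = nn.Linear(feat_dim, num_classes)  # classification branch
        self.fc_ins = nn.Linear(feat_dim, num_classes)  # instance branch

    def forward(self, bag_feats):  # bag_feats: (num_points, feat_dim)
        s_cls = self.fc_cls(bag_feats).softmax(dim=-1)  # per-point class scores
        s_ins = self.fc_ins(bag_feats).softmax(dim=0)   # weights across the bag
        return (s_cls * s_ins).sum(dim=0)               # bag score S_B in R^K
```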
### Centrality-Guided Module
Generative point-based approaches like P2BNet [38] and PointOBB [18] generate anchor boxes centered on annotated points, thereby utilizing the points' positional information. Utilizing semantic information, P2RBox has demonstrated the ability to discern the quality of candidate masks. However, existing modules mainly focus on extracting semantic information and fail to fully leverage the spatial information provided by point annotations. This deficiency often leads to generated masks containing redundant areas when covering the target.
The centrality-guided module also operates on feature maps and candidate masks, as shown in Fig. 2. The specific implementation steps are as follows: 1) For the mask, calculate the coordinates of its bounding horizontal box \((x_{1},y_{1},x_{2},y_{2})\), where \((x_{1},y_{1})\) is the top-left corner and \((x_{2},y_{2})\) is the bottom-right corner. 2) Sample features from the feature map based on the RoI to obtain the corresponding feature vector. 3) Through two fully connected layers with shared weights, three different branches are designed. The first is the offset prediction branch, which outputs an \(N\times 1\) vector, where \(N\) is the number of targets in the image and each entry is the offset prediction for one target. The second is the classification branch, used to calculate the class score of the region of interest. The last is the angle prediction branch, which also outputs an \(N\times 1\) vector.
In this paper, a reference value \(gt_{offset}\) was designed as the supervision target for the network's offset prediction. Based on the coordinates of the annotation point \((x,y)\), the reference value is defined as follows:
\[gt_{offset}=1-\sqrt{\frac{\min(x-x_{1},x_{2}-x)}{\max(x-x_{1},x_{2}-x)}\cdot\frac{\min(y-y_{1},y_{2}-y)}{\max(y-y_{1},y_{2}-y)}}. \tag{5}\]
The design of \\(gt_{offset}\\) ensures that its value decreases as the annotation point approaches the center of the target; conversely, its value increases as the annotation point moves away from the target center, serving as a penalty mechanism for the predicted offset.
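Under our reading of Eq. (5), with the square root spanning the product of the two ratios, the offset target can be computed as in the following sketch; the function name is hypothetical.

```python
def gt_offset(x, y, x1, y1, x2, y2):
    """Offset target for an annotated point (x, y) inside the mask's
    horizontal box (x1, y1, x2, y2): 0 at the box centre, growing
    toward 1 near the border (an inverted FCOS-style centerness)."""
    rx = min(x - x1, x2 - x) / max(x - x1, x2 - x)
    ry = min(y - y1, y2 - y) / max(y - y1, y2 - y)
    return 1.0 - (rx * ry) ** 0.5
```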
The classification branch outputs an \\(N\\times K\\) vector (\\(K\\) represents the number of categories), where the category information is determined by the category of the labeled points. As for angle prediction, due to the lack of angle information, an algorithm is designed below to supervise the angle.
**Angle Prediction.** Rotated bounding boxes offer a more precise means of annotation and can also convey orientation. Generally, for symmetrical objects, the annotation direction of the rotated bounding box often aligns with the direction of the axis of symmetry. However, the minimum enclosing rectangle cannot guarantee this alignment, as shown in the last column of Fig. 1. Assume there is an object, with \(P\) being the set of its pixel points. Interestingly, the axis of symmetry of the object can be obtained by solving for the eigenvectors of \(P^{T}\cdot P\) after translating its center of mass to the origin. A simple proof is given here.
In accordance with this condition, if the target exhibits symmetry along an axis passing through the origin, then there exists a rotation matrix, also referred to as an orthogonal matrix \\(R\\), such that:
\\[P\\cdot E=Q\\cdot R,E=\\begin{pmatrix}1&0\\\\ 0&1\\end{pmatrix}. \\tag{6}\\]
Here, \(R\) is a \(2\times 2\) rotation matrix that aligns the axis of symmetry with the x-axis. Consequently, we can express \(Q\) as follows:
\\[Q=\\begin{pmatrix}x_{1}&x_{1}&\\dots&x_{n}&x_{n}\\\\ y_{1}&-y_{1}&\\dots&y_{n}&-y_{n}\\end{pmatrix}^{T}. \\tag{7}\\]
To find the matrix \\(R\\), we multiply both sides of the above equation by its transpose, yielding:
\\[E^{T}\\cdot P^{T}\\cdot P\\cdot E=R^{T}\\cdot Q^{T}\\cdot Q\\cdot R. \\tag{8}\\]
This further simplifies to:
\\[P^{T}\\cdot P=R^{T}\\cdot\\begin{pmatrix}2\\times\\sum_{i=1}^{n}x_{i}^{2}&0\\\\ 0&2\\times\\sum_{i=1}^{n}y_{i}^{2}\\end{pmatrix}\\cdot R. \\tag{9}\\]
According to the spectral theorem for symmetric matrices in advanced algebra (proposed by Sylvester in the 19th century): every real symmetric matrix can be orthogonally diagonalized. Eq. 9 demonstrates that \(Q^{T}\cdot Q\) and \(P^{T}\cdot P\) are similar matrices, with \(R\) serving as the similarity transformation matrix and also the eigenvector matrix of \(P^{T}\cdot P\), since \(Q^{T}\cdot Q\) is a diagonal matrix. This confirms our assertion: the eigenvectors of the matrix \(P^{T}\cdot P\) correspond to the object's symmetry direction and its perpendicular direction.
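In code, this angle-supervision procedure reduces to an eigendecomposition; the sketch below is ours, and taking the eigenvector of the largest eigenvalue as the symmetry axis is an assumption (the two eigenvectors give the axis and its perpendicular).

```python
import numpy as np

def symmetry_angle(points):
    """Estimate an object's symmetry axis from its pixel coordinates
    via the eigenvectors of P^T P. `points` is an (n, 2) array of
    (x, y) coordinates of the mask's pixels."""
    P = points - points.mean(axis=0)      # move the centre of mass to the origin
    _, eigvecs = np.linalg.eigh(P.T @ P)  # eigenpairs of a symmetric 2x2 matrix
    axis = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    return np.arctan2(axis[1], axis[0])   # orientation of the estimated axis
```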
### Total Loss and Mask Quality Assessment
**Total Loss.** The loss function of P2RBox includes classification loss, offset prediction loss, and angle prediction loss, collectively marked as \(\mathcal{L}_{CGM}\). An object-level MIL loss (\(\mathcal{L}_{MIL}\)) is introduced to endow P2RBox with the ability to find semantic points around each annotated point. By combining information from similar objects throughout the entire dataset, it imparts discriminative capabilities to the features of that category, distinguishing between foreground and background. The overall loss of P2RBox is the summation of the two:
\\[\\mathcal{L}=\\mathcal{L}_{MIL}+\\mathcal{L}_{CGM}. \\tag{10}\\]
\[\mathcal{L}_{MIL}=\mathcal{L}_{ann}+\mathcal{L}_{neg}+\mathcal{L}_{bag}^{pos}+\mathcal{L}_{bag}^{neg}, \tag{11}\]
\\[\\mathcal{L}_{CGM}=\\mathcal{L}_{box}^{cls}+\\mathcal{L}_{offset}+\\mathcal{L}_{ angle}. \\tag{12}\\]
The four terms in \\(\\mathcal{L}_{MIL}\\) correspond to the four different components designed in the boundary-aware mask-guided MIL module, and their output scores are calculated using focal loss. The three terms in \\(\\mathcal{L}_{CGM}\\) correspond to the outputs of the three branches in the centrality guided module, with the classification loss \\(\\mathcal{L}_{box}^{cls}\\) calculated using focal loss. The angle loss \\(\\mathcal{L}_{angle}\\) is calculated using smooth-l1 loss according to the supervision value in the module, while the offset loss \\(\\mathcal{L}_{offset}\\) is calculated using cross entropy loss between the prediction and the supervision value.
**Mask quality assessment.** During inference, several outputs from the above modules will be used to assess the mask. The specific calculation method is as follows.
\[Score=S_{SAM}+S_{bmm}+S_{cgm}. \tag{13}\]
\[S_{cgm}=S_{boxCls}-S_{offset}, \tag{14}\]
\\[S_{bmm}=S_{posBag}-S_{negBag}. \\tag{15}\\]
The positive instance bag is activated with the softmax function, and the value of the annotated class is taken as \(S_{posBag}\). For the negative instance bag, the sigmoid function is applied to the negative bag scores, and the highest class score is taken as the penalty term \(S_{negBag}\). Similarly, the box classification scores \(S_{boxCls}\) predicted by the centrality-guided module are activated with softmax, and the score of the annotated class minus the network's predicted offset gives \(S_{cgm}\).
During the testing phase, we straightforwardly select the mask with the highest \\(Score\\) as the object's mask, which is subsequently converted into a rotated bounding box with predicted angle.
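Putting Eqs. (13)-(15) together for a single candidate mask, the scoring can be sketched as follows; variable names are ours, and all `*_logits` tensors are assumed to be raw, pre-activation class scores.

```python
import torch

def mask_score(s_sam, pos_bag_logits, neg_bag_logits,
               box_cls_logits, offset_pred, cls_id):
    """Composite quality score of one candidate mask (Eqs. 13-15)."""
    s_pos = pos_bag_logits.softmax(dim=-1)[cls_id]   # S_posBag: annotated class
    s_neg = neg_bag_logits.sigmoid().max()           # S_negBag: penalty term
    s_bmm = s_pos - s_neg                            # Eq. (15)
    s_cgm = box_cls_logits.softmax(dim=-1)[cls_id] - offset_pred  # Eq. (14)
    return s_sam + s_bmm + s_cgm                     # Eq. (13)
```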
## 4 Experiments
### Datasets and Implement Details
**DOTA** [39]: There are 2,806 aerial images, 1,411 for training, 937 for validation, and 458 for testing, annotated using 15 categories with 188,282 instances in total. We follow the preprocessing in MMRotate [40], i.e., the high-resolution images are split into 1,024 \(\times\) 1,024 patches with an overlap of 200 pixels for training, and the detection results of all patches are merged to evaluate the performance. We use the training and validation sets for training and the test set for testing. The detection performance is obtained by submitting testing results to DOTA's evaluation server.
**DIOR-R** [41]: DIOR-RBox, derived from the original aerial image dataset DIOR [42], has been reannotated with RBoxes instead of the initial HBox annotations. With a total of 23,463 images and 190,288 instances spanning 20 classes, DIOR-RBox showcases extensive intra-class diversity and a wide range of object sizes.
In the experimental section and ablation studies of this paper, mAP\({}_{50}\) and mIoU are utilized as the primary evaluation metrics.
**Single Point Annotation.** Since there are no dedicated point annotations on the DOTA and DIOR-R datasets, this paper directly uses the center points of the bounding boxes annotated with rotation angles as the point annotations for dataset utilization. However, the method proposed in this paper does not require such strict adherence to the exact positions of the annotated points.
**Training Details.** P2RBox predicts oriented bounding boxes from single point annotations and uses the predicted boxes to train three classic oriented detectors (RetinaNet, FCOS, Oriented R-CNN) with standard settings. All models, including the fully-supervised ones, are trained on a single GeForce RTX 2080Ti GPU. Our model is trained with SGD [43]; the initial learning rate is \(2.5\times 10^{-3}\) with momentum 0.9 and weight decay 0.0001, and the learning rate warms up for 500 iterations. Training P2RBox typically takes around 9 hours. As a baseline, we designed a parameter-free rotated box annotation generator based on SAM, which directly retains the mask with the highest initial score and computes its minimum bounding rectangle to obtain the rotated bounding box.
### Main Result
As shown in Tab. 1, our model performs strongly across most categories. In point-supervised detection, we compare against the parameter-free SAM baseline described above. Comparing the results of pseudo-label training on three strongly supervised detectors, the P2RBox model outperforms this baseline in every single category with any detector (56.66% vs. 47.91% on RetinaNet, 59.04% vs. 50.84% on FCOS, 62.43% vs. 52.75% on Oriented R-CNN). The performance of the P2RBox network is significantly higher than previous point-supervised methods (PointOBB 33.31%, Point2RBox 40.27%), but a considerable gap to fully supervised methods remains. Examples of detection results on the DOTA dataset using P2RBox (Oriented R-CNN) are shown in Fig. 3. The conclusions drawn from the DIOR-R dataset are similar to those from DOTA. P2RBox outperforms the parameter-free SAM baseline (41.21% vs. 30.47% on RetinaNet, 45.32% vs. 34.49% on FCOS, 47.29% vs. 35.75% on Oriented R-CNN). Under the same detector training conditions, P2RBox confirms the higher quality of its generated annotations compared to PointOBB (45.32% vs. 37.31% on FCOS, 47.29% vs. 38.08% on Oriented R-CNN).
### Ablation study
According to Eq. 13, the mask score can be expanded as follows:
\\[Score=S_{SAM}+S_{bmm}+S_{boxCls}-S_{offset}. \\tag{16}\\]
**The impact of P2RBox Modules.** As depicted in Tab. 3, each component in the model architecture was functionally verified and evaluated one by one. By using the predicted angle or adding evaluation scores from different modules, this paper generates different rotated pseudo-box annotations and uses them to train three mainstream detectors. The experimental results show good compatibility between the modules, and each module contributes to the improvement of the final mask quality.
\\begin{table}
\begin{tabular}{l|ccccccccccccccc|c} \hline \hline
**Method** & **PL** & **BD** & **BR** & **GTF** & **SV** & **LV** & **SH** & **TC** & **BC** & **ST** & **SBF** & **RA** & **HA** & **SP** & **HC** & **mAP\({}_{50}\)** \\ \hline \multicolumn{17}{l}{**RBox-level:**} \\ \hline RetinaNet [1] & 89.1 & 74.5 & 44.7 & 72.2 & 71.8 & 63.6 & 74.9 & 90.8 & 78.7 & 80.6 & 50.5 & 59.2 & 62.9 & 64.4 & 39.7 & 67.83 \\ FCOS [3] & 88.4 & 75.6 & 48.0 & 60.1 & 79.8 & 77.8 & 86.7 & 90.1 & 78.2 & 85.0 & 52.8 & 66.3 & 64.5 & 68.3 & 40.3 & 70.78 \\ Oriented R-CNN [5] & 89.5 & 82.1 & 54.8 & 70.9 & 78.9 & 83.0 & 88.2 & 90.9 & 87.5 & 84.7 & 64.0 & 67.7 & 74.9 & 68.8 & 52.3 & 75.87 \\ \hline \multicolumn{17}{l}{**HBox-level:**} \\ \hline H2RBox [15] & 88.5 & 73.5 & 40.8 & 56.9 & 77.5 & 65.4 & 77.9 & 90.9 & 83.2 & 85.3 & 55.3 & 62.9 & 52.4 & 63.6 & 43.3 & 67.82 \\ H2RBox-v2 [24] & 89.0 & 74.4 & 50.0 & 60.5 & 79.8 & 75.3 & 86.9 & 90.9 & 85.1 & 85.0 & 59.2 & 63.2 & 65.2 & 70.5 & 49.7 & 72.31 \\ \hline \multicolumn{17}{l}{**Point-level:**} \\ \hline PointOBB (ORCNN) [18] & 28.3 & 70.7 & 1.5 & 64.9 & 68.8 & 46.8 & 33.9 & 9.1 & 10.0 & 20.1 & 0.2 & 47.0 & 29.7 & 38.2 & 30.6 & 33.31 \\ Point2RBox-SK [17] & 53.3 & 63.9 & 3.7 & 50.9 & 40.0 & 39.2 & 45.7 & 76.7 & 10.5 & 56.1 & 5.4 & 49.5 & 24.2 & 51.2 & 33.8 & 40.27 \\ \hline \multicolumn{17}{l}{**SAM-based training:**} \\ \hline RetinaNet [1] & 79.7 & 64.6 & 11.1 & 45.6 & 67.9 & 47.7 & 74.6 & 81.1 & 6.6 & 75.7 & 20.0 & 30.6 & 36.9 & 50.5 & 26.1 & 47.91 \\ FCOS [3] & 78.2 & 61.7 & 11.7 & 45.1 & 68.7 & 64.8 & 78.6 & 80.9 & 5.0 & 77.0 & 16.1 & 31.8 & 45.7 & 53.4 & 44.2 & 50.84 \\ Oriented R-CNN [5] & 79.0 & 62.6 & 8.6 & 55.8 & 68.4 & 67.3 & 77.2 & 79.5 & 4.4 & 77.1 & 26.9 & 28.8 & 49.2 & 55.2 & 51.3 & 52.75 \\ \hline \multicolumn{17}{l}{**P2RBox-based training:**} \\ \hline RetinaNet & 86.5 & 66.2 & 15.7 & 64.8 & 70.9 & 56.3 & 76.2 & 83.6 & 45.5 & 80.1 & 34.0 & 38.4 & 38.2 & 55.7 & 37.0 & 56.66 \\ FCOS & 87.8 & 65.7 & 15.0 & 60.7 & 73.0 & 71.7 & 78.9 & 81.5 & 44.5 & 81.2 & 41.2 & 39.3 & 45.5 & 57.5 & 41.2 & 59.04 \\ Oriented R-CNN & 87.4 & 66.6 & 13.7 & 64.0 & 72.0 & 74.9 & 85.7 & 89.2 & 47.9 & 83.3 & 45.7 & 40.6 & 50.1 & 65.6 & 49.1 & 62.43 \\ \hline \hline \end{tabular}
\\end{table}
Table 1: Detection performance of each category on the DOTA-v1.0 test set
\\begin{table}
\begin{tabular}{l|cccccccccccccccccccc|c} \hline \hline
**Method** & **APL** & **APO** & **BF** & **BC** & **BR** & **CH** & **ESA** & **ETS** & **DAM** & **GF** & **GTF** & **HA** & **OP** & **SH** & **STA** & **STO** & **TC** & **TS** & **VE** & **WM** & **mAP\({}_{50}\)** \\ \hline \multicolumn{22}{l}{**RBox-level:**} \\ \hline RetinaNet [2] & 58.9 & 19.8 & 73.1 & 81.3 & 17.0 & 72.6 & 68.0 & 47.3 & 20.7 & 74.0 & 73.9 & 32.5 & 32.4 & 75.1 & 67.2 & 58.9 & 81.0 & 44.5 & 38.3 & 62.6 & 54.96 \\ FCOS [3] & 61.4 & 38.7 & 74.3 & 81.1 & 30.9 & 72.0 & 74.1 & 62.0 & 25.3 & 69.7 & 79.0 & 32.8 & 48.5 & 80.0 & 63.9 & 68.2 & 81.4 & 46.4 & 42.7 & 64.4 & 59.83 \\ Oriented R-CNN & 63.1 & 34.0 & 79.1 & 87.6 & 41.2 & 72.6 & 76.6 & 65.0 & 26.9 & 69.4 & 82.8 & 40.7 & 55.9 & 81.1 & 72.9 & 62.7 & 81.4 & 53.6 & 43.2 & 65.6 & 62.80 \\ \hline \multicolumn{22}{l}{**HBox-level:**} \\ \hline H2RBox [15] & 68.1 & 13.0 & 75.0 & 85.4 & 19.4 & 72.1 & 64.4 & 60.0 & 23.6 & 68.9 & 78.4 & 34.7 & 44.2 & 79.3 & 65.2 & 69.1 & 81.5 & 53.0 & 40.0 & 61.5 & 57.80 \\ H2RBox-v2 [24] & 67.2 & 37.7 & 55.6 & 80.8 & 29.3 & 66.8 & 76.1 & 58.4 & 26.4 & 53.9 & 80.3 & 25.3 & 48.9 & 78.8 & 67.6 & 62.4 & 82.5 & 49.7 & 42.0 & 63.1 & 57.64 \\ \hline \multicolumn{22}{l}{**PointOBB-based training:**} \\ \hline FCOS [3] & 58.4 & 17.1 & 70.7 & 77.7 & 0.1 & 70.3 & 64.7 & 4.5 & 7.2 & 0.8 & 74.2 & 9.9 & 9.1 & 69.0 & 38.2 & 49.8 & 46.1 & 16.8 & 32.4 & 29.6 & 37.31 \\ Oriented R-CNN & 58.2 & 15.3 & 70.5 & 78.6 & 0.1 & 72.2 & 69.6 & 1.8 & 3.7 & 0.3 & 77.3 & 16.7 & 4.0 & 79.2 & 39.6 & 51.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Detection performance of each category on the DIOR-R test set
**The impact of mask quality assessment weights.** For a candidate mask, \(Score=S_{SAM}+\alpha\cdot S_{bmm}+\beta\cdot S_{cgm}\). In this paper, the default weights between the different metrics (\(\alpha\), \(\beta\)) are set to 1. To test the impact of different weights on the mask evaluation metric, Tab. 4 reports experiments with varying weights for the first and second modules, demonstrating the stability of the weights within a certain range. The final pseudo-rotation annotations generated by P2RBox and the pseudo-annotations generated by the SAM parameter-free method are compared with the ground truth (GT) in terms of mIoU in Tab. 5.
## 5 Conclusion
This paper introduces P2RBox, the first SAM-based point-supervised oriented object detector to the best of our knowledge. P2RBox distinguishes features through multiple instance learning, introduces a novel method for assessing proposal masks, designs a centrality-guided module for oriented bounding box conversion, and trains a fully supervised detector. P2RBox achieves impressive detection accuracy, with the exception of complex categories like BR. P2RBox offers a training paradigm that can be based on any proposal generator, and its generated rotated bounding box annotations can be used to train various fully supervised detectors, making it highly versatile and performance-adaptive without the need for additional parameters.
\\begin{table}
\begin{tabular}{c|c c c} \hline \hline \(\beta\backslash\alpha\) & 0.8 & 1.0 & 1.2 \\ \hline
0.8 & 60.64 & 60.44 & 60.13 \\\\
1.0 & 60.65 & 60.68 & 60.49 \\\\
1.2 & 60.62 & 60.60 & 60.69 \\\\ \\hline \\hline \\end{tabular}
\\begin{tabular}{c|c c} \\hline \\hline Method & DOTA & DIOR-R \\\\ \\hline SAM & 54.57 & 49.23 \\\\ P2RBox & 60.68 & 59.61 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: The impact of different \(\alpha\) and \(\beta\) on mIoU (DOTA)

Table 5: mIoU of the generated pseudo annotations against GT on DOTA and DIOR-R
Figure 3: Examples of detection results on the DOTA dataset using P2RBox (Oriented R-CNN).
\\begin{table}
\\begin{tabular}{c c c c|c c c} \\hline \\hline \\(\\mathbf{S_{SAM}}\\) & \\(\\mathbf{S_{bmm}}\\) & \\(\\mathbf{S_{boxCls}}\\) & \\(\\mathbf{S_{offset}}\\) & \\(\\mathbf{Angle}\\) & **RetinaNet** & **FCOS** & **Oriented R-CNN** \\\\ \\hline \\(\\checkmark\\) & - & - & - & - & 47.91 & 50.84 & 52.75 \\\\ \\(\\checkmark\\) & - & - & - & \\(\\checkmark\\) & 49.07 & 51.02 & 53.79 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & - & - & \\(\\checkmark\\) & 49.60 & 52.68 & 55.45 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & - & \\(\\checkmark\\) & 52.35 & 54.73 & 58.00 \\\\ \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & 56.66 & 59.04 & 62.43 \\\\ - & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & \\(\\checkmark\\) & 56.78 & 59.35 & 62.00 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: The impact of each module of P2RBox on mAP\({}_{50}\)
## Appendix
**Upper limit of our method.** In fact, we do not create new mask proposals; we simply choose a mask from the SAM generator using our criteria. As a result, there is a performance limit. When selecting based on IoU with the ground truth, the IoU results are displayed in Tab. 6. These results demonstrate that our criterion outperforms simply selecting the highest SAM score in every category. They also highlight that for some categories performance remains poor due to very low upper limits, despite significant improvements over the baseline.
**Details about the supervised value of the angle.** Tab. 7 provides detailed information. The symmetric angle prediction method shows a slight, negligible decrease in IoU for some categories, while clearly improving PL.
However, it experiences a significant drop in the BD category. The issue arises because the annotation or ground truth for BD does not align with its symmetry axis, even when a symmetry axis is present, as illustrated in Fig. 4.
**The limitations on the upper performance bound for the Bridges category** are quite restrictive. This is primarily attributed to the distinctive nature of its definition, which deviates from conventional object definitions. Bridges are defined as road segments that span across bodies of water, leading to a situation where there are insufficient discernible pixel variations between the left and right ends of the bridge. This characteristic significantly hampers the performance of the SAM model and imposes a notable constraint on the potential performance within this category. This challenge is exemplified in Fig. 5.
**Further Visualizations of P2RBox.** Since P2RBox operates as a generative method, the characteristics of the rboxes it produces can provide insights into the model's behavior. As illustrated in Fig. 6, the red boxes represent the pseudo boxes directly generated by SAM, retaining the highest-initial-score mask and taking its minimum bounding rectangle. The green boxes use \(S_{SAM}+S_{sem}\) as the evaluation criterion. Additionally, the purple boxes are created with \(Score=S_{SAM}+S_{bmm}+S_{cgm}\) and then transformed into rboxes with predicted angles aligned with the symmetry axis.
Fig. 7 illustrates further insights into the inference outcomes on DOTA achieved with Oriented R-CNN trained on pseudo rboxes generated by P2RBox.
\\begin{table}
\\begin{tabular}{c|c c c c c c c c c c c c c c c c c} \\hline Method & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mIoU \\\\ \\hline \\hline SAM & 55.70 & 60.72 & 17.85 & 62.65 & 63.79 & 65.50 & 67.66 & 78.38 & 25.54 & 57.87 & 46.12 & 48.47 & 52.26 & 60.20 & 56.04 & 54.57 \\\\ P2RBox & 71.22 & 61.0 & 22.01 & 64.83 & 65.42 & 69.22 & 67.97 & 80.70 & 44.80 & 58.49 & 69.95 & 52.22 & 57.30 & 63.50 & 59.54 & 60.68 \\\\ IoU-highest & 74.08 & 70.39 & 26.23 & 78.53 & 69.61 & 73.48 & 74.91 & 83.43 & 47.14 & 64.61 & 70.08 & 58.37 & 66.51 & 66.81 & 64.11 & 65.89 \\\\ \\hline \\end{tabular}
\\end{table}
Table 6: IoU results of SAM (highest score), P2RBox, and the ceiling (always choosing the highest-IoU mask; SAE is used on PL and HC, the minimum bounding rectangle on the others).
\\begin{table}
\\begin{tabular}{c|c c c c c c c c c c c c c c c} \\hline Method & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC & mIoU \\\\ \\hline minimum-only & 57.83 & 66.10 & 22.01 & 64.83 & 65.42 & 69.22 & 67.97 & 80.70 & 44.80 & 58.49 & 66.95 & 52.22 & 57.30 & 63.50 & 57.77 & 59.68 \\\\ supervised-value & 71.22 & 58.14 & 21.80 & 64.94 & 65.46 & 69.12 & 68.15 & 80.37 & 43.80 & 56.00 & 64.91 & 52.87 & 57.42 & 62.95 & 59.54 & 59.78 \\\\ \\hline \\end{tabular}
\\end{table}
Table 7: IoU results of different mask-to-rbox methods.
Figure 4: GT, minimum and SAE on category BD.
**What results in bad cases when using the minimum bounding rectangle.** To illustrate this without loss of generality, let us consider an object that exhibits symmetry about the y-axis (see Fig. 8). We denote three points on the oriented circumscribed bounding box as \(a\), \(b\), and \(c\), respectively, and their corresponding mirror points as \(\hat{a}\), \(\hat{b}\), and \(\hat{c}\).
Now, suppose there exists a minimum circumscribed bounding box, denoted as \\(Mbox\\), which is distinct from \\(Rbox\\). By virtue of symmetry, \\(Mbox\\) must also exhibit symmetry, compelling its shape to be square with its diagonal aligned along the axis of symmetry. To encompass the entire object, \\(Mbox\\) must enclose points \\(a\\), \\(b\\), and \\(c\\) as illustrated in Fig. 8 (a). Let \\(d\\) represent the length of the diagonal of \\(Mbox\\). We have the following conditions:
\\[d>=\\max(h+x_{a},w/2+y_{b})-\\min(-x_{c},y_{b}-w/2), \\tag{17}\\]
\[d^{2}\leq 2\times w\cdot h. \tag{18}\]
The second inequality is derived based on the area requirement. For the more general case, as shown in Fig. 8 (b). By finding two tangent lines with fixed slopes (1 and -1), where \\(\\alpha\\cdot h\\) is the distance between the intersections of these lines with the right green edge, we obtain an equation regarding the length of the diagonal:
\\[d=w+\\alpha\\cdot h. \\tag{19}\\]
Specifically, if the width is equal to the height, \(w=h\), combining Eq. (18) with Eq. (19) gives \((1+\alpha)^{2}\leq 2\), which simplifies to:
\\[\\alpha<=\\sqrt{2}-1. \\tag{20}\\]
In conclusion, taking an airplane as an example, as shown in the last column of Fig. 1, due to the intersection ratio \\(\\alpha<\\sqrt{2}-1\\), ambiguity arises between the minimum bounding rectangle and the oriented bounding rectangle, which is well addressed by Symmetry Axis Estimation Module.
Figure 5: Category BR, with mask proposals generated by SAM with annotated point and its circumscribed rbox.
Figure 6: Visualization of generated rbox with different approaches.
Figure 8: Convex polygon example compared to the general case.
Figure 7: Visualization of P2RBox (Oriented R-CNN).
## References
* [1] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In _Advances in Neural Information Processing Systems_, pages 91-99, 2015.
* [2] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 2980-2988, 2017.
* [3] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 9627-9636, 2019.
* [4] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu. Learning roi transformer for oriented object detection in aerial images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2849-2858, 2019.
* [5] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented r-cnn for object detection. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 3520-3529, 2021.
* [6] Hakan Bilen and Andrea Vedaldi. Weakly supervised deep detection networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2846-2854, 2016.
* [7] Peng Tang, Xinggang Wang, Xiang Bai, and Wenyu Liu. Multiple instance detection network with online instance classifier refinement. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2843-2851, 2017.
* [8] Peng Tang, Xinggang Wang, Song Bai, Wei Shen, Xiang Bai, Wenyu Liu, and Alan Yuille. Pcl: Proposal cluster learning for weakly supervised object detection. _IEEE transactions on pattern analysis and machine intelligence_, 42(1):176-191, 2018.
* [9] Ze Chen, Zhihang Fu, Rongxin Jiang, Yaowu Chen, and Xian-Sheng Hua. Slv: Spatial likelihood voting for weakly supervised object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12995-13004, 2020.
* [10] Fang Wan, Pengxu Wei, Jianbin Jiao, Zhenjun Han, and Qixiang Ye. Min-entropy latent model for weakly supervised object detection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1297-1306, 2018.
* [11] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2921-2929, 2016.
* [12] Ali Diba, Vivek Sharma, Ali Pazandeh, Hamed Pirsiavash, and Luc Van Gool. Weakly supervised cascaded convolutional networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 914-922, 2017.
* [13] Xiaolin Zhang, Yunchao Wei, Jiashi Feng, Yi Yang, and Thomas S Huang. Adversarial complementary learning for weakly supervised object localization. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1325-1334, 2018.
* [14] Zhiwen Tan, Zhiguo Jiang, Chen Guo, and Haopeng Zhang. Wsodet: A weakly supervised oriented detector for aerial object detection. _IEEE Transactions on Geoscience and Remote Sensing_, 61:1-12, 2023.
* [15] Xue Yang, Gefan Zhang, Wentong Li, Xuehui Wang, Yue Zhou, and Junchi Yan. H2rbox: Horizontal box annotation is all you need for oriented object detection. _arXiv preprint arXiv:2210.06742_, 2022.
* [16] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Feipeng Da, and Junchi Yan. H2rbox-v2: Incorporating symmetry for boosting horizontal box supervised oriented object detection. _Advances in Neural Information Processing Systems_, 36, 2024.
* [17] Yu Yi, Xue Yang, Qingyun Li, Feipeng Da, Junchi Yan, Jifeng Dai, and Yu Qiao. Point2rbox: Combine knowledge from synthetic visual patterns for end-to-end oriented object detection with single point supervision. _arXiv preprint arXiv:2311.14758_, 2023.
* [18] Junwei Luo, Xue Yang, Yi Yu, Qingyun Li, Junchi Yan, and Yansheng Li. Pointobb: Learning oriented object detection via single point supervision. _arXiv preprint arXiv:2311.14757_, 2023.
* [19] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [20] Xue Yang, Junchi Yan, Ziming Feng, and Tao He. R3det: Refined single-stage detector with feature refinement for rotating object. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 3163-3171, 2021.
* [21] Jiaming Han, Jian Ding, Jie Li, and Gui-Song Xia. Align deep features for oriented object detection. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-11, 2021.
* [22] Jiaming Han, Jian Ding, Nan Xue, and Gui-Song Xia. Redet: A rotation-equivariant detector for aerial object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 2786-2795, 2021.
* [23] Javed Iqbal, Muhammad Akhtar Munir, Arif Mahmood, Afsheen Rafaqat Ali, and Mohsen Ali. Leveraging orientation for weakly supervised object detection with application to firearm localization. _Neurocomputing_, 440:310-320, 2021.
* [24] Yi Yu, Xue Yang, Qingyun Li, Yue Zhou, Gefan Zhang, Junchi Yan, and Feipeng Da. H2rbox-v2: Boosting bbox-supervised oriented object detection via symmetric learning. _arXiv preprint arXiv:2304.04403_, 2023.
* [25] Tianyu Zhu, Bryce Ferenczi, Pulak Purkait, Tom Drummond, Hamid Rezatofighi, and Anton Van Den Hengel. Knowledge combination to learn rotated detection without rotated annotation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15518-15527, 2023.
* [26] Wei Hua, Dingkang Liang, Jingyu Li, Xiaolong Liu, Zhikang Zou, Xiaoqing Ye, and Xiang Bai. Sood: Towards semi-supervised oriented object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15558-15567, 2023.
* [27] Yue Zhou, Xue Jiang, Zeming Chen, Lin Chen, and Xingzhao Liu. A semi-supervised arbitrary-oriented sar ship detection network based on interference consistency learning and pseudo label calibration. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2023.
* [28] Wenhao Wu, Hau-San Wong, and Si Wu. Pseudo-siamese teacher for semi-supervised oriented object detection. _IEEE Transactions on Geoscience and Remote Sensing_, 2024.
* [29] Trevor J Chan, Aarush Sahni, Jie Li, Alisha Luthra, Amy Fang, Alison Pouch, and Chamith S Rajapakse. Sam3d: Zero-shot semi-automatic segmentation in 3d medical images with the segment anything model. _arXiv preprint arXiv:2405.06786_, 2024.
* [30] Vazgen Zohranyan, Vagner Navasardyan, Hayk Navasardyan, Jan Borggrefe, and Shant Navasardyan. Dr-sam: An end-to-end framework for vascular segmentation, diameter estimation, and anomaly detection on angiography images. _arXiv preprint arXiv:2404.17029_, 2024.
* [31] Shumeng Li, Lei Qi, Qian Yu, Jing Huo, Yinghuan Shi, and Yang Gao. Concatenate, fine-tuning, re-training: A sam-enabled framework for semi-supervised 3d medical image segmentation. _arXiv preprint arXiv:2403.11229_, 2024.
* [32] Hong Liu, Haosen Yang, Paul J van Diest, Josien PW Pluim, and Mitko Veta. Wsi-sam: Multi-resolution segment anything model (sam) for histopathology whole-slide images. _arXiv preprint arXiv:2403.09257_, 2024.
* [33] Zhen Chen, Qing Xu, Xinyu Liu, and Yixuan Yuan. Un-sam: Universal prompt-free segmentation for generalized nuclei images. _arXiv preprint arXiv:2402.16663_, 2024.
* [34] Zhaoyang Wei, Pengfei Chen, Xuehui Yu, Guorong Li, Jianbin Jiao, and Zhenjun Han. Semantic-aware sam for point-prompted instance segmentation. _arXiv preprint arXiv:2312.15895_, 2023.
* [35] Xin Zhang, Yu Liu, Yuming Lin, Qingmin Liao, and Yong Li. Uv-sam: Adapting segment anything model for urban village identification. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 38, pages 22520-22528, 2024.
* [36] Bo Li, Haoke Xiao, and Lv Tang. Asam: Boosting segment anything model with adversarial tuning. _arXiv preprint arXiv:2405.00256_, 2024.
* [37] Pingping Zhang, Tianyu Yan, Yang Liu, and Huchuan Lu. Fantastic animals and where to find them: Segment any marine animal with dual sam. _arXiv preprint arXiv:2404.04996_, 2024.
* [38] Pengfei Chen, Xuehui Yu, Xumeng Han, Najmul Hassan, Kai Wang, Jiachen Li, Jian Zhao, Humphrey Shi, Zhenjun Han, and Qixiang Ye. Point-to-box network for accurate object detection via single point supervision. In _European Conference on Computer Vision_, pages 51-67. Springer, 2022.
* [39] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 3974-3983, 2018.
* [40] Yue Zhou, Xue Yang, Gefan Zhang, Jiabao Wang, Yanyi Liu, Liping Hou, Xue Jiang, Xingzhao Liu, Junchi Yan, Chengqi Lyu, Wenwei Zhang, and Kai Chen. Mmrotate: A rotated object detection benchmark using pytorch. In _Proceedings of the 30th ACM International Conference on Multimedia_, page 7331-7334, 2022.
* [41] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. _IEEE Transactions on Geoscience and Remote Sensing_, 60:1-11, 2022.
* [42] Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. _ISPRS journal of photogrammetry and remote sensing_, 159:296-307, 2020.
* [43] Leon Bottou. Stochastic gradient descent tricks. In _Neural Networks: Tricks of the Trade: Second Edition_, pages 421-436. Springer, 2012. | Single-point annotation in oriented object detection of remote sensing scenarios is gaining increasing attention due to its cost-effectiveness. However, due to the granularity ambiguity of points, there is a significant performance gap between previous methods and those with fully supervision. In this study, we introduce P2RBox, which employs point prompt to generate rotated box (RBox) annotation for oriented object detection. P2RBox employs the SAM model to generate high-quality mask proposals. These proposals are then refined using the semantic and spatial information from annotation points. The best masks are converted into oriented boxes based on the feature directions suggested by the model. P2RBox incorporates two advanced guidance cues: Boundary Sensitive Mask guidance, which leverages semantic information, and Centrality guidance, which utilizes spatial information to reduce granularity ambiguity. This combination enhances detection capabilities significantly. To demonstrate the effectiveness of this method, enhancements based on the baseline were observed by integrating three different detectors. Furthermore, compared to the state-of-the-art point-annotated generative method PointOBB, P2RBox outperforms by about 29% mAP (62.43% vs 33.31%) on DOTA-v1.0 dataset, which provides possibilities for the practical application of point annotations. | Write a summary of the passage below. | 255 |
Forests Separately from Vegetation
1st David R. Treadwell IV
_Computer Science_
_Northeastern University_
Seattle, WA, USA
[email protected]
2nd Derek Jacoby
_Computer Science_
_University of Victoria_
Victoria, BC, Canada
[https://orcid.org/0000-0002-1552-7484](https://orcid.org/0000-0002-1552-7484)
3rd Will Parkinson
_Earth Daily Analytics, Inc_
Vancouver, BC, Canada
[email protected]
4th Bruce Maxwell
_Computer Science_
_Northeastern University_
Seattle, WA, USA
[email protected]
5th Yvonne Coady
_Computer Science_
_University of Victoria_
Victoria, BC, Canada
[email protected]
## I Introduction
The ability to classify regions of terrain from satellite images is a critical step in monitoring the health of ecological systems. Algorithms exist to help parse the visual bands captured by satellites, such as the Level-2A Algorithm to classify terrain properties in Sentinel-2 image data. [1] However, a missing component of these image processing algorithms is the ability to parse vegetation types - specifically, the ability to separate forest areas from non-forest vegetation. Such classifications can improve land usage monitoring and aid in identifying ecological risks, which in turn can help prevent natural disasters and increase emergency preparedness.
Currently, techniques exist to identify areas of trees at the pixel level, but these are primarily designed for close-to-ground aerial images and urban environments. This paper seeks to demonstrate that by utilizing both textural and spectral data from Sentinel-2 satellite images, the binary classification of forest vs. non-forest vegetation can be performed. The static classification algorithm uses a texture mask derived from edges within an RGB image, combined with the NDVI image, to classify forest regions with high accuracy when evaluated against alternative methods. Compared to more complex classification techniques, this paper presents evidence that a simple, easy-to-understand static algorithm that combines textural and spectral data can produce high-quality results.
## II Related Work
### _Satellite imagery and preparation_
The European Space Agency (ESA) has been making Sentinel-2 data available since 2014 and offers a platform for freely downloading corrected data. [2] In this study the imagery is instead downloaded from the Earth Daily Analytics platform on Amazon Web Services. For the moment, the platform offers imagery corrected through the pipeline offered by the ESA, but there are many other emerging options. One of the authors participated in a study of correction techniques for certain classes of coastal waters [3] and it is clear that in specific cases alternative correction mechanisms are preferred. The primary element of correction is taking the top-of-atmosphere signal from the satellite and mapping it to a bottom-of-atmosphere estimate. Essentially, the goal is to cleanly remove the effects of the atmosphere from the data. There are both fixed and adaptive methods to perform this correction. The default atmospheric correction mechanism performed by ESA, called sen2cor, is a good general correction. [4] Other methods, many involving neural networks, provide superior results in specific situations. [5] Correction methods suited to forest identification will be investigated further in future work.
### _Texture Detection_
Texture, alongside spectral and contextual information, is a well-established feature in segmenting and classifying images. [6] Many techniques for utilizing texture-based features as the primary algorithmic classifier exist, as there are many ways to identify spatial and statistical patterns within an image. The conventional approach uses texture independent of spectral andcontextual information by evaluating it in grey-tone images. [6] This can be seen when using a feature bank of textures based on image statistics [6] or other techniques such as a discrete wavelet transform that decomposes an image into sub-bands that are used for feature extraction and classification. [7] Texture can also be a useful feature for machine learning classification, such as using local texture descriptors alongside a KNN classifier, which has shown promise in terrain segmentation and classification. [8]
Texture has been a popular topic of research in terrain classification. Sali and Wolfson showed that texture, via second-order statistics about local pixel neighborhoods, could result in near-perfect segmentation of satellite images. [9] When classifying areas of terrain, especially related to forest and \"natural\" vegetation such as meadows and shrubs, previous works have identified the benefits of including texture as a feature in classification [10] or solely relying on texture to train a model. [8] These results are especially positive when compared to areas of planted vegetation such as crops. [10][8] It has also been shown that combining spectral information with texture features can further enhance the accuracy of classification, [6] and that spectral ratios specifically designed to analyze vegetation can provide further improved accuracy - again, especially for forest, meadow, and shrub-like vegetation. [10]
Several methods exist to classify tree vs. non-tree areas at the pixel level. Yang et al. utilized a color, texture, and entropy feature vector to create a pixel-level classifier for trees from aerial imagery. [11] They achieved 91.7% accuracy using their classifier in urban areas with a high resolution between 0.5-1 meters per pixel. [11] However, the authors note that the classifier can struggle with areas like grass and when classifying tree types that the classifier was not trained on. [11] Similarly, the work of Jain et al. uses a Mask Region-based CNN to classify trees from aerial images taken at a high resolution, with an average precision score of 0.36 and an average recall score of 0.42. [12] Bosch builds on the work of Yang et al. through \"DetecTree\", a binary AdaBoost classifier using a GIST feature vector. [13] DetecTree was able to classify pixels with between 85.98% and 91.75% accuracy in testing, depending on the scene. [13] While strong methods exist for classifying trees from aerial images, all of these methods focus on high-resolution scenes near to the earth where there is more detail and individual trees can be detected. This is different than the low-resolution and far-distance satellite image scenes where individual trees cannot be accurately discerned (e.g. forests), which are the goal of this paper.
## III Methods
### Algorithmic Classification
To classify regions as forest or non-forest from Sentinel-2 satellite image data, the following process is used as part of a static algorithm:
_1. Obtain the RGB image from the Sentinel-2 satellite images:_ To create the RGB image, the corresponding color bands from the Sentinel-2 images were used: the B2 band for blue, B3 band for green, and B4 band for red. [14] After creating the RGB image, it is transformed from the float range [0.0, 1.0] to a range of [0, 255] integers (8-bit unsigned) by multiplying the float values by 255, then integer-dividing by 1 to round decimal values down to integer values. The images were captured from RGB Sentinel-2 tiles, with 500x500 cropped target pixel regions. The satellite images are captured from a mean altitude of 786km, [15] with an orbital swath of 290km. [16]
_2. Take the Laplacian of the image to detect edges:_ First, the RGB satellite image is converted to greyscale. Then, a Laplacian edge detection filter is applied, with a kernel size of 5. This generates an image where edges are visible, but areas within the edges are not. The filter is applied using the OpenCV Laplacian function. This function uses a Laplace operator that calculates the second derivatives of pixel neighborhoods about a target pixel. [17]
_3. Close regions of detected edges to create areas of high and low texture, rather than just showing where edges exist:_
Using a 5x5 kernel, the regions of the Laplacian image are closed using the OpenCV morphologyEx function. The morphological closing works by sliding the kernel across the image, and first dilating (a target pixel is given a value of 1 if any pixel under the kernel has a value of 1), then eroding (a target pixel is given a value of 1 if all pixels under the kernel have a value of 1). This generates an image where high-texture regions are \"activated,\" while regions of low texture are not. [17] This output image serves as the texture mask in classification.
_4. Apply a vegetation mask on the image to identify candidate areas for forests:_ The NDVI (Normalized Difference Vegetation Index) image is calculated in the same way that the Level-2A algorithm calculates it: (B8 - B4)/(B8 + B4). [1] This image shows areas of healthy vegetation that absorb the visible red wavelength but reflect the infrared wavelength (higher values in the ratio), as opposed to regions lacking healthy vegetation where more visible red and less infrared wavelengths are reflected (low values). [1] This image is used as the vegetation mask in classification.
_5. Combine the texture mask and the vegetation mask to identify areas that are likely forest vegetation:_ Each pixel of the output image is iterated over. For the examples within this report, this is the original RGB image, but it could also be the fully classified image from the scl (full terrain classification) data generated by the Sentinel-2 tiles. Pixels that are activated in both the texture mask (value greater than 64) and the NDVI image (value greater than 0.5) are classified as forest pixels in the output image.
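A minimal sketch of the five-step pipeline using OpenCV and NumPy is shown below; the band handling, the 8-bit Laplacian output depth, and the epsilon guarding the NDVI division are our assumptions, not taken from the authors' implementation.

```python
import cv2
import numpy as np

def classify_forest(b2, b3, b4, b8):
    """Static forest classifier; b2, b3, b4, b8 are Sentinel-2
    reflectance bands as float arrays in [0.0, 1.0]. The thresholds
    (64 for texture, 0.5 for NDVI) are those given above."""
    # 1. Build the 8-bit RGB image from bands B4, B3, B2.
    rgb = ((np.dstack([b4, b3, b2]) * 255) // 1).astype(np.uint8)

    # 2. Laplacian edge detection on the greyscale image, kernel size 5.
    grey = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Laplacian(grey, cv2.CV_8U, ksize=5)

    # 3. Morphological closing with a 5x5 kernel -> texture mask.
    kernel = np.ones((5, 5), np.uint8)
    texture = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # 4. NDVI vegetation mask: (B8 - B4) / (B8 + B4).
    ndvi = (b8 - b4) / (b8 + b4 + 1e-9)

    # 5. Forest = high texture AND healthy vegetation.
    return (texture > 64) & (ndvi > 0.5)
```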
### Ground-Truth Image Preparation
To compute the relative accuracy of the static algorithm, as well as the DetecTree classifications for comparison, the Forest Resources Inventory leaf-on LiDAR Landcover data provided by the Ontario Natural Resources and Forestry Ministry was used. [18] This data was last updated in April of 2023, compared to the satellite image data from June of 2023, which makes them reasonable to compare. [18] However, processing was required to try to match the original Ontario data (referred to as \"ground-truth\" data from here on) with the Sentinel-2 satellite image data.
The Sentinel-2 imagery for the test region was provided in a projection appropriate to Northern Ontario (EPSG:32615). Generally, to reduce inaccuracies it is desirable to use a projection that is as specific to the region of interest as possible. However, the more general ground truth data (the Ontario Forest Resources Inventory, FRI) is collected from airborne LiDAR sources and provided in an EPSG:3857 projection. To align the satellite and ground truth imagery, it was necessary to use QGIS to re-project the FRI data into EPSG:32615, and then to extract the portion of the imagery that corresponds to the area of interest. The result was FRI data exported and aligned with the Sentinel-2 tile region.
With a ground-truth region similar to the Sentinel-2 tile region, additional steps were taken to obtain the analysis region to compare with. Due to the challenges with aligning ground-truth data and Sentinel-2, two methods were used to obtain an estimated ground-truth scene:
1. **Different Projections** Using an original image of size (1811, 1465, 3) [rows, columns, color channels], the full-size ground-truth map is cropped to resemble the Sentinel-2 image. This corresponds roughly to trimming pure-black pixels from the top, right, and bottom edges of the ground-truth map, though the actual cropping may differ due to slight misalignments resulting from the different projection types. Pure-black ([0, 0, 0] RGB value) pixels are then concatenated horizontally to the left edge of the ground-truth image, to make it a square with the dimensions of (1783, 1783, 3). After cropping the full map to the target scene, the final ground-truth image size in this method is (81, 81, 3).
2. **Similar Projections** Using an original image of size (1098, 872, 3), the full-size ground-truth map aligned with the Sentinel-2 image data is used. Similar to method 1, pure-black pixels were concatenated to the left edge to pad the image dimensions to a square of size (1098, 1098, 3). After cropping to the target scene, the final image size in this method is (50, 50, 3).

After preparing the square images, both methods use the ratio of the ground-truth size (respective to the method) divided by the Sentinel-2 image size. These ratios use the number of rows in the image but could have used the number of columns, because the image is a square. This pixel ratio is used to obtain a square scene that roughly matches the target scene from the satellite image. When performing the comparison, for accuracy, these images are also resized to (500, 500, 3) to match the satellite image size, but this resizing is not shown in figures 1 or 2.
To compare the static algorithm's classification and ground-truth image, a pixel-by-pixel comparison of images was performed. Pixels were ignored entirely if the ground-truth image did not classify them, which could manifest as a ground-truth pixel classification of \"Other\", \"Cloud/Shadow\", or \"Disturbance\". Forest pixels were identified as belonging to any of the following classifications: \"Sparse Treed\", \"Treed Upland\", \"Deciduous Treed\", \"Mixed Treed\", \"Coniferous Treed\", \"Plantations - Treed Cultivated\", or \"Tallgrass Woodland\". The remaining terrain classifications were considered non-forest.
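A sketch of this pixel-by-pixel comparison (`pred_forest` and `gt_labels` are hypothetical names for the aligned boolean prediction array and the ground-truth class-label array):

```python
import numpy as np

FOREST = ["Sparse Treed", "Treed Upland", "Deciduous Treed", "Mixed Treed",
          "Coniferous Treed", "Plantations - Treed Cultivated",
          "Tallgrass Woodland"]
IGNORE = ["Other", "Cloud/Shadow", "Disturbance"]

valid = ~np.isin(gt_labels, IGNORE)      # drop pixels without a usable label
gt_forest = np.isin(gt_labels, FOREST)   # ground-truth forest mask
accuracy = np.mean(pred_forest[valid] == gt_forest[valid])
```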
## IV Results
An example scene with a variety of natural terrain types such as forest, non-forest vegetation, and lake water was used in experiments to test how the static algorithm handled multiple input types. The target scene is the area around Keating Lake in Ontario, Canada, Southeast of Pickle Lake. The masks/steps that lead to a forest identification using this paper's static algorithm are visualized individually, including the output identification image. An image from an open-source tree detector, DetecTree, is also included as a comparison. [13]
The images included in the figures are as follows:
* **RGB** The original RGB-generated satellite image, using the appropriate visual bands from Sentinel-2.
* **Laplacian** A Laplacian filtered image generated by the OpenCV Laplacian function.
* **Closed** A morphologically closed image generated using the OpenCV morphologyEx function.
* **NDVI** The NDVI image generated using the appropriate visual bands from Sentinel-2.
* **Forest Identification** The output image where forest pixels are highlighted in the original RGB image to demonstrate the classification. Pixels determined to be forest are given a value of [Red: 255, Green: 0, Blue: 0]. Non-forest pixels remain unchanged compared to the original RGB image.
Fig. 1: Comparison of RGB image from Sentinel-2 satellite with ground-truth map generated via method 1.
Fig. 2: Comparison of RGB image from Sentinel-2 satellite with ground-truth map generated via method 2.
* **DetecTree Forest Identification** The predicted tree regions from the DetecTree classifier. [13] Tree-classified pixels are given a pure-red value as in the Forest Identification image, for comparison between the two methods.
The figures show initially promising results, as areas that appear to be forest are highlighted red while non-forest areas are not altered. The combination of texture and spectral features tends to prevent non-forest areas from being identified, while also capturing forest areas. DetecTree provides a more conservative classification but does not always avoid type I or type II errors. For instance, parts of the lake are classified as trees, while areas of forest are ignored. DetecTree seems more conservative with classifications near forest edges but also appears to under-classify these forest regions. It is worth mentioning again that the DetecTree model used in the comparison is designed for aerial images taken closer to the surface, as opposed to satellite images, where terrain must be identified with very little close-up detail.
While the next component of the paper looks into accuracy values, it is important to keep in mind that, due to the sampling and processing methods, there may be inaccuracies in what is used as ground-truth data. From the comparison of RGB and classification images with the ground-truth image, it becomes clear that there may be differences in pixel precision for the ground-truth image. For example, in figure 4, the bottom left quadrant seems to switch somewhat randomly between forest and non-forest regions and does not appear to match exactly.
Viewing the confusion matrix for both methods of ground truth image generation, method 1 (figure 7) seems to do a better job of matching the scenes, as there are more true values and fewer false values across both classification processes than in figure 8 for method 2. In figure 7, the differences between the static algorithm and DetecTree are clearly visible: the static algorithm predicts fewer false negatives but more false positives. This also leads to more true positives, however.
Based on these tests, figure 6 shows that, using ground-truth method 1, the static algorithm accurately classifies 59.75% of pixels, while DetecTree accurately classifies 58.69% of pixels. For method 2, the accuracy for both detection processes decreases, with the static algorithm correctly classifying 49.91% and DetecTree correctly classifying 47.79% of pixels. From these results, it does appear that the static algorithm is outperforming DetecTree slightly.
A visual inspection, however, shows DetecTree not accurately classifying the lake region, while the static algorithm performs extremely well for this scene. The discrepancy between the accuracy values and the visual results is most likely due to flaws in the ground-truth data. Again, areas that appear as forests in the RGB satellite image are not always classified as forest regions in the ground-truth image. It appears that the terrain classification in the ground-truth image is more generalized, and less precise, than what can be seen in the satellite images. This, combined with the morphological resizing operation to create images of the same size, and the manual steps necessary to generate a comparable ground-truth image, leaves room for error in the accuracy calculation. Without precise ground-truth data, it is difficult to make assumptions about the quantitative comparison, but the static algorithm may be performing even better than its accuracy value suggests, and there is evidence to suggest it performs better than DetecTree.
## V Conclusion and Next Steps
This work sought to identify a static algorithm that could provide a pixel-level classification of vegetation as either forest or non-forest from Sentinel-2 satellite images. Classifiers currently exist for identifying tree pixels in images, but they are designed for aerial photos that are closer to the ground than the high-level Sentinel-2 satellite images being used in this paper. [13] The process identified in this paper shows strong initial results, even with just a basic texture filter and NDVI mask, but more work is necessary to properly quantify its accuracy level.
As seen in the results section, the ground-truth data used to compare classification processes may lack some inherent accuracy, which diminishes the metrics for the classification processes compared in this paper. Identifying a source of ground-truth data, with label precision at the same level of detail as the satellite images being used, is a critical next step to verify the results of any detection algorithm. Various government agencies maintain detailed terrain inventories, though these can present inaccuracies. [18] A complete assessment of multiple forest classification processes will require creating a database with various terrain types, including multiple forest types (e.g. temperate, tropical, boreal), labeled with the true terrain classifications. [19]
Future work will also require creating a more rigorous feature bank designed specifically for parsing forest areas at the desired field of view and resolution. By supplementing texture features with additional spectral and contextual features, there is potential to further increase accuracy. One example is testing which color bands and visual ratios best leverage the spectral information the satellites capture, which may also include using multiple ratios in a spectral feature bank. To this end, steps remain to fine-tune a highly accurate static algorithm to identify forest areas separately from non-forest vegetation, but initial results show that even a basic combination of textural and spectral features can provide a high-quality classification.
## References
* [1] Level-2A algorithm overview, Sentinel-2 MSI technical guide, Sentinel Online. [Online]. Available: [https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-mssi/level-2a/algorithm-overview](https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-mssi/level-2a/algorithm-overview)
* [2] F. Gascon, C. Bouziane, O. Theguat, M. Jung, B. Francesconi, J. Louis, V. Looijt, B. Lafrance, S. Massera, A. Gaudel-Vacaresse, F. Languille, B. Alahmund, F. Vialleflefon, B. Pflug, J. Bieniarz, S. Clerc, L. Pessiot, T. Tremas, E. Cadau, R. De Bonis, C. Isola, P. Martimort, and V. Fernandez, \"Copernicus Sentinel-2A Calibration and Products Validation Status,\" _Remote Sensing_, vol. 9, no. 6, p. 584, Jun. 2017, number: 6 Publisher: Multidisciplinary Digital Publishing Institute. [Online]. Available: [https://www.mdpi.com/2072-4292/9/6/584](https://www.mdpi.com/2072-4292/9/6/584)
* [3] F. Giannini, B. P. Hunt, D. Jacoby, and M. Costa, "Performance of OLCI Sentinel-3A satellite in the Northeast Pacific coastal waters," _Remote Sensing of Environment_, vol. 256, p. 112317, 2021, publisher: Elsevier. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S0034425721000353](https://www.sciencedirect.com/science/article/pii/S0034425721000353)
* [4] M. Main-Knorn, B. Pflug, J. Louis, V. Debaecker, U. Muller-Wilm, and F. Gascon, "Sen2Cor for Sentinel-2," in _Image and Signal Processing for Remote Sensing XXIII_, vol. 10427. SPIE, Oct. 2017, pp. 37-48. [Online]. Available: [https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10427/1042704/Sen2Cor-Sentinel-2/10.1117/12.2278218.full](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10427/1042704/Sen2Cor-Sentinel-2/10.1117/12.2278218.full)
* [5] W. Li, Y. Huang, Q. Shen, Y. Yao, W. Xu, J. Shi, Y. Zhou, J. Li, Y. Zhang, and H. Gao, \"Assessment of Seven Atmospheric Correction Processors for the Sentinel-2 Multi-Spectral Image over Lakes in Qinghai Province,\" _Remote Sensing_, vol. 15, no. 22, p. 5370, Jan. 2023, number: 22 Publisher: Multidisciplinary Digital Publishing Institute. [Online]. Available: [https://www.mdpi.com/2072-42927/25/570](https://www.mdpi.com/2072-42927/25/570)
* [6] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," _IEEE Transactions on Systems, Man, and Cybernetics_, vol. SMC-3, no. 6, pp. 610-621, 1973. [Online]. Available: [http://ieeexplore.ieee.org/document/43039134/](http://ieeexplore.ieee.org/document/43039134/)
* [7] S. Arivazhagan and L. Ganesan, "Texture classification using wavelet transform," _Pattern Recognition Letters_, vol. 24, no. 9, pp. 1513-1521. [Online]. Available: [https://www.sciencedirect.com/science/article/pii/S0167865502003902](https://www.sciencedirect.com/science/article/pii/S0167865502003902)
* [8] A. Suruliandi and S. Jenicka, "Texture-based classification of remotely sensed images," _International Journal of Signal and Imaging Systems Engineering_, vol. 8, no. 4, p. 260. [Online]. Available: [http://www.inderscience.com/link.php?id=70546](http://www.inderscience.com/link.php?id=70546)
* [9] E. Sali and H. Wolfson, "Texture classification in aerial photographs and satellite data," _International Journal of Remote Sensing_, vol. 13, no. 18, pp. 3395-3408. [Online]. Available: [https://www.tandfonline.com/doi/full/10.1080/01431169208904130](https://www.tandfonline.com/doi/full/10.1080/01431169208904130)
* [10] Y. Zhang, X. Han, and J. Yang, \"Exploring the performance of spectral and textural information for leaf area index estimation with homogeneous and heterogeneous surfaces,\" _The Photogrammetric Record_, vol. 38, no. 183, pp. 233-251, 2023. [Online]. Available: [https://onlinelibrary.wiley.com/doi/abs/10.1111/brbr.12450](https://onlinelibrary.wiley.com/doi/abs/10.1111/brbr.12450)
* [11] L. Yang, X. Wu, E. Praun, and X. Ma, \"Tree detection from aerial imagery,\" in _Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems_, ser. GIS '09. Association for Computing Machinery, pp. 131-137. [Online]. Available: [https://doi.org/10.1145/1653771.1653792](https://doi.org/10.1145/1653771.1653792)
* [12] S. Jain, R. Mittal, A. Jain, and A. Sinha, "An efficient framework for monitoring tree cover in an area through aerial images," in _Applications of Machine Learning_, vol. 11139. SPIE, pp. 327-333. [Online]. Available: [https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11139/1113918/An-efficient-framework-for-monitoring-tree-cover-in-an-area/10.1117/12.2529442.full](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11139/1113918/An-efficient-framework-for-monitoring-tree-cover-in-an-area/10.1117/12.2529442.full)
Fig. 8: Confusion matrix for both the static algorithm and DetecTree using ground-truth method 2.
* [13] M. Bosch, \"DetecTree: Tree detection from aerial imagery in python,\" vol. 5, no. 50, p. 2172. [Online]. Available: [https://joss.theoj.org/papers/10.21105/joss.02172](https://joss.theoj.org/papers/10.21105/joss.02172)
* [14] GISGeography. Sentinel 2 bands and combinations. [Online]. Available: [https://gisgeography.com/sentinel-2-bands-combinations/](https://gisgeography.com/sentinel-2-bands-combinations/)
* [15] Sentinel-2 satellite description, orbit, Sentinel Online. [Online]. Available: [https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2/satellite-description/orbit](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2/satellite-description/orbit)
* [16] Sentinel-2 mission overview, Sentinel Online. [Online]. Available: [https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2/overview](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2/overview)
* [17] OpenCV: Image filtering. [Online]. Available: [https://shorturl.at/cfiAL](https://shorturl.at/cfiAL)
* [18] Forest Resources Inventory leaf-on LiDAR, Ontario Data Catalogue. [Online]. Available: [https://data.ontario.ca/dataset/forest-resources-inventory-leaf-on-lidar](https://data.ontario.ca/dataset/forest-resources-inventory-leaf-on-lidar)
* [19] Forest biome. [Online]. Available: [https://education.nationalgeographic.org/resource/forest-biome](https://education.nationalgeographic.org/resource/forest-biome)
The Asymmetric Effects of Oil Price Shocks on the Chinese Stock Market: Evidence from a Quantile Impulse Response Perspective
Huiming Zhu, Xianfang Su, Yawei Guo and Yinghua Ren

College of Business Administration, Hunan University, Changsha 410082, China; [email protected] (X.S.); [email protected] (Y.G.); [email protected] (Y.R.)

* Correspondence: [email protected]; Tel.: +86-731-8882-3670
## 1 Introduction
Crude oil has been the focus of greater attention than other commodities. It plays a significant role in the economy and the financial market. Many economists have noticed the impacts of crude oil price on the economy [1; 2; 3; 4; 5]. As a barometer of the economy, the stock market's reaction is a reasonable and useful measure to reflect the impact of oil price shocks on the economy [6]. Abundant literature has investigated the link between oil price and stock markets and concludes that oil price shocks have substantial effects on stock markets [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21].
Particularly, China has become the largest oil importer and consumer [22]. Table 1 shows that the crude oil imports of China increased rapidly from 126.8 million tons in 2005 to 335.5 million tons in 2015, and oil imports accounted for 60.6% of the country's total oil consumption in 2015. China is a giant player in the global oil market. In addition, the Chinese stock market has become the second largest in terms of both trading volume and capitalization. Therefore, a lot of literature has focused on the link between oil prices and the Chinese stock market [23; 24; 25; 26; 27; 28; 29; 30]. These studies, however, do not elaborate on the various complexities in the link between oil prices and stock prices. For instance, stock markets may react differently to oil price shocks during bullish and bearish times. The effects of oil price shocks may be nonlinear, or may be asymmetric between negative and positive oil price shocks. Therefore, the effects of oil price shocks may be heterogeneous, varying with market conditions and the type of oil price shock. It is worth examining the effect of oil price shocks on the Chinese stock market while simultaneously controlling for the oil and stock market conditions.
In this paper, this issue will be addressed by using a quantile impulse response function which can be calculated via a quantile vector autoregression frame. This method was first proposed by Cecchetti and Li [33] and has been used to explore the asymmetric and nonlinear effects between financial and economic variables [34; 35]. Lee and Kim [36] argue that the quantile impulse response function is widely applicable and is beneficial in that it captures the dynamic responses that the conventional impulse response does not explain. Moreover, the impulse responses in multiple quantiles can be deemed a scenario analysis that is a forecast of a variable under a scenario of a specific economic shock. Therefore, we use a quantile impulse response approach to investigate the impact of oil price shocks on stock market returns during different oil market states, with particular interest in extreme stock market circumstances (i.e., bearish and bullish markets).
We will investigate the responses of Chinese stock market returns to oil price shocks in several respects. First, each variable in the quantile VAR (Vector Autoregression) model can be defined to be at some specific state of the business cycle for the analysis, i.e., the quantiles may differ across equations. In order to analyse the asymmetric shock effects, we propose relating conditional quantiles of the change rates of supply and demand in the oil market to business states [34]. In the oil market, lower negative change rates of global oil production, global real economic activity and the real oil price (at lower quantiles of the change rate distribution) can be attributed to business busts, whereas higher positive change rates (at upper quantiles of the change rate distribution) can be attributed to a boom business state. This means that during a business bust phase, the supply and demand in the oil market should decrease, while during a business boom phase, the supply and demand will increase. This assumption is reasonable: in a boom state, for example, oil producers may adjust their oil production to increase the oil supply on the one hand, while high demand for oil would drive up the price of oil on the other. Therefore, lower quantiles correspond to a bust oil market state, while upper quantiles correspond to a boom oil market state. Additionally, the lower quantiles of the stock market return distribution correspond to a bearish market and its upper quantiles correspond to a bullish market.
Second, following Kilian and Park [12], oil price shocks are broken down into three different components: supply shocks, aggregate demand shocks and specific demand shocks. Then we investigate their effects on the Chinese stock market. Applying a quantile impulse response analysis, we study the direction, magnitude and duration of effects of oil price shocks on the Chinese stock market during different oil market conditions. Our empirical evidence indicates that the reaction of the Chinese stock market to oil price shocks depends on oil market conditions and the underlying forces of oil price shocks. Specifically during a bust phase, oil supply and demand shocks significantly depress stock market returns, while during a boom phase, only aggregate demand shock is positive, persistent and statistically significant. Additionally, we find that the different stock market circumstances (i.e., bearish and bullish markets) have different responses to oil price shocks.
Third, we examine the impact of oil price uncertainty on the Chinese stock market. Some studies have investigated the relationship between oil prices and the Chinese stock market; however, few examine the effects of oil price uncertainty on the Chinese stock market. In this paper, we use a quantile impulse response analysis to investigate the shock effects of oil price uncertainty. Our results indicate that in a bearish market, oil supply and demand uncertainties have positive shock effects on Chinese stock market returns, while in a bullish market, they show negative shock effects.
Table 1: World's oil production, China's crude oil imports and China's oil import dependence during the 2005-2015 period.

| | **2005** | **2006** | **2007** | **2008** | **2009** | **2010** | **2011** | **2012** | **2013** | **2014** | **2015** |
|---|---|---|---|---|---|---|---|---|---|---|---|
| WOP | 3938 | 3964 | 3951 | 3987 | 3887 | 3979 | 4012 | 4119 | 4127 | 4229 | 4362 |
| COI | 126.8 | 145.1 | 163.1 | 178.8 | 203.6 | 239.3 | 253.7 | 271.0 | 281.9 | 308.3 | 335.5 |
| OID | 42.9 | 47 | 47.2 | 49.8 | 51.3 | 54.8 | 56.1 | 56.4 | 57 | 59.6 | 60.6 |

Note: WOP denotes the world oil production (in million tons), COI denotes the Chinese crude oil imports (in million tons) and OID denotes the oil import dependence (in percentage) [31; 32].
The rest of this paper is organized as follows. Section 2 briefly reviews the literature. The methodology is presented in Section 3. Section 4 describes the data and their descriptive statistics. Section 5 provides empirical results and Section 6 concludes.
## 2 Literature Review
Numerous studies have illustrated the effects of oil price shocks on stock markets. For example, with a standard cash flows/dividends valuation model, Jones and Kaul (1996) investigate the effects of oil price shocks on the stock markets of four developed countries: the US, Canada, Japan and the UK. Their empirical results show the important effects of changes in the oil price on the four countries' stock markets. Huang et al. (1996) support causality effects from oil futures prices to stock prices. Sadorsky (1999) shows the significant role of oil price shocks and volatility in explaining real stock returns in the US and a negative effect of oil price volatility on stock prices. Papapetrou (2001) provides evidence that oil prices go a long way in explaining stock price movements in the Greek stock market and shows the suppressing effects of positive oil price shocks on real stock market returns. Park and Ratti (2008) comprehensively analysed the impact of oil price shocks on the stock markets of the United States and 13 European countries. The effect of oil price shocks on oil-importing countries is negative, whereas the effect on oil-exporting countries is positive.
To capture the specific relationships between the oil price and stock market returns, Kilian (2009) adopts a structural VAR to decompose oil price shocks into oil supply shocks, aggregate demand shocks and oil-specific demand shocks. Following the procedure of Kilian (2009), many studies have examined the impact of oil supply and demand shocks on the stock market. Kilian and Park [12] show the different responses of the aggregate US real stock market to different types of oil shocks; specifically, oil supply shocks have no significant impact on the stock market. Apergis and Miller (2009) investigate the effects of structural oil price shocks on eight developed countries' stock markets (Australia, Canada, France, Germany, Italy, Japan, the United Kingdom and the United States). They find little evidence that oil market shocks have a significant impact on those stock markets. Basher et al. (2012) investigate the dynamic relationship between oil shocks and emerging stock markets. They show that positive oil supply shocks decrease oil prices, while positive oil demand shocks increase the oil price. Moreover, oil prices react positively to positive shocks of the emerging stock markets. Wang et al. (2013) study the effects of the three structural shocks on the stock market in nine oil-importing countries and seven oil-exporting countries. Their empirical results suggest that the effect of oil price shocks on the stock market of a country depends on the net position of that country and the type of oil price shock.
Recently, many economists have studied the effect of oil price shocks on the Chinese stock market. For example, Cong et al. (2008) adopted the VAR approach to explore this topic and found no effect of oil price shocks on the Chinese stock market, except in the manufacturing sector and some oil companies. Employing an ARJI(-ht)-EGARCH model, Zhang and Chen (2011) found that the Chinese stock market correlated only with expected volatilities in global oil prices and that global oil prices have only minor positive effects on the Chinese stock market. Fang and You (2014) evaluated the effects of oil supply and demand shocks on the Chinese stock market with a structural VAR approach. They concluded that oil-specific demand shocks had a significant negative effect, whereas global demand oil shocks had no statistically significant effect. Based on extreme value theory, Chen and Lv (2015) show a positive extremal dependence between oil prices and the Chinese stock market. It is noteworthy that Fang and You (2014) looked at different types of oil shocks but not at effects across quantiles, whereas Chen and Lv (2015) suggested that the extremes of the distributions of the oil market and stock market are dependent but did not look at different types of oil shock.
In a previous paper [43], we applied a quantile regression technique to investigate the linkage between oil prices and Chinese stock market returns. Our results showed that the effects of crude oil price shocks on industry stock market returns were significantly heterogeneous across the stock return distribution. However, we did not identify the triggering forces of oil price shocks in that paper. In addition, relative to the quantile regression method, the quantile impulse response method can systematically analyse the shock effects under different stock and oil market states (boom or bust). Therefore, we further analyse the impact of oil price shocks on the Chinese stock market in the present paper.
Focusing on understanding how large positive or negative oil price shocks impact different stock market return quantiles, Sim and Zhou [44] proposed a novel quantile-on-quantile approach to examine the quantile dependence relationship between oil prices and United States stock returns. In this paper, the lower oil price shock quantiles are regarded as large and negative oil price shocks, while the upper oil price shock quantiles are seen as large and positive oil price shocks. They find that the relationship between oil prices on US equities is asymmetric. Along these lines, Reboredo and Ugolini [45] use a copula-quantile method to capture quantile dependence between oil price shocks and stock market returns in three developed economies and five BRICS countries. They examine how quantile and interquantile oil price movements impact different stock return quantiles and find asymmetric effects of oil price shocks on stock market returns. However, both studies analyse the asymmetric effects of oil price shocks on stock market returns in two-dimensional systems, so we cannot extend their methods to higher-dimensional situations. The present methodology has the advantage in that it can easily accommodate higher-dimensional models.
Our research sought to explore the impact of oil supply and demand shocks on Chinese stock market returns during different oil market phases. To the best of our knowledge, no paper has yet thoroughly investigated this topic. Furthermore, we adopt the quantile impulse response analysis based on a quantile structural VAR model because it offers a structural multiple equation framework i.e., it is vital to model the interaction effects of a system of variables.
## 3 Methodology
Our interest is using the quantile impulse response to uncover the dynamic responses of Chinese stock market returns to oil market structural shocks in the different phases of the business. In order to calculate quantile impulse responses, we use a quantile structural vector autoregression technique which employs a quantile regression method to estimate a structural vector auto-regressions model. In this section, we first briefly introduce the quantile regression method, then present the reduced form of the quantile structural VAR and identify the structural shocks to conduct quantile impulse response analysis.
### Quantile Regression
Quantile regression can be seen as a flexible generalization of standard regression [46]. Compared with the least-squares method, it has two major advantages. The first is that quantile regression estimates can be robust to non-Gaussian or heavy-tailed data, and the second is that the quantile regression model provides easily interpretable regression estimates at any quantile \(\tau\in(0,1)\). The quantile regression model explains the \(\tau\)-th quantile of \(y_{t}\), given the values of a vector of explanatory variables \(x_{t}\), as
\\[Q_{\\tau}(y_{t}|x_{t})=x_{t}\\beta(\\tau) \\tag{1}\\]
where \\(Q_{\\tau}(y_{t})=F^{-1}(\\tau)\\), \\(F(y_{t})\\) is the probability distribution function of the random variable \\(y_{t}\\). The notation \\(\\beta(\\tau)\\) highlights that there is a potentially different parameter vector at each respective quantile \\(\\tau\\) of the distribution. The unknown parameter vector \\(\\beta(\\tau)\\) can be estimated for any quantile \\(\\tau\\in(0,1)\\) by minimizing the following expression with respect to \\(\\beta(\\tau)\\):
\\[\\hat{\\beta}(\\tau)=\\underset{\\beta(\\tau)}{arg\\min}\\sum_{t=1}^{T}\\left(\\tau-1_{ \\{y_{t}<x_{t}\\beta(\\tau)\\}}\\right)\\ \\left|y_{t}-x_{t}\\beta(\\tau)\\right| \\tag{2}\\]
where \\(1_{\\{y_{t}<x_{t}\\beta(\\tau)\\}}\\) is the usual indicator function. The solution to the quantile regression model is obtained using the programming algorithm suggested by Koenker and D'Orey (2007). The resulting estimate \\(\\hat{\\beta}(\\tau)\\) can capture the extent to which the \\(\\tau-th\\) quantile of the conditional distribution of \\(y_{t}\\) changes if \\(x_{t}\\) changes by one unit. In this way, one can characterize the impact of changes in the \\(x_{t}\\) variables on the whole conditional distribution of \\(y_{t}\\), measured at any of its quantiles that are of interest to the researcher.
### The Reduced-Form Quantile Structural VAR
From the point of view of program implementation, we first present the reduced form of quantile structural VAR. The structural model is then recovered in a second step by decomposing the covariance matrix of the error term. In this paper, the reduced form quantile VAR is given as
\[y_{t}=c(\tau)+\sum_{i=1}^{p}B_{i}(\tau)y_{t-i}+e_{t}(\tau),\quad t=1,\ldots,T \tag{3}\]
where
\\[y_{t}=\\left(\\begin{array}{c}\\Delta prod_{t}\\\\ \\Delta era_{t}\\\\ \\Delta rpo_{t}\\\\ \\Delta rps_{t}\\end{array}\\right)\\ c(\\tau)=\\left(\\begin{array}{c}c_{1}(\\tau_{ 1})\\\\ c_{2}(\\tau_{2})\\\\ c_{3}(\\tau_{3})\\\\ c_{4}(\\tau_{4})\\end{array}\\right)\\ e_{t}(\\tau)=\\left(\\begin{array}{c}e_{t}^{ Aprod}(\\tau_{1})\\\\ e_{t}^{Area}(\\tau_{2})\\\\ e_{t}^{A\\pi ro}(\\tau_{3})\\\\ e_{t}^{A\\pi rps_{t}}(\\tau_{4})\\end{array}\\right),\\]
\\[B_{i}(\\tau)=\\left(\\begin{array}{ccc}\\beta_{i,11}(\\tau_{1})&\\beta_{i,12}(\\tau _{1})&\\beta_{i,13}(\\tau_{1})&\\beta_{i,14}(\\tau_{1})\\\\ \\beta_{i,21}(\\tau_{2})&\\beta_{i,22}(\\tau_{2})&\\beta_{i,23}(\\tau_{2})&\\beta_{i,24 }(\\tau_{2})\\\\ \\beta_{i,31}(\\tau_{3})&\\beta_{i,32}(\\tau_{3})&\\beta_{i,33}(\\tau_{3})&\\beta_{i, 34}(\\tau_{3})\\\\ \\beta_{i,41}(\\tau_{4})&\\beta_{i,42}(\\tau_{4})&\\beta_{i,43}(\\tau_{4})&\\beta_{i,4 4}(\\tau_{4})\\end{array}\\right).\\]
Here, \\(y_{t}\\) is a 4 vector of endogenous variables in which \\(\\Delta prod\\) denotes the percentage change in global crude oil production, \\(\\Delta rea\\) denotes the percentage change in global real economic activity, \\(\\Delta rpo\\) denotes percentage change in the real price of oil, and \\(\\Delta rps\\) denotes the percentage change in real stock price index. \\(c(\\tau)\\) is a 4 vector of intercepts at quantiles \\(\\tau=(\\tau_{1},\\tau_{2},\\tau_{3},\\tau_{4})^{\\prime}\\), \\(B_{i}(\\tau)\\) for \\(i=1, ,p\\) denotes the matrix of lagged coefficients at quantiles \\(\\tau=(\\tau_{1},\\tau_{2},\\tau_{3},\\tau_{4})^{\\prime}\\), and \\(e_{t}(\\tau)\\) is a vector of error terms.
The quantile VAR model emphasizes the important fact that the parameters in each equation may pertain to different quantiles of the conditional distribution of the respective left-hand side variable. This model is attractive because it allows us to use the estimated model to answer a variety of interesting empirical questions. For instance, there is a boom oil market in which the oil supply, the real economic activity and oil price are in the 90th percentile of the data \\((\\tau_{1}=0.9,\\ \\tau_{2}=0.9,\\ \\tau_{3}=0.9)\\), whereas in a bearish stock market, stock returns are in the 10th percentile of the distribution \\((\\tau_{4}=0.1)\\).
To obtain the estimated coefficient matrices \(\hat{B}_{i}(\tau)\) and \(\hat{c}(\tau)\), we assume that the errors \(e_{t}(\tau)\) satisfy the population quantile restrictions \(Q_{\tau}(e_{t}(\tau)\left|y_{t-1},\ldots,y_{t-p}\right.)=0\). These restrictions imply that the population responses of \(y\) at quantiles \(\tau=(\tau_{1},\tau_{2},\tau_{3},\tau_{4})^{\prime}\) are characterized by
\\[Q_{\\tau}(y_{t}|y_{t-1}, ,y_{t-p})=c(\\tau)+\\sum_{i=1}^{p}B_{i}(\\tau)y_{t-i} \\tag{4}\\]Since each equation of (4) has the same right-hand side, the model can be estimated on an equation-by-equation basis for each vector \\(\\tau\\) using quantile regression [33].
### The Quantile Impulse Response
In order to identify the structural shocks of the global oil market, we impose a structural restriction on the error terms \(e_{t}\), which specifies the contemporaneous relationship between the reduced-form disturbances and the underlying structural disturbances. Modifying the procedure of Kilian and Park [12], we decompose the structural shocks as follows:
\\[e_{t}=\\left(\\begin{array}{c}e_{t}^{\\Delta prod}\\\\ e_{t}^{\\Delta expo}\\\\ e_{t}^{\\Delta expo}\\\\ e_{t}^{\\Delta exps}\\end{array}\\right)=\\left(\\begin{array}{cccc}a_{11}&0&0&0 \\\\ a_{21}&a_{22}&0&0\\\\ a_{31}&a_{32}&a_{33}&0\\\\ a_{41}&a_{42}&a_{43}&a_{44}\\end{array}\\right)\\left(\\begin{array}{c}\\epsilon_ {t}^{\\text{\\emph{Coll supply shock}}}\\\\ \\epsilon_{t}^{\\text{\\emph{Aggregate demand shock}}}\\\\ \\epsilon_{t}^{\\text{\\emph{Coll-specific shock}}}\\\\ \\epsilon_{t}^{\\text{\\emph{Stock-specific shock}}}\\end{array}\\right) \\tag{5}\\]
where \\(\\epsilon_{t}\\) represents a white noise process in which the covariance matrix is an identity matrix. The identifying restrictions underlying Equation (5) have their own nature and origin. First, because oil production is capital-intensive and time-consuming, therefore the oil supply shock cannot trigger changes of aggregate demand and crude oil price in the short term. Second, although the response of global economy activity to oil-specific demand shock is lagging behind, it can still react immediately to oil supply shock. In addition, in the short term, stock price shock cannot influence world economic activity. Third, the expected shortfall in oil supply and global economic activity can create changes in precautionary demand [12]. The oil-specific shock responses to oil supply and aggregate demand shocks. Finally, many other factors such as changes in interest rates and exchange rates can trigger stock-specific shock [17; 48].
Given the structural shocks identified above, we orthogonalize the covariance matrix of the residuals using a Cholesky decomposition and calculate the associated quantile-specific impulse response functions. The 95% confidence intervals of the impulse response functions are obtained using a bootstrap approach and are provided to judge the statistical significance of the responses. Using the impulse response function, we are able to implement impulse response analysis at the quantile levels of interest and plot the impact of a one-unit increase in one variable's innovation at date \(t\) on another variable at date \(t+s\). Therefore, we can trace the response of the variable of interest when shocks occur in other variables under various quantiles.
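A sketch of this computation, assuming `B_hat` and the residual matrix `resid` come from the reduced-form estimation above and omitting the bootstrap bands:

```python
import numpy as np

def quantile_irf(B_hat, resid, horizon=15):
    """Orthogonalized impulse responses: irf[s, m, n] is the response of
    variable m at horizon s to a one-unit structural shock n."""
    p, k, _ = B_hat.shape
    A = np.linalg.cholesky(np.cov(resid.T))  # lower-triangular matrix in (5)
    Psi = [np.eye(k)]                        # reduced-form MA coefficients
    for s in range(1, horizon + 1):
        Psi.append(sum(B_hat[i] @ Psi[s - 1 - i] for i in range(min(s, p))))
    return np.array([P @ A for P in Psi])
```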
## 4 Data and Summary Statistics
The data applied in this paper are monthly and span the period from October 2001 to December 2015. We selected October 2001 as the start date because the State Council of China did not further deregulate the domestic oil-pricing mechanism until October 2001. In order to identify oil market supply and demand shocks on the Chinese stock market, global oil production, a global real economic activity index, the world oil price and a stock price index are collected. The global oil production data, measured in thousands of barrels per day, are sourced from the US Energy Information Administration. The global real economic activity index is likely to capture shifts in the demand for industrial commodities in global business markets and is available for download from Lutz Kilian's homepage. Moreover, Kilian [5] has fully explained the rationale of using this index to measure the global demand for oil. The world real oil price in dollars per barrel is measured using the US refiner acquisition cost of crude oil and then deflated by the US Consumer Price Index (CPI). It is worth noting that we abandoned the country-specific oil price and instead used the world oil price, because the world oil price captures the impact of oil price shocks on stock markets better than country-specific prices, which reflect offsetting movements in exchange rates [11]. The world oil price may affect the Chinese stock market through different channels. On the one hand, because China is a net oil-importer, a dramatic increase in the world oil price may lead to lower consumption demand and higher production costs, all of which would constrain economic development and ultimately decrease stock market returns. On the other hand, global economic cycles influence China's real economy and financial market; a fluctuation of the world oil price will therefore affect China's exports and real economy, and the Chinese stock market will reflect world oil price trends. As a proxy for the Chinese stock market, the Shanghai Composite index is collected from CSMAR. The Shanghai Composite index is a stock market index of all stocks (A shares and B shares) traded on the Shanghai stock exchange and reflects the changes in the prices of those stocks. The nominal price index is converted to a real price index using the Chinese CPI. The CPI data are available from the OECD.
Figure 1 plots the real oil price, as well as Shanghai Composite index prices. Observe that the real oil price and stock prices display an inconsistent variation trend, sometimes moving together but frequently moving in opposite directions, which verifies that various complexities exist in the relationship between oil prices and stock prices. According to Figure 1, we can intuitively identify the periods of boom and bust of the oil and stock markets. For the world oil market, the most remarkable surge in the price of oil since 1979 occurred between mid-2003 and mid-2008; this surge was not caused by oil supply disruptions but by a series of increases in crude oil demand associated with an unexpected expansion of the global economy, driven in particular by strong additional demand for oil from emerging Asia. From June 2008 to February 2009, the world oil price dropped sharply as a result of the 2008 financial crisis. In 2009, once the collapse of the global financial system was no longer imminent, the demand for oil recovered to levels prevailing in 2007, and the price of oil stabilized near 100 US dollars per barrel between 2010 and early 2014. Between June 2014 and January 2015, the world oil price declined sharply; after a slight and transitory rise, it continued to fall from July 2015 onwards. The Chinese stock market underwent a bull market four times in the sample period. The Shanghai Composite index increased from 1080.94 in June 2005 to 5954.77 in October 2007, triggered by the reform of the shareholder structure and the RMB appreciation. From November 2008 to July 2009, the four-trillion-yuan investment of the central government and the ten industrial revitalization plans drove the stock price surge. Under the influence of the second round of quantitative easing monetary policy, the Chinese stock market experienced a transitory bull market from July to November 2010. Between July 2014 and June 2015, a series of reform programs and policy guidance from the Chinese government drove the Shanghai Composite index from 2201.56 to 4277.22.
Figure 1: Dynamics of the real oil price and Shanghai composite index from October 2001 to December 2015.
In order to check whether fundamental oil supply and demand shocks influence the Chinese stock market, we incorporate changes in crude oil production, economic activity and the crude oil price into the shocks mentioned above. These variables were thus first-differenced to obtain their real change rates.
Table 2 presents descriptive summary statistics for this study. The maximum and minimum values demonstrate that the change in crude oil production varies far less than global economic activity and the oil price change, which indicates that the crude oil supply increases stably while global economic activity and oil-specific demand fluctuate drastically. Thus, we conclude that shocks to crude oil demand have a large impact on the fluctuations of the crude oil price. All of the variables show negative skewness and excess kurtosis. The Shapiro-Wilk test results show that normality of the change rates of the oil market variables and real stock returns is strongly rejected, which further validates the rationality of using the quantile regression method to estimate the VAR model.
Granger and Newbold (1974) noted that a VAR model that contains non-stationary variables may suffer from a spurious regression problem, so the results may not be reliable. Accordingly, we pretest our variables for stationarity using the Augmented Dickey-Fuller (ADF) test. The results of the unit root tests are shown in Table 3; each variable in the structural VAR model is stationary.
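For reference, a sketch of the tests behind Table 3 using statsmodels, with a placeholder series standing in for any of the four change-rate variables:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

series = np.random.default_rng(1).normal(size=171)  # placeholder change-rate series

# 'n' = no deterministic terms, 'c' = intercept, 'ct' = trend and intercept;
# autolag='BIC' mirrors the SIC-based lag selection noted under Table 3.
for regression in ("n", "c", "ct"):
    stat, pvalue, *_ = adfuller(series, regression=regression, autolag="BIC")
    print(f"{regression}: statistic={stat:.4f}, p-value={pvalue:.4f}")
```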
## 5 Empirical Results
To improve our understanding of the heterogeneous impact of oil shocks on Chinese stock returns, we investigate not only the effects of oil price shocks on stock market returns but also the impact of oil price uncertainty on stock market returns. We use a quantile impulse response function to capture the asymmetric effects of oil market shocks on the Chinese stock market.
### Impact of Oil Price Shocks on Stock Market Returns
Following Kilian and Park [12], we examine the reaction of the Chinese stock market to oil supply shocks, aggregate demand shocks and oil-specific demand shocks. Moreover, by using a quantile structural VAR model to calculate the quantile impulse response function, we can capture the impacts of oil price shocks under different oil and stock market states.
Table 2: Summary statistics of oil and stock markets.

| **Variables** | \(\Delta prod\) | \(\Delta rea\) | \(\Delta rpo\) | \(\Delta rps\) |
|---|---|---|---|---|
| Min | -2.4254 | -40.8204 | -33.7239 | -27.9749 |
| Max | 2.3601 | 36.2818 | 17.5295 | 22.8623 |
| Median | 0.0967 | 0.6412 | 1.2439 | 0.5883 |
| Mean | 0.1005 | 0.1651 | 0.1596 | 0.1884 |
| Std. dev | 0.7415 | 9.2117 | 8.2394 | 8.2547 |
| Skewness | -0.443 | -0.6207 | -1.2566 | -0.4209 |
| Kurtosis | 3.6462 | 6.9927 | 5.7796 | 4.0909 |
| Shapiro-Wilk test | 0.9904 (0.3122) | 0.9658 (0.0003) | 0.9199 (0.0000) | 0.9760 (0.0048) |

Note: The \(p\)-values of the Shapiro-Wilk test statistics are reported in parentheses.
Table 3: ADF unit root tests for the VAR model.

| **Variables** | **None** | **With Intercept** | **With Trend and Intercept** |
|---|---|---|---|
| \(\Delta prod\) | -11.6683 (0.0000) | -12.0333 (0.0000) | -11.9969 (0.0000) |
| \(\Delta rea\) | -9.5033 (0.0000) | -9.5682 (0.0000) | -9.5118 (0.0000) |
| \(\Delta rpo\) | -7.7154 (0.0000) | -7.6991 (0.0000) | -7.8854 (0.0000) |
| \(\Delta rps\) | -11.4368 (0.0000) | -11.4099 (0.0000) | -11.3815 (0.0000) |

Note: The optimal lag length for the tests was determined using the Schwarz Information Criterion (SIC). The \(p\)-values are reported in parentheses.
These quantile impulse responses help us to obtain a more complete picture of the factors affecting stock market returns. In each quantile VAR model, the optimal order of the VAR(p) process is chosen by Akaike's information criterion.
#### 5.1.1 Asymmetric Effects over Bust and Boom Market States
This section assumes that the oil market and stock market are driven by one business state. Thus, the variables in this part of the analysis are estimated at the same conditional quantile, i.e., \(\tau_{1}=\tau_{2}=\tau_{3}=\tau_{4}\). For conciseness, we present the baseline results for just five quantiles \(\tau=\{0.05,\ 0.1,\ 0.5,\ 0.9,\ 0.95\}\). These are typical quantiles of the distribution, and the general results are robust over other possible choices of quantiles. When \(\tau=0.05\) and \(\tau=0.1\), we evaluate the shock effects of the oil market on stock market returns for a business in a bust. If \(\tau=0.5\), we estimate the shock effects at the median of stock returns, which is close to its mean. When \(\tau=0.9\) and \(\tau=0.95\), the cyclical component of the oil and stock markets is predicted to be booming. We present these quantiles for readability and to keep the focus on the topic.
Figure 2 shows the responses of the Chinese stock market to oil market structural shocks under different oil and stock market states. As shown in Panels A and B, at the average state of business, global oil supply shocks and demand shocks have no statistically significant effects on the Chinese stock market. This result is consistent with the previous empirical findings of Cong et al. (2008), who found that oil price shocks did not have statistically significant impacts on the real returns of the Chinese stock market composite index. The following reasons may explain this phenomenon. The Chinese financial market is strictly supervised and lacks transparency. Because the shares held by the state and legal entities are not allowed to be traded, Chinese stock returns only reflect the demand of domestic investors. Therefore, as a result of this incomplete market, Chinese stock market returns do not respond significantly to global-demand oil price shocks. The stock-specific shock, however, has a significant positive effect on the Chinese stock market, which implies a positive effect of the deregulations.
Panels C and D in Figure 2 present the impact of the oil market structural shocks on Chinese stock market returns during a bust phase. Global oil supply shocks, aggregate demand shocks and oil-specific demand shocks have negative, statistically significant effects on Chinese stock market returns after period 3. China is a net oil-importing country: an increase in oil prices may lead to lower real consumption and higher production costs, all of which may constrain the economy and limit the number and types of investment opportunities available, ultimately decreasing stock market returns. Especially in a bust business phase, the negative shock effects are more significant. A plausible explanation is that companies are more sensitive to the rising cost of production during a business bust. In addition, stock markets are more strongly correlated with various classes of assets when stock markets are bearish. Moreover, the negative response of stock market returns is persistent, as Panel B shows, which implies that an increase in the oil price derived from oil supply or oil demand will cause stock market returns to decrease persistently.
During a boom phase, as shown in Panels E and F in Figure 2, the aggregate demand shock has a positive, statistically significant effect. This may seem surprising because, as a net oil-importing country, the relationship between China's stock market returns and oil price shocks should be negative. However, during an economic boom, the crude oil price and stock prices increase simultaneously. In particular, even though a rise in the oil price increases production costs, the negative shock effects are weakened by the special adjustment mechanism of domestic oil prices. Moreover, a positive demand shock should lead to higher oil prices, which has two effects. First, an increase in a firm's costs depresses profits. Second, higher sales of Chinese companies' products overseas have a positive impact on profits and hence firm value; thus, the cost impact could be outweighed by increased revenues. Therefore, we find a positive relationship between an increase in the oil price derived from aggregate demand and Chinese stock market returns.
Figure 2: Impulse response of Chinese stock returns to structural shocks under different oil and stock market states for the full sample period.
#### 5.1.2 Asymmetric Effects in Bearish and Bullish Stock Markets
In this section, we analyse the asymmetric effects of oil supply and demand shocks on bearish and bullish stock markets during different oil market states. For the business state, we use quantile \(\tau=0.2\) to represent an oil market bust and \(\tau=0.8\) to denote an oil market boom. Put differently, the vector \(\tau\) is assumed to be \((0.2,\ 0.2,\ 0.2,\ \tau_{4})^{\prime}\) during the bust phase and \((0.8,\ 0.8,\ 0.8,\ \tau_{4})^{\prime}\) during the boom phase. \(\tau_{4}\) is then allowed to vary, to represent the various stock market states. For conciseness, we present the baseline results for the \(\tau_{4}=\{0.1,\ 0.9\}\) quantiles of the stock market return distribution: \(\tau_{4}=0.1\) represents a bearish market and \(\tau_{4}=0.9\) denotes a bullish market.
Figure 3 depicts the responses to oil market shocks for an oil market in a bust period, in which Panel A illustrates the response when the stock market is in a good state (bullish market) and Panel B shows the case when the stock market is in a bad state (bearish market). This allows us to compare the effects of an oil market shock when the stock market is at its best and worst states. In a bearish market, the impacts of oil price shocks on stock market returns are similar to the situation where the oil and stock markets are driven by a bust oil market state, in which the effects of oil supply and demand shocks on stock market returns are negative and statistically significant. In a bullish market, the aggregate demand shock has a statistically significant positive effect. The speculative behavior of investors in the stock market provides a possible explanation for this phenomenon. Although the global economy is in a bust phase, investors remain bullish on Chinese economic prospects. The increase in the oil price derived from aggregate demand shocks is seen by investors as a sign of the Chinese economy's recovery. As a result, they buy shares in the Chinese stock market and push up stock prices. These shock effects are key ingredients of the development and final burst of speculative bubbles.
Panel A in Figure 4 illustrates the responses of the bearish stock market to oil price shocks for an oil market in a boom period. There is no statistically significant response of Chinese stock market returns to the oil supply shock, aggregate demand shock or oil-specific demand shock. However, as shown in Panel B in Figure 4, the increase in the oil price derived from the aggregate demand shock has positive and statistically significant effects on the Chinese stock market, which is similar to the condition in which the oil and stock markets are driven by a boom business state.
Furthermore, because our study period includes the 2007-2010 financial crisis, which can have a significant impact on world oil and financial markets [51; 52], it is opportune to examine the potential impact of the financial crisis on the interdependencies between real oil prices and stock prices. To do so, we divide our study period into two sub-periods, referred to as a pre-crisis period (October 2001 to August 2008) and a post-crisis period (September 2008 to December 2015). September 2008 is chosen as the breakpoint because oil prices plunged 12% on 23 September 2008 and the financial crisis erupted on 15 September 2008 [53]. We use the quantile impulse response method to estimate the models separately for the two sub-periods and find a significant increase in the effects of oil-specific demand shocks on stock prices over the post-crisis period when compared to the pre-crisis period. This result supports the findings of previous studies [30; 54]. However, the pattern of oil price shocks on stock markets over the post-crisis period does not differ from the pre-crisis period or the whole sample period.
Figure 3: Impulse response of Chinese stock returns to structural shocks at bust oil market state for the full sample period.
Figure 4: Impulse response of Chinese stock returns to structural shocks at boom oil market state for the full sample period.
### 5.2. Asymmetric Effects of Oil Price Uncertainty Shocks on Stock Market Returns
In this section, we explore the asymmetric effects of oil price uncertainty shocks on Chinese stock market returns. Because oil price fluctuations can lead to cyclical fluctuations in investments [55; 56], it is essential to investigate the effects of oil price uncertainty shocks on the stock market.
Following Wang et al. [40], we first decompose the oil price shock into three different supply and demand shocks and then square these shocks to measure the three corresponding oil price uncertainties: oil supply uncertainty (\(opu\)), aggregate demand uncertainty (\(adu\)) and precautionary demand uncertainty (\(pdu\)). After quantifying the oil price uncertainties, the quantile structural VAR model is used to examine the impacts of the three uncertainties on bearish and bullish stock market returns, where we replace the variable vector \(y_{t}\) by \(y_{t}=(opu_{t},\ adu_{t},\ pdu_{t},\ \Delta rps_{t})\) in Equation (3). In the quantile structural VAR model, the quantile vector is \(\tau=(0.5,\ 0.5,\ 0.5,\ \tau_{4})\), which means that we measure the three oil price uncertainties at the medians of their conditional distributions and evaluate the effects on stock market returns at various quantiles of its conditional distribution. In this way, we estimate whether the effects of oil uncertainty shocks differ between relatively low and high stock market returns. Although, in principle, the estimation can be performed at any quantile of stock market returns, we present the results for just three quantiles, i.e., \(\tau_{4}=\{0.1,\ 0.5,\ 0.9\}\).
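The construction of the uncertainty proxies is simple enough to sketch directly; the column names below are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of the uncertainty proxies described above: the recovered
# structural shocks are squared to give oil supply uncertainty (opu),
# aggregate demand uncertainty (adu) and precautionary demand uncertainty
# (pdu); column names are illustrative assumptions.
import pandas as pd

def uncertainty_measures(shocks: pd.DataFrame, d_rps: pd.Series) -> pd.DataFrame:
    """shocks holds the three structural oil-market shocks per month."""
    return pd.DataFrame({
        "opu": shocks["supply_shock"] ** 2,
        "adu": shocks["demand_shock"] ** 2,
        "pdu": shocks["oil_specific_shock"] ** 2,
        "d_rps": d_rps,          # real stock returns enter the VAR unchanged
    })

# With y_t = (opu, adu, pdu, d_rps) and tau = (0.5, 0.5, 0.5, tau4), the
# three uncertainty equations are held at their medians while tau4 is varied
# over {0.1, 0.5, 0.9}.
```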
Figure 5 illustrates the effect of oil price uncertainty on the Chinese stock market. Panel A shows the general stock market situation, in which the three oil price uncertainties do not have statistically significant shock effects on Chinese stock market returns; they nevertheless still exert negative effects. A plausible explanation is that business uncertainty depresses world economic development, resulting in a bust oil market. For an oil-importing country, a decrease in the oil price reduces production costs, which in turn offsets the negative impact induced by oil price uncertainty. On the other hand, the special adjustment mechanism in Chinese oil prices weakens the negative effects of oil price uncertainty shocks on the stock market. Panels B and C in Figure 5 reveal the impact of oil price uncertainty on bearish and bullish stock markets, respectively. There are clearly opposite shock effects in bearish and bullish stock markets. In a bearish market, the three oil price uncertainties have positive shock effects on stock market returns, implying that oil price uncertainty raises stock prices. On the contrary, in a bullish market, the impacts of oil price uncertainties are significantly negative in the month when the shock starts, although the effect loses its significance as time elapses. We argue, in light of the evidence, that this finding is sensible. In the stock market, stock prices are the present discounted value of the future net earnings of a firm, so both the current and the expected future impacts of an oil price uncertainty shock should be absorbed quickly into stock market prices and returns. In a bearish market, firms and consumers interpret unexpected rises in oil price uncertainty as an improvement in the economic outlook. The oil price uncertainty may trigger higher volatility in the stock market, which provokes speculators to buy shares and, as a result, boosts stock prices. However, in a bullish market, oil price uncertainty causes panic among investors, who worry about their assets being reduced. Consequently, they sell their shares and consolidate the profits obtained from the bullish market. Therefore, we observe opposite shock effects of oil price uncertainty during bearish and bullish stock markets.
Figure 5: Impulse response of Chinese stock returns to oil price uncertainty shocks at different stock market states.
## 6 Conclusions
This paper sheds light on the asymmetric effects of global oil price shocks on the Chinese stock market. We argue that the conditional quantiles of oil price distribution may be related to the different oil market states. Building on this, we use a quantile impulse response approach to uncover the asymmetric effects of oil price shocks in various oil and stock market states.
The empirical results show that the impacts of oil price shocks on Chinese stock market returns depend on whether the oil and stock markets are in a bust or a boom, as well as on whether the shock is caused by supply or demand. During a bust phase, oil supply and demand shocks have negative, statistically significant effects on Chinese stock market returns, which is in line with theoretical considerations. This can be explained by the fact that China is a net oil-importing country, so an increase in the oil price raises companies' production costs and thereby depresses stock market returns. However, during a boom period, only the aggregate demand shock has positive, statistically significant effects on Chinese stock market returns. This is easy to explain. On one hand, the growth in global economic activity simultaneously stimulates increases in crude oil and stock prices. On the other hand, the special adjustment mechanism in Chinese oil prices may greatly weaken the negative effects. Moreover, when the oil market is in a bust period while the stock market is bullish, the aggregate demand shock shows positive, significant effects on Chinese stock market returns. This may be argued to constitute empirical evidence for the phenomenon of speculative bubbles, as unexpected changes to oil prices lead to positive growth in the stock market.
In addition, we find that the impact of oil price uncertainty on the Chinese stock market varies across the stock return distribution. In a bearish market, the impact of oil price uncertainty is positive, while in a bullish market it is negative. This indicates that investors respond differently to uncertainty shocks in different stock market states.
Our results suggest several important implications. First, they show that the impact of oil price shocks on the Chinese stock market is heterogeneous, which provides a scientific basis for clarifying the dynamics of stock prices and helps explain some counterintuitive stock market behavior observed in China. Second, our results uncover stock market risks related to oil price shocks, which can offer valuable advice to investors and decision makers. Investors should hold oil stocks to diversify their portfolios and pay close attention to the asymmetric effects of oil price shocks on the stock market in order to consider long and short positions in bearish and bullish markets. Decision makers should adopt diverse strategies during different oil and stock market states to hedge the risk of oil price shocks, thereby lessening upheaval in the financial markets caused by significant oil price shocks.
Acknowledgments: This research is supported by the National Natural Science Foundation of China under grants No. 71171075, No. 71521061 and No. 71431008.
Author Contributions: Huiming Zhu designed the study and was involved in drafting the initial manuscript. Xianfang Su designed the main stages of the research, acquired and analysed the data, and drafted the initial manuscript. Yawei Guo and Yinghua Ren revised the manuscript critically for important content. All authors read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
## References
* (1) Hamilton, J.D. Oil and the macroeconomy since World War II. _J. Political Econ._**1983**, _91_, 228-248. [CrossRef]
* (2) Hamilton, J.D. This is what happened to the oil price-macroeconomy relationship. _J. Monet. Econ._**1996**, _38_, 215-220. [CrossRef]
* (3) Hamilton, J.D. What is an oil shock? _J. Econ._**2003**, _113_, 363-398. [CrossRef]
* (4) Kilian, L. Exogenous oil supply shocks: How big are they and how much do they matter for the US economy? _Rev. Econ. Stat._**2008**, _90_, 216-240. [CrossRef]
* (5) Kilian, L. Not all oil price shocks are alike: Disentangling demand and supply shocks in the crude oil market. _Am. Econ. Rev._**2009**, _99_, 1053-1069. [CrossRef]
* (6) Jones, D.W.; Leiby, P.N.; Paik, I.K. Oil price shocks and the macroeconomy: What has been learned since 1996. _Energy J._**2004**, _25_, 1-32. [CrossRef]
* (7) Jones, C.M.; Kaul, G. Oil and the stock markets. _J. Finance_**1996**, _51_, 463-491. [CrossRef]
* (8) Sadorsky, P. Oil price shocks and stock market activity. _Energy Econ._**1999**, _21_, 449-469. [CrossRef]
* (9) Sadorsky, P. Correlations and volatility spillovers between oil prices and the stock prices of clean energy and technology companies. _Energy Econ._**2012**, _34_, 248-255. [CrossRef]
* (10) El-Sharif, I.; Brown, D.; Burton, B.; Nixon, B.; Russell, A. Evidence on the nature and extent of the relationship between oil prices and equity values in the UK. _Energy Econ._**2005**, _27_, 819-830. [CrossRef]
* (11) Park, J.; Ratti, R. Oil price shocks and stock markets in the US and 13 European countries. _Energy Econ._**2008**, _30_, 2587-2608. [CrossRef]
* (12) Kilian, L.; Park, C. The impact of oil price shocks on the US stock market. _Int. Econ. Rev._**2009**, _50_, 1267-1287. [CrossRef]
* (13) Chen, S.S. Do higher oil prices push the stock market into bear territory? _Energy Econ._**2010**, _32_, 490-495. [CrossRef]
* (14) Elder, J.; Serletis, A. Oil price uncertainty. _J. Money Credit Bank._**2010**, _42_, 1137-1159. [CrossRef]
* (15) Masih, R.; Peters, S.; De Mello, L. Oil price volatility and stock price fluctuations in an emerging market: Evidence from South Korea. _Energy Econ._**2011**, _33_, 975-986. [CrossRef]
* (16) Filis, G.; Degiannakis, S.; Floros, C. Dynamic correlation between stock market and oil prices: The case of oil-importing and oil-exporting countries. _Int. Rev. Financ. Anal._**2011**, _20_, 152-164. [CrossRef]
* (17) Basher, S.; Haug, A.; Sadorsky, P. Oil prices, exchange rates and emerging stock markets. _Energy Econ._**2012**, _34_, 227-240. [CrossRef]
* (18) Arouri, M.E.H.; Jouini, J.; Nguyen, D.K. On the impacts of oil price fluctuations on European equity markets: Volatility spillover and hedging effectiveness. _Energy Econ._**2012**, _34_, 611-617. [CrossRef]
* (19) Aloui, R.; Hammoudeh, S.; Nguyen, D.K. A time-varying copula approach to oil and stock market dependence: The case of transition economies. _Energy Econ._**2013**, _39_, 208-221. [CrossRef]
* (20) Broadstock, D.C.; Filis, G. Oil price shocks and stock market returns: New evidence from the United States and China. _J. Int. Financ. Mark. Inst. Money_**2014**, _33_, 417-433. [CrossRef]
* (21) Olson, E.; Vivian, A.; Wohar, M. The relationship between energy and equity markets: Evidence from volatility impulse response functions. _Energy Econ._**2014**, _43_, 297-305. [CrossRef]
* (22) Energy Information Administration. Annual Energy Outlook 2015. 2016. Available online: [http://www.eia.gov/forecasts/aeo/](http://www.eia.gov/forecasts/aeo/) (accessed on 15 March 2016).
* (23) Cong, R.; Wei, Y.; Jiao, J.; Fan, Y. Relationships between oil price shocks and stock market: An empirical analysis from China. _Energy Policy_**2008**, _36_, 3544-3553. [CrossRef]
* (24) Broadstock, D.C.; Cao, H.; Zhang, D. Oil shocks and their impact on energy related stocks in China. _Energy Econ._**2012**, _34_, 1888-1895. [CrossRef]
* (25) Nguyen, C.C.; Bhatti, M.I. Copula model dependency between oil prices and stock markets: Evidence from China and Vietnam. _J. Int. Financ. Mark. Inst. Money_**2012**, _22_, 758-773. [CrossRef]
* (26) Cunado, J.; de Gracia, F.P. Oil price shocks and stock market returns: Evidence for some European countries. _Energy Econ._**2014**, _42_, 365-377. [CrossRef]
* (27) Caporale, G.M.; Ali, F.M.; Spagnolo, N. Oil price uncertainty and sectoral stock returns in China: A time-varying approach. _China Econ. Rev._**2014**, _34_, 311-321. [CrossRef]
* (28) Chen, Q.; Lv, X. The extreme-value dependence between the crude oil price and Chinese stock markets. _Int. Rev. Econ. Finance_**2015**, _39_, 121-132. [CrossRef]
* (29) Li, Q.M.; Cheng, K.; Yang, X.G. Response pattern of stock returns to international oil price shocks: From the perspective of China's oil industrial chain. _Appl. Energy_**2016**. in press. [CrossRef]
* (30) Zhu, H.M.; Li, R.; Li, S. Modelling dynamic dependence between crude oil prices and Asia-Pacific stock market returns. _Int. Rev. Econ. Finance_**2014**, _29_, 208-223. [CrossRef]
* (31) British Petroleum Public Limited Company. Statistical Review of World Energy 2015. 2016. Available online: [http://www.bp.com/en/global/corporate/energy-economics/](http://www.bp.com/en/global/corporate/energy-economics/) (accessed on 15 March 2016).
* (32) General Administration of Customs of the People's Republic of China. Customs Statistics. 2016. Available online: [http://www.customs.gov.cn/tabid/](http://www.customs.gov.cn/tabid/) (accessed on 15 March 2016).
* (33) Cecchetti, S.; Li, H. _Measuring the Impact of Asset Price Booms Using Quantile Vector Autoregressions_; Working Paper; Department of Economics, Brandeis University: Waltham, MA, USA, 2008.
* (34) Schuler, Y. _Asymmetric Effects of Uncertainty over the Business Cycle: A Quantile Structural Vector Autoregressive Approach_; Working Paper; University of Konstanz: Konstanz, Germany, 2014.
* (35) Linnemann, L.; Winkler, R. _Estimating Asymmetric Effects of Fiscal Policy Using Quantile Regression Methods_; Working Paper; Department of Economics, TU Dortmund University: Dortmund, Germany, 2015.
* (36) Lee, D.; Kim, T. _Impulse Response Analysis in Conditional Quantile Models and an Application to Monetary Policy_; Working Paper; Department of economics, University of Connecticut: Storrs, CT, USA, 2015.
* (37) Huang, R.; Masulis, R.; Stoll, H. Energy shocks and financial markets. _J. Futures Mark._**1996**, _16_, 1-27. [CrossRef]
* (38) Papapetrou, E. Oil price shocks, stock market, economic activity and employment in Greece. _Energy Econ._**2001**, _23_, 511-532. [CrossRef]
* (39) Apergis, N.; Miller, S.M. Do structural oil-market shocks affect stock prices? _Energy Econ._**2009**, _31_, 569-575. [CrossRef]
* (40) Wang, Y.; Wu, C.; Yang, W. Oil price shocks and stock market activities: Evidence from oil-importing and oil-exporting countries. _J. Comp. Econ._**2013**, _41_, 1220-1239. [CrossRef]
* (41) Zhang, C.; Chen, X. The impact of global oil price shocks on Chinese stock returns: Evidence from the ARJI (-h\\({}_{t}\\))-EGARCH model. _Energy_**2011**, _36_, 6627-6633. [CrossRef]
* (42) Fang, C.R.; You, S.Y. The impact of oil price shocks on the large emerging countries' stock prices: Evidence from China, India and Russia. _Int. Rev. Econ. Finance_**2014**, _29_, 330-338. [CrossRef]
* (43) Zhu, H.; Guo, Y.; You, W.; Xu, Y. The heterogeneity dependence between crude oil price changes and industry stock market returns in China: Evidence from a quantile regression approach. _Energy Econ._**2016**, in press. [CrossRef]
* (44) Sim, N.; Zhou, H. Oil prices, US stock return, and the dependence between their quantiles. _J. Bank. Finance_**2015**, _55_, 1-8. [CrossRef]
* (45) Reboredo, J.; Ugolini, A. Quantile dependence of oil price movements and stock returns. _Energy Econ._**2016**, _54_, 33-49. [CrossRef]
* (46) Koenker, R. _Quantile Regression_; Econometric Society Monographs No. 38; Cambridge University Press: New York, NY, USA, 2005.
* (47) Koenker, R.W.; d'Orey, V. Algorithm AS 229: Computing regression quantiles. _Appl. Stat._**1987**, _36_, 383-393. [CrossRef]
* (48) Jung, H.; Park, C. Stock market reaction to oil price shocks: A comparison between an oil-exporting economy and an oil-importing economy. _J. Econ. Theory Econ._**2011**, _22_, 1-29.
* (49) Granger, C.; Newbold, P. Spurious regressions in econometrics. _J. Econ._**1974**, \\(2\\), 111-120. [CrossRef]
* (50) Longin, F.; Solnik, B. Extreme correlation of international equity markets. _J. Finance_**2001**, _56_, 649-676. [CrossRef]
* (51) Fan, Y.; Xu, H. What has driven oil prices since 2000? A structural change perspective. _Energy Econ._**2011**, _33_, 1082-1094. [CrossRef]
* (52) Wen, X.; Wei, Y.; Huang, D. Measuring contagion between energy market and stock market during financial crisis: A copula approach. _Energy Econ._**2012**, _34_, 1435-1446. [CrossRef]
* (53) Ewing, B.T.; Malik, F. Volatility transmission between gold and oil futures under structural breaks. _Int. Rev. Econ. Finance_**2013**, _25_, 113-121. [CrossRef]
* (54) Tsai, C.L. How do US stock returns respond differently to oil price shocks pre-crisis, within the financial crisis, and post-crisis? _Energy Econ._**2015**, _50_, 47-62. [CrossRef]
* (55) Henry, C. Investment decisions under uncertainty: The irreversibility effects. _Am. Econ. Rev._**1974**, _64_, 1006-1012.
* (56) Brennan, M.; Schwartz, E. Evaluating natural resource investment. _J. Bus._**1985**, _58_, 1135-1157. [CrossRef] | This paper uses a quantile impulse response approach to investigate the impact of oil price shocks on Chinese stock returns. This process allows us to uncover asymmetric effects of oil price shocks on stock market returns by taking into account the different quantiles of oil price shocks. Our results show that the responses of Chinese stock market returns to oil price shocks differ greatly, depending on whether the oil and stock market is in a bust or boom state and whether the shock is driven by demand or supply. The impacts of oil price shocks on Chinese stock returns present asymmetric features. In particular during a bust phase, oil supply and demand shocks significantly depress stock market returns, while during a boom period, the aggregate demand shock enhances stock market returns. These results suggest some important implications for investors and decision makers.
Keywords: Chinese stock market; oil price shock; asymmetric effects; quantile impulse response; quantile vector autoregression
Ziliang Miao\\({}^{1}\\), Buwei He\\({}^{1}\\), Wenya Xie\\({}^{1}\\), Wenquan Zhao\\({}^{1}\\), Xiao Huang\\({}^{1}\\), Jian Bai\\({}^{2}\\), and Xiaoping Hong\\({}^{1}\\)
Manuscript received: Nov. 21, 2022; Revised Jan. 19, 2023; Accepted Jan. 28, 2023. This paper was recommended for publication by Editor Javier Civera upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by Shenzhen Science and Technology Project (JSGG20211209095803004, JSGG2021013300401004) and SUSTech startup fund. (Ziliang Miao and Buwei He contributed equally to this work; Corresponding author: Xiaoping Hong) \({}^{1}\)These authors are with School of System Design and Intelligent Manufacturing (SDIM), Southern University of Science and Technology (SUSTech), China ([email protected], [email protected], [email protected]). \({}^{2}\)J. Bai is with State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, China. Digital Object Identifier (DOI): see top of this page.
## I Introduction
Three-dimensional scanning (obtaining the raw points) and mapping (registering or stitching the points into a point cloud map) are becoming increasingly important in robotics [2], digital construction [3], and virtual reality [4], where digitization of the physical 3D space could provide tremendous insights in modeling, planning, management, optimization, and quality assurance. Photogrammetry has been developed to capture the 3D world; however, its application has been largely limited to aviation settings where accurate GPS RTK signals are required. Recently, the need for large-scale mapping of building environments has been rising, mainly due to the requirements of Building Information Modeling (BIM) systems. Thanks to the availability of emerging 3D robotic LiDAR sensors [5, 6], Mobile Laser Scanner (MLS) systems are increasingly adopted [7] (Fig. 1a, #3 and #4), where point clouds from these sensors could be registered to the global frame through sensor motion estimation (i.e., odometry) at each instance. However, due to the movement nature, such approaches largely depend on estimations of temporal characteristics such as translation and rotation, or spatial characteristics such as sensor FoV and landmark coverage. The results vary from scan to scan with no guarantee of precision. Hence, a more robust and precise method is desired.
On the other hand, the traditional Terrestrial Laser Scanner (TLS) has been employed in many precision-stringent applications (Fig. 1a, #1 and #2). TLS-based stationary mapping is usually inefficient (due to the accurate but slow laser rotation) but could provide precise results. Viewpoints (also known as stationary scanning locations) need to be carefully planned to ensure spatial coverage and enough overlapping regions between adjacent viewpoints for accurate point cloud stitching [8], while on the other hand being as few as possible to reduce scanning time and cost. The planning of viewpoints largely relies on the overall layout of the scene, which has so far been done by human experience [9].
Combining the strengths of both worlds would be ideal for large-scale 3D mapping applications. As shown in Fig. 1b,
Fig. 1: 3D mapping systems: (a) the current TLS (#1 FARO Focus Premium, #2 LEICA BLK360) and MLS (#3 LEICA BLK2GO, #4 NavVis VLX) systems; (b) the proposed hybrid mapping robotic system.
the proposed hybrid mapping robot is developed carrying a gimbal mount and a novel sensor suite consisting of an omnidirectional non-repetitive Livox Mid-360 LiDAR1 and an omnidirectional camera. The sensors' FoV and the non-repetitive scanning nature are shown in Fig. 2a. In the odometry-based mapping mode, the sensor suite is kept horizontal by fixing the gimbal mount to coarsely and efficiently map the entire space with the mobile platform. Based on the coarse map, a few viewpoints are planned for the stationary mapping of targeted ROIs. In the stationary mapping mode, the robot will navigate and stay still at each viewpoint, performing \\(360^{\\circ}\\times 300^{\\circ}\\) scanning by traversing the vertical FoV through the gimbal mount. These precise scans are registered with each other and then stitched with the pre-generated coarse map forming a global map with fine ROIs.
Footnote 1: The authors gratefully acknowledge Livox Technology for the equipment support.
The main contributions of this work are as follows:
1. The first hybrid 3D mapping robot system that integrates odometry-based and stationary mapping modes is proposed. The consistency of point clouds in two modes can be guaranteed with the single omnidirectional non-repetitive Livox Mid-360 LiDAR.
2. An omnidirectional camera is introduced in the proposed system to complement the omnidirectional LiDAR. A novel automatic targetless co-calibration method is proposed to simultaneously calibrate the intrinsic parameters and the extrinsic parameters.
3. An automated coarse-to-fine hybrid mapping workflow is demonstrated, including odometry-based coarse mapping in the global environment, planning for the viewpoints in the ROIs, and finer stationary mapping at viewpoints. The entire project is open-sourced on GitHub2 to aid the development of this emerging field. Footnote 2: [https://github.com/ZilianoMiao/Hybrid_Mapping_Cocalibration.git](https://github.com/ZilianoMiao/Hybrid_Mapping_Cocalibration.git)
## II Related Works
### _Mapping Solutions_
3D mapping solutions are of great interest in many emerging fields [3]. TLS-based and MLS-based approaches are commonly adopted.
The traditional TLS-based approach uses a heavy-duty single-laser scanner and traverses the entire FoV through step-wise rotations about the horizontal and vertical axes. It provides sufficiently dense points with good precision. However, this method is slow and laborious. It has to be repeated at many viewpoints, which need to be chosen wisely, because a lack of viewpoints will cause missing information in the desired ROI, while an excess of viewpoints will lead to longer scanning hours and poorer efficiency. Currently, viewpoint planning relies on human intuition or experience, making it challenging to plan effectively in large and complex working environments such as construction scenes [9].
On the contrary, the MLS-based approach provides real-time scanning and mapping results as the LiDAR moves. The current MLS devices are classified by their usage configurations, such as handheld (Fig. 1a, #3), backpack (Fig. 1a, #4), and trolley. Most of these mobile systems rely on conventional LiDARs (16, 32, or 64 lines) and construct the 3D map by registering the point cloud with LiDAR odometry or LiDAR-IMU odometry. Such mobile systems greatly speed up the mapping process without planning for viewpoints. However, it cannot replace the TLS-based approaches due to insufficient mapping precision and sparse point clouds [3]. The repetitive scanning nature of mechanical LiDAR is unsuitable for stationary scanning due to limited FoV coverage (20% coverage for 32-line LiDAR). Therefore, the indispensable motion for more coverage will cause errors in pose estimation, which are accumulated throughout the process, limiting the usage in high-precision applications.
Both TLS-based and MLS-based approaches have their unique advantages and drawbacks. It is desirable to devise a mechanism that combines both modes. For example, a combination of TLS and MLS has been used to solve the registration problem between non-overlapping spaces [8], and TLS scans have been used as references for MLS mapping registration to achieve low mapping errors [10]. Moreover, MLS has also been used to provide a 3D map for solving the viewpoint planning problem of TLS [9]. However, all these methods are based on heterogeneous sensors for the different modes, with different synchronization, data structures, and protocols, which makes it difficult to construct a one-stop mapping robot with a streamlined and automated workflow.
The unique non-repetitive scanning nature of the Livox LiDAR combines instantaneous high density at short time intervals for odometry (with an effective point density comparable to a 32-line LiDAR within 0.1 seconds) and image-level resolution at relatively long time intervals for scanning (within 3 seconds, as shown in Fig. 2b), which makes it surprisingly suitable for such a hybrid working mechanism. This feature provides sufficiently good performance in odometry scenarios [11] and dense FoV coverage for image-like feature processing [6, 12, 13]. In this paper, the two working modes are integrated into the same robot, ensuring overall mapping efficiency and precision with an automated coarse-to-fine hybrid mapping workflow.
### _Calibration Methods_
In addition to LiDAR, cameras are usually required in 3D mapping systems to give an overview of the mapped environment [14]. Cameras could provide high-quality geometric, color, and texture information [15], which enables further modeling and rendering [16] of the point clouds and permits tasks in object detection, segmentation, and classification [17]. Meanwhile, for autonomous navigation, the camera is also vital to visual-LiDAR odometry through sensor fusion [4]. All these functions would rely on the accurate calibration of the intrinsic parameters of the camera and extrinsic parameters between the cameras and LiDAR [15].
Traditionally, multiple cameras are required to complement the omnidirectional FoV of the LiDAR. This work employs an omnidirectional camera instead of the traditional multi-camera setup to avoid bulky construction, high cost, shutter synchronization, and cascaded extrinsic calibrations. The intrinsic and extrinsic parameters of this novel omnidirectional sensor suite are essentially needed.
The intrinsic parameters of the omnidirectional camera must be well calibrated since these types usually possess much larger and more complex distortions than pin-hole cameras [18]. In [18, 19, 20], higher-order polynomial-based intrinsic models are introduced with many degrees of freedom to obtain satisfactory results. A popular OcamCalib toolbox based on the checkerboard is provided [19]. These methods could be susceptible to over-fitting with high-order polynomials and often require evenly distributed artificial targets and dense features across the entire space. Typically, these calibration processes are manual and could lead to tedious procedures with a large margin of error. Additionally, the omnidirectional camera in our work is constructed with a refractive-reflective geometry to capture a ring-like FoV beyond \\(180^{\\circ}\\). This construction makes intrinsic calibration even more difficult. An accurate, automatic, and targetless calibration method is desired.
The extrinsic calibration method between the omnidirectional camera and LiDAR has only been explored in [21] using edge correspondence to match point clouds and images. The bearing angle images highlight the edge features, which are manually positioned. Targetless extrinsic calibration methods for monocular cameras and LiDAR have been developed recently. With the non-repetitive LiDARs, CamVox [12] could project the image-like LiDAR point clouds onto the camera image plane and extract edge pixels using the grayscale images based on reflectivity and depth. The method proposed in [13] uses voxels to extract the edge points in 3D space and classifies the edges based on depth continuity. Both methods work well with conventional pin-hole cameras and need to be extended toward the omnidirectional cameras with significantly larger distortions. An additional targetless extrinsic calibration method employing mutual information (MI) is also developed [22], which maximizes the intensity correlations of LiDAR and camera. However, the misrepresented information caused by lighting conditions, surface reflection properties, and spectral reflectance disagreement could result in worse calibration than the edge-based methods.
In the proposed targetless co-calibration method, the high-resolution dense point cloud of the non-repetitive scanning LiDAR gives abundant and ground-truth-level features, which eliminates the artificial targets and manual involvement and reduces the error caused by insufficient coverage and sparse features of the targets. With the co-calibration method, the intrinsic and extrinsic parameters are obtained simultaneously and can be re-calibrated fast and reliably in work scenes.
## III Proposed System
### _Co-calibrated Omnidirectional Sensor Suite_
The Livox Mid-360 LiDAR has a \\(360^{\\circ}\\times 55^{\\circ}\\) FoV and features a non-repetitive scanning pattern, with increasingly denser points over time (the coverage of FoV approaches 100%), as shown in Fig. 2b. The unique feature specifically benefits both odometry-based and stationary mapping modes. The omnidirectional camera provides color information of the surroundings and has a corresponding \\(360^{\\circ}\\times 70^{\\circ}\\) FoV (Fig. 2a). Both sensors are synchronized and are mounted on a two-axis gimbal (Fig. 1b) to extend the scanning FoV to \\(360^{\\circ}\\times 300^{\\circ}\\).
The co-calibration simultaneously obtains the intrinsic (camera) and extrinsic (camera-LiDAR) parameters, defined respectively as \(\mathbf{\Theta}\triangleq\left[u_{0},v_{0},c,d,e,a_{0},\ldots,a_{n}\right]^{ \mathrm{T}}\) and \(\mathbf{\Delta}\triangleq\left[\alpha,\beta,\gamma,t_{x},t_{y},t_{z}\right]^{ \mathrm{T}}\), which will be introduced later.
Fig. 3: Proposed co-calibration process. * The grayscale value indicates the average reflectivity of the projected LiDAR points within a pixel.
Fig. 2: Configuration of the sensors: (a) omnidirectional camera and Livox Mid-360 LiDAR, both on the gimbal mount; (b) point cloud accumulation over time due to the non-repetitive scanning nature of the Livox LiDAR.
With the unique benefit of the non-repetitive scanning LiDAR, an extremely dense point cloud is always available, which provides a 3D ground truth of the environment. This high-resolution point cloud could be projected onto the 2D image plane with pixel values from LiDAR reflectivity, from which clear edge features could be extracted. To align the edges from the LiDAR and the camera, the co-calibration iteratively maximizes the correspondence of projected LiDAR edge points with the omnidirectional camera edge pixels. Kernel Density Estimation (KDE) is employed to estimate the camera edge distribution with different distribution smoothness (by varying the bandwidth coefficient) to obtain the global optimum. The entire process of co-calibration can be divided into the following two steps (Fig. 3):
#### III-A1 Edge Extraction
Edge extractions are performed for both camera and LiDAR. For the camera, exposure fusion [23] is adopted to enhance the dynamic range of images to capture more details for low and high-brightness objects. Canny edge extraction [24] is performed on the enhanced image, with edge points \\(\\mathbb{Q}=[\\mathbf{q}_{1},\\mathbf{q}_{2}\\ldots,\\mathbf{q}_{n}]\\). For LiDAR, since the FoV is smaller, point clouds scanned from different pitch angles are stitched together. The stitching is performed by the generalized iterative closest point (GICP) algorithm [25] with the initial transformation given by the state of the gimbal. The stitched point cloud with reflectivity is then projected to an image plane with the azimuthal angle and elevation angle as the coordinates, generating a grayscale image by taking the average reflectivity of the projected LiDAR points within each pixel. The Canny edge extraction is performed on this grayscale image. Uniform sampling is performed in each stage to remove the non-uniform point distribution. The edge pixels are then identified in the original 3D point cloud \\(\\mathbb{P}=[^{L}\\mathbf{P}_{1},{}^{L}\\mathbf{P}_{2}\\ldots,{}^{L}\\mathbf{P}_{m}]\\).
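The LiDAR branch of this step can be sketched as follows; the image size and the Canny thresholds are illustrative assumptions, not the values of the released implementation.

```python
# A minimal sketch of the LiDAR edge extraction: project the stitched point
# cloud to an azimuth-elevation image whose pixel value is the mean
# reflectivity within the pixel, then run Canny on the result.
import numpy as np
import cv2

def reflectivity_image(points: np.ndarray, refl: np.ndarray,
                       width: int = 2048, height: int = 512) -> np.ndarray:
    """points: (n, 3) stitched LiDAR points; refl: (n,) reflectivity."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1) + 1e-9
    azimuth = np.arctan2(y, x)                      # [-pi, pi]
    elevation = np.arcsin(z / rng)
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    span = np.ptp(elevation) + 1e-9
    v = ((elevation - elevation.min()) / span * (height - 1)).astype(int)
    acc = np.zeros((height, width)); cnt = np.zeros((height, width))
    np.add.at(acc, (v, u), refl)                    # sum reflectivity per pixel
    np.add.at(cnt, (v, u), 1.0)
    img = acc / np.maximum(cnt, 1.0)                # mean reflectivity
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
    return img.astype(np.uint8)

# lidar_edges = cv2.Canny(reflectivity_image(pts, refl), 50, 150)
# Edge pixels are then mapped back to their 3D points to form the edge set P.
```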
#### III-A2 Iterative Optimization
The iterative optimization is performed in the omnidirectional image space. The LiDAR edge points are projected to the image coordinates through the following equations:
\\[{}^{C}\\mathbf{P} ={}_{L}^{C}\\mathbf{T}(^{L}\\mathbf{P};\\mathbf{\\Delta})={}_{L}^{C} \\mathbf{R}\\cdot{}^{L}\\mathbf{P}+{}_{L}^{C}\\mathbf{t},{}^{L}\\mathbf{P}\\in \\mathbb{P}, \\tag{1}\\] \\[\\mathbf{p} =\\mathbf{\\Pi}(^{C}\\mathbf{P};\\mathbf{\\Theta})=\\begin{bmatrix}c&d\\\\ e&1\\end{bmatrix}\\begin{bmatrix}r\\cos\\phi-u_{0}\\\\ r\\sin\\phi-v_{0}\\end{bmatrix},\\] (2) \\[r =\\mathbf{F}(\\theta;a_{0},\\ldots,a_{n})=a_{0}+a_{1}\\theta^{1}+ \\ldots+a_{n}\\theta^{n},\\] (3) \\[\\theta =\\arccos(\\frac{z}{\\sqrt{x^{2}+y^{2}+z^{2}}}),\\] (4) \\[\\phi =\\arccos(\\frac{x}{\\sqrt{x^{2}+y^{2}}}), \\tag{5}\\]
where \\({}^{C}\\mathbf{P}\\) and \\({}^{L}\\mathbf{P}\\) denote the 3D point coordinates in camera and LiDAR coordinate systems, respectively, and they are related through the extrinsic transformation \\({}_{L}^{C}\\mathbf{T}(^{L}\\mathbf{P};\\mathbf{\\Delta})\\), i.e., rotation \\({}_{L}^{C}\\mathbf{R}\\) and translation \\({}_{L}^{C}\\mathbf{t}\\) with the extrinsic parameters \\(\\mathbf{\\Delta}\\). The symbol \\(\\mathbf{p}\\) denotes the location of the point in the camera image space, and \\(\\mathbf{\\Pi}(^{C}\\mathbf{P};\\mathbf{\\Theta})\\) expresses the intrinsic transformation from \\({}^{C}\\mathbf{P}=[x,y,z]^{\\mathrm{T}}\\) (3D point) to \\(\\mathbf{p}\\) (2D point), with the distortion correction matrix \\(\\begin{bmatrix}c&d\\\\ e&1\\end{bmatrix}\\). The pixel radius \\(r\\) from the image center \\([u_{0},v_{0}]^{\\mathrm{T}}\\) is transformed from the elevation angles \\(\\theta\\) by a polynomial function \\(\\mathbf{F}(\\theta;a_{0},\\ldots,a_{n})\\) in the camera model; \\(\\theta\\) and \\(\\phi\\) are the elevation and azimuth angle of \\({}^{C}\\mathbf{P}\\) (Note the omnidirectional camera features a ring image).
To facilitate the alignment between the camera edges and the LiDAR edges, the camera edge distribution with nonparametric probability density function is constructed with the Gaussian Kernel by Kernel Density Estimation (KDE) [26]. The optimization is based on maximizing the probabilities of the projected LiDAR edge points onto the camera edge distribution:
\\[\\hat{\\mathbf{\\Theta}},\\hat{\\mathbf{\\Delta}} =\\arg\\max_{\\mathbf{\\Theta},\\ \\mathbf{\\Delta}}\\frac{1}{m}\\sum_{i=1}^{m}||\\hat{\\mathbf{f}}(\\mathbf{p}_{i};h, \\mathbb{Q})||^{2}, \\tag{6}\\] \\[\\hat{\\mathbf{f}}(\\mathbf{p}_{i};h,\\mathbb{Q}) =\\frac{1}{nh^{2}}\\sum_{j=1}^{n}\\mathbf{K}\\left(\\frac{\\mathbf{p}_{i}- \\mathbf{q}_{j}}{h}\\right),\\] (7) \\[\\mathbf{K}(\\mathbf{x}) =\\frac{1}{\\sqrt{2\\pi}\\det(\\mathbf{\\Sigma})}e^{-\\frac{1}{2}(\\mathbf{x}-\\bm {\\mu})^{\\mathrm{T}}\\mathbf{\\Sigma}^{-1}(\\mathbf{x}-\\mathbf{\\mu})},\\] (8) \\[\\mathbf{\\mu} =\\left[0,0\\right]^{\\mathrm{T}},\\ \\mathbf{\\Sigma}=\\mathbf{I}_{2\\times 2}, \\tag{9}\\]
where \\(h\\) denotes the bandwidth of the KDE.
Several rounds of iterative optimization with reducing bandwidth are carried out to approach the correct calibration values smoothly. At the start of the process, the bandwidth is set at a large number to get a continuous and smooth cost function, which allows the optimization to approach the optimal region quickly without many local optima. Then the bandwidth is reduced gradually to increase the gradient, ensuring a sensitive optimization around the optimum (optimization of the x-axis translation is shown in Fig. 4).
The optimization uses the Levenberg-Marquardt method implemented in Ceres-solver [27]. For computational efficiency, the parabolic Epanechnikov kernel \\(\\mathbf{K}(\\mathbf{x})=\\frac{3}{4}(1-\\mathbf{x}^{\\mathrm{T}}\\mathbf{x})\\) can be substituted for the Gaussian kernel.
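A minimal Python sketch of this coarse-to-fine driver is given below (the released implementation runs Levenberg-Marquardt in Ceres/C++). Here `project()` stands for the projection of Eqns. (1)-(5) and is an assumed helper; the bandwidth schedule and the nearest-neighbor truncation of the kernel are illustrative.

```python
# A minimal sketch of the coarse-to-fine optimization loop. project() is an
# assumed helper mapping the stacked parameter vector and 3D edge points to
# 2D pixels per Eqns. (1)-(5).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def kde_cost(params, lidar_edges, tree, n_cam, h, project):
    """Negative mean squared KDE density of Eqn. (6); the Gaussian kernel of
    Eqn. (7) is truncated to the 8 nearest camera edge pixels for speed."""
    proj = project(params, lidar_edges)              # (m, 2) pixel coordinates
    d, _ = tree.query(proj, k=8)                     # distances to camera edges
    dens = np.exp(-0.5 * (d / h) ** 2).sum(axis=1) / (n_cam * h ** 2)
    return -np.mean(dens ** 2)

def cocalibrate(x0, lidar_edges, cam_edges, project,
                bandwidths=(32.0, 16.0, 8.0, 4.0, 2.0)):
    tree = cKDTree(cam_edges)
    x = np.asarray(x0, dtype=float)
    for h in bandwidths:                             # large h: smooth cost
        res = minimize(kde_cost, x, method="Nelder-Mead",
                       args=(lidar_edges, tree, len(cam_edges), h, project))
        x = res.x                                    # warm-start next round
    return x                                         # stacked [Theta; Delta]
```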
### _Coarse-to-fine Hybrid Mapping_
The coarse-to-fine hybrid mapping workflow is outlined in Fig. 5. With the co-calibration and synchronization, all the obtained LiDAR points are represented in both coordinates and color. Odometry/SLAM methods are used as a backbone to provide localization in both coarse and fine mapping. We used FAST-LIO (LiDAR-Inertial odometry [11]) in our current system.
Fig. 4: Iterative optimization with the reducing KDE bandwidth: (a) the normalized cost w.r.t. the translation in the x-axis under the different values of bandwidth; (b) zoom in to a sub-region of (a) to demonstrate the iterative process.
The choice is not limited, however; other odometry/SLAM methods could be utilized as well. At the coarse mapping stage, the robot obtains the localization and motion results from the odometry, from which the scanned points are converted and registered to the global map. Based on the coarse map, a few viewpoints for stationary mapping are planned for the targeted ROIs, which is well developed in previous work by considering constraints such as range, grazing angle, FoV, and overlap [1]. The robot then navigates to the generated viewpoints one by one through the backbone odometry/SLAM and performs the fine mapping at each. At each viewpoint, stationary scans are performed at several gimbal states, with overlapping FoV regions between adjacent states, covering a large overall FoV (\(360^{\circ}\times 300^{\circ}\)). These point clouds are pre-registered based on the gimbal angles (as initial angles) at each viewpoint. The scans from all the viewpoints are then combined with the global coarse map based on robot localization (again provided by the LiDAR-Inertial odometry) as the initial state for optimization. Finally, the GICP [25] algorithm is used to optimize all the localization results and gimbal states and to refine all stationary scans and the coarse map into the fine map. Notably, we could choose either odometry or SLAM methods for the localization backbone; although SLAM adds loop-closure functions over odometry, the final GICP optimization is accurate enough to yield a much better localization result.
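The per-viewpoint refinement can be sketched as follows. The paper uses GICP [25]; Open3D's point-to-plane ICP is substituted here as a widely available stand-in, and `gimbal_pose()` is an assumed helper returning the 4x4 transform predicted from a gimbal state, used as the pre-registration seed.

```python
# A minimal sketch of the per-viewpoint stitching step (point-to-plane ICP
# substituted for the GICP of [25]; gimbal_pose() is an assumed helper).
import copy
import open3d as o3d

def stitch_viewpoint(scans, gimbal_states, voxel=0.05):
    """scans: list of o3d.geometry.PointCloud captured at one viewpoint."""
    ref = scans[0].voxel_down_sample(voxel)
    ref.estimate_normals()                     # point-to-plane needs normals
    merged = copy.deepcopy(scans[0])
    for scan, state in zip(scans[1:], gimbal_states[1:]):
        src = scan.voxel_down_sample(voxel)
        reg = o3d.pipelines.registration.registration_icp(
            src, ref, 0.2, gimbal_pose(state),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        merged += copy.deepcopy(scan).transform(reg.transformation)
    return merged
```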
## IV Experiments and Results
### _Co-calibration Results_
The effectiveness of the proposed co-calibration method is demonstrated in three natural scenes, as shown in Fig. 6. The projection error (in pixels) is defined as:
\\[e=\\frac{1}{n}\\sum_{i=1}^{n}\\mathbf{d}(\\mathbf{p}_{i};\\mathbb{Q}), \\tag{10}\\]
where \(\mathbf{d}(\cdot)\) calculates the distance from the projected LiDAR point \(\mathbf{p}_{i}\) to the nearest point in the target set \(\mathbb{Q}\). Note that the largest 10% of the distances are considered outliers with no correspondences and are eliminated. Overall, the co-calibration works well in all scenes with projection errors on the order of 3 pixels or less. The colorized point clouds after co-calibration also show much better consistency, as seen in Fig. 6b.
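This metric translates directly into a few lines of code; the sketch below follows Eqn. (10) and the 10% trimming rule stated above.

```python
# A minimal sketch of Eqn. (10): mean nearest-neighbour pixel distance from
# projected LiDAR edge points to the camera edge set, dropping the largest
# 10% of the distances as outliers.
import numpy as np
from scipy.spatial import cKDTree

def projection_error(proj_pts: np.ndarray, cam_edges: np.ndarray) -> float:
    """proj_pts, cam_edges: (m, 2) and (n, 2) pixel coordinates."""
    d, _ = cKDTree(cam_edges).query(proj_pts)   # nearest camera edge pixel
    d = np.sort(d)[: int(0.9 * len(d))]         # drop the largest 10%
    return float(d.mean())
```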
We further compare our co-calibration results with the classical target-based intrinsic calibration [19, 28], and the state-of-the-art MI-based extrinsic calibration [22], respectively, as shown below.
#### IV-A1 Analysis of the Intrinsic Results
As a comparison, the target-based intrinsic calibration for omnidirectional cameras is performed [19]. Thirty checkerboards are manually selected as a reference set (Fig. 7a). As the number and position of the targets affect the calibration profoundly, we evaluate the calibration result as a function of the number of targets and randomly select a specific number of checkerboards from the reference set for calibration (repeated 100 times independently). The mean reprojection error is used to represent the calibration accuracy. The results in Fig. 7b show that as the number of checkerboards increases, the calibration becomes more accurate and converged. More checkerboards likely increase the FoV coverage and feature point density and thus improve the effectiveness of the target-based method. However, it is labor-intensive to place many checkerboards uniformly and densely around the sensor and manually select the appropriate ones, which may be impossible in the field. The co-calibration method, on the contrary, employs dense LiDAR points as abundant, well-covered, and accurate features; the elimination of artificial targets and human involvement enables an accurate, efficient, and field-friendly approach. Our co-calibration result yields significantly improved performance on the same reference set compared with the conventional method (orange and blue boxplots in Fig. 7b, respectively).
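The evaluation protocol itself is simple to sketch; `calibrate()` and `reprojection_error()` stand for OcamCalib-style routines [19] and are assumed helpers.

```python
# A minimal sketch of the evaluation protocol: for each board count k, draw
# k checkerboards at random from the 30-board reference set, calibrate, and
# record the mean reprojection error over the full set; repeat 100 times.
import random

def evaluate_board_counts(boards, counts=(5, 10, 15, 20, 25, 30), trials=100):
    errors = {}
    for k in counts:
        errors[k] = [
            reprojection_error(calibrate(random.sample(boards, k)), boards)
            for _ in range(trials)
        ]
    return errors  # per-count distributions behind the boxplots of Fig. 7b
```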
#### IV-A2 Analysis of the Extrinsic Results
The mutual information (MI)-based extrinsic calibration method utilizes the fact that the reflectivity of LiDAR points and the corresponding grayscale intensity values of camera pixels are correlated, since both capture the spectral response of the object at light frequencies (LiDAR 905 nm, camera 400-800 nm), which are usually similar.
Fig. 5: Proposed coarse-to-fine hybrid mapping workflow. The odometry/SLAM serves as a backbone to provide localization results.
Fig. 6: Co-calibration results in three scenes: (a) aligned LiDAR edge points (red) on camera images; (b) comparison of colorized point clouds before and after co-calibration with the average projection errors in pixels.
These values are then used to calibrate the extrinsic parameters between the camera and LiDAR by maximizing the MI of the two distributions [22]. Fig. 8 shows the comparisons of the two optimization methods, demonstrating the normalized costs for different extrinsic parameters. The proposed co-calibration method shows a much more sensitive and reliable gradient in the cost function near the optimum than the MI-based method.
The inaccurate calibration result of the MI-based method could be attributed mainly to three reasons: the lighting conditions, the surface reflection properties, and the spectral reflectance disagreement. The camera's light source \(I_{i}\) is the external ambient lighting, which does not change with the camera pose. On the contrary, LiDAR uses an active laser from the sensor and therefore differs significantly from the camera, as shown in Fig. 9a. Besides the lighting, the surfaces of the objects are important. The detected intensity could be modeled as follows:
\\[I_{r}=K_{d}\\cdot I_{i}\\cdot f(\\theta), \\tag{11}\\]
where \(I_{r}\) and \(I_{i}\) indicate the reflection intensity and incident intensity, respectively, \(K_{d}\) is the reflectance, and \(f(\theta)\) describes the surface properties of the object with respect to the incident angle \(\theta\). For most objects, the surface is Lambertian (diffusive), and in that case, \(f(\theta)=\cos\theta\). However, many surfaces do not follow this property: a surface may be specular, such that the LiDAR does not collect any signal, or retroreflective, such that the majority of the energy is directed back toward the LiDAR and gives a strong intensity, as on traffic signs and warning stickers, whose LiDAR intensities contrast with the camera intensities, as shown in the red boxes in Fig. 9b. Additionally, the spectral reflectance of objects at various light wavelengths could be different. For instance, materials composed of plant fibers show a large reflectance at around 905 nm, even those dyed in black colors. As a result, no contrast could be seen in the LiDAR intensities of materials with different colors, as shown in the green boxes in Fig. 9b. All three factors mentioned above could cause significant differences in the intensity response of the LiDAR and the camera and reduce the applicability of the MI-based method.
### _Coarse-to-fine Hybrid Mapping Results_
The proposed coarse-to-fine hybrid mapping method is demonstrated in an academic building on the SUSTech campus. The global coarse map is generated by FAST-LIO in ten minutes, and the ROI is selected based on this global coarse map (Fig. 10a). In this case, five viewpoints are planned in the ROI (Fig. 10b), and stationary scanning is performed for three minutes at each of them (Fig. 10c).
Plane thickness could be used as a quantitative metric for precision evaluation and comparison between coarse and fine mapping. Local planes with a small third eigenvalue \(\lambda_{3}\) are selected by diagonalizing the covariance matrix. Assuming the points along the plane's normal direction follow a Gaussian distribution (corresponding to the third eigenvalue \(\lambda_{3}\), with the normal direction of the plane defined by its eigenvector), we could set the thickness of the plane as \(4\sqrt{\lambda_{3}}\).
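This metric reduces to an eigen-decomposition of the local covariance, as the sketch below shows; the factor of 4 corresponds to ±2σ along the plane normal (about 95% of a Gaussian spread).

```python
# A minimal sketch of the plane-thickness metric: diagonalize the covariance
# of a locally planar patch and report 4*sqrt(lambda_3).
import numpy as np

def plane_thickness(points: np.ndarray) -> float:
    """points: (n, 3) samples of a locally planar patch of the map."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    lam3 = np.linalg.eigvalsh(cov)[0]           # smallest eigenvalue
    return 4.0 * np.sqrt(max(lam3, 0.0))
```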
Fig. 8: Comparisons of the normalized cost function between the proposed method and the MI-based method. The optimal values should lie in the gray areas estimated based on manufacturing.
Fig. 7: Comparison with the target-based intrinsic calibration: (a) the poses of the thirty checkerboards; (b) boxplots of projection errors of target-based calibration (blue) and the proposed co-calibration (orange).
Fig. 9: Analysis of the MI-based extrinsic calibration: (a) the types of reflection of the LiDAR and camera w.r.t. the rough surface and the retroreflective surface; (b) the inconsistent intensity cases between LiDAR and camera, including retroreflection cases (red boxes), and the special spectral reflectance cases (green boxes).
The coarse and fine maps of the three different scenes are shown in Fig. 11a, where the zoomed views show the point cloud quality with the top view of the selected planes to demonstrate the mapping quality. The quantitative evaluations of the plane thickness (the mapping precision) in these scenes are summarized in Fig. 11b. Besides precision (spread of the data), accuracy (correctness) is also important to examine. Fig. 11c illustrates the measurement accuracy (compared to results from a TLS system, which we regard as ground truth). It is evident that both the precision and the accuracy of fine mapping outperform those of coarse mapping. Although odometry-based coarse mapping performs well in best-case scenarios, it is significantly improved by fine mapping in the average and worst-case scenarios, which are the main concerns of the surveying and mapping industry.
With the accurate co-calibration results, LiDAR points can be colorized with the image information through the transformations in Eqn. 1 and Eqn. 2. Fig. 11d shows the colorized hybrid mapping, and Fig. 11e illustrates the fine mapping of the zoomed-in ROI. The coarse-to-fine map with great precision and accurate colorization paves the way for higher precision with a single unified setup and workflow. It benefits industries requiring both efficiency and accuracy, such as construction automation and building inspection.
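The colorization step itself can be sketched as follows. We use `arctan2` for \(\phi\) to keep the sign of \(y\), and add the image center after the affine correction following the usual omnidirectional-camera convention; the intrinsics tuple ordering is an illustrative assumption.

```python
# A minimal sketch of the colorization step: transform each LiDAR point into
# the camera frame (Eqn. 1), project it with the intrinsic model
# (Eqns. 2-5), and assign the color of the pixel it lands on.
import numpy as np

def project_to_pixel(P, poly, u0, v0, c, d, e):
    x, y, z = P
    theta = np.arccos(z / (np.linalg.norm(P) + 1e-12))   # Eqn. (4)
    phi = np.arctan2(y, x)                               # Eqn. (5)
    r = np.polyval(list(poly)[::-1], theta)              # Eqn. (3)
    mx, my = r * np.cos(phi), r * np.sin(phi)
    return c * mx + d * my + u0, e * mx + my + v0        # Eqn. (2)

def colorize(points, image, R, t, intrinsics):
    """points: (n, 3) LiDAR points; R, t: extrinsics of Eqn. (1)."""
    colors = np.zeros((len(points), 3), np.uint8)
    for i, P in enumerate(points @ R.T + t):             # Eqn. (1)
        u, v = project_to_pixel(P, *intrinsics)
        if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
            colors[i] = image[int(v), int(u)]
    return colors
```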
Lastly, a detailed comparison of the proposed system with the current widely used TLS and MLS systems (shown in Fig. 1a) is made in Table I, where several key parameters are listed.
Fig. 11: Comparison of coarse and fine mapping: (a) coarse and fine maps in three scenes (scene #1 is from Fig. 10, scenes #2 and #3 are new). The left column shows the large-scale coarse map, and the right column shows the zoomed-in coarse and fine map in top view (to visualize wall thickness) and third-person view (to visualize the scene); (b) mapping precision from the three scenes; (c) mapping accuracy from the three scenes; (d) top view of the colorized fine map; (e) third-person view of the colorized ROI.
Fig. 10: Coarse-to-fine hybrid mapping: (a) odometry-based global coarse mapping; (b) coarse map of the selected ROI, with markers indicating the planned viewpoints; (c) fine map of the ROI, the color illustrates the scans from respective viewpoints.
The most crucial difference is that the proposed system integrates two working modes in a single streamlined workflow, ensuring overall mapping efficiency and precision/accuracy. All other systems are either TLS, which only works in stationary mode, or MLS, which only works in mobile mode. Owing to this capability, it is the first robotic system that allows automatic viewpoint planning instead of human intuition-based viewpoint selection. In addition, the mobile robot could navigate itself with overall good localization and provide good initial states for fine map optimization. The mapping precision and accuracy of the proposed system are also compared with these systems [29, 30, 31]. The proposed system achieves performance close to the LEICA TLS while allowing mobility like an MLS, agreeing with the purpose of the system.
## V Conclusion
This paper proposed a coarse-to-fine hybrid 3D mapping robotic system based on an omnidirectional camera and a non-repetitive Livox LiDAR. A hybrid mapping approach with both odometry-based and stationary mapping modes is integrated into one mobile mapping robot, achieving a streamlined and automated mapping workflow with assured efficiency, mapping precision, and accuracy. Meanwhile, the proposed automatic and targetless co-calibration method provides accurate parameters to generate colorized mapping. Specifically, the calibration is based on edges extracted from camera images and LiDAR reflectivity, and the result is compared with the mutual-information-based calibration method, which under-performs, possibly due to differences in light sources, surface reflection properties, and spectral reflectance between the two sensors. In future work, more sophisticated planning strategies could be developed to jointly optimize scanning time and spatial coverage. We believe this new automated mapping robot will open up a new horizon for surveying and inspection robotics.
## References
* [1] P. S. Blaer and P. K. Allen, \"View planning and automated data acquisition for three-dimensional modeling of complex sites,\" _Journal of Field Robotics_, vol. 26, no. 11-12, pp. 865-891, 2009.
* [2] C. Debeunne and D. Vivet, \"A review of visual-lidar fusion based simultaneous localization and mapping,\" _Sensors_, vol. 20, no. 7, p. 2068, 2020.
* [3] V. V. Lehtola, H. Kaartinen, A. Nüchter, R. Kaijaluoto, A. Kukko, P. Litkey, E. Honkavaara, T. Rosnell, M. T. Vaaja, J.-P. Virtanen _et al._, "Comparison of the selected state-of-the-art 3d indoor scanning and point cloud generation methods," _Remote sensing_, vol. 9, no. 8, p. 796, 2017.
* [4] J. Lin and F. Zhang, \"R 3 live: A robust, real-time, rgb-colored, lidar-inertial-visual tightly-coupled state estimation and mapping package,\" in _2022 International Conference on Robotics and Automation (ICRA)_. IEEE, 2022, pp. 10 672-10 678.
* [5] B. Schwarz, \"Mapping the world in 3d,\" _Nature Photonics_, vol. 4, no. 7, pp. 429-430, 2010.
* [6] Z. Liu, F. Zhang, and X. Hong, \"Low-cost retina-like robotic lidars based on incommensurable scanning,\" _IEEE/ASME Transactions on Mechatronics_, vol. 27, no. 1, pp. 58-68, 2021.
* [7] R. Otero, S. Laguela, I. Garrido, and P. Arias, \"Mobile indoor mapping technologies: A review,\" _Automation in Construction_, vol. 120, p. 103399, 2020.
* [8] A. Keitaanniemi, J.-P. Virtanen, P. Rönnholm, A. Kukko, T. Rantanen, and M. T. Vaaja, "The Combined Use of SLAM Laser Scanning and TLS for the 3D Indoor Mapping," _Buildings_, vol. 11, no. 9, 2021.
* [9] A. Aryan, F. Bosché, and P. Tang, "Planning for terrestrial laser scanning in construction: A review," _Automation in Construction_, vol. 125, p. 103551, 2021.
* [10] J. Shao, W. Zhang, N. Mellado, N. Wang, S. Jin, S. Cai, L. Luo, T. Lejemble, and G. Yan, \"Slam-aided forest plot mapping combining terrestrial and mobile laser scanning,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 163, pp. 214-230, 2020.
* [11] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, \"Fast-lio2: Fast direct lidar-inertial odometry,\" _IEEE Transactions on Robotics_, 2022.
* [12] Y. Zhu, C. Zheng, C. Yuan, X. Huang, and X. Hong, "Camvox: A low-cost and accurate lidar-assisted visual slam system," in _2021 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2021, pp. 5049-5055.
* [13] C. Yuan, X. Liu, X. Hong, and F. Zhang, \"Pixel-level extrinsic self calibration of high resolution lidar and camera in targetless environments,\" _IEEE Robotics and Automation Letters_, vol. 6, no. 4, pp. 7517-7524, 2021.
* [14] I. Puente, H. Gonzalez-Jorge, J. Martinez-Sanchez, and P. Arias, \"Review of mobile mapping and surveying technologies,\" _Measurement_, vol. 46, no. 7, pp. 2127-2145, 2013.
* [15] J. S. Berrio, M. Shan, S. Worrall, and E. Nebot, \"Camera-lidar integration: Probabilistic sensor fusion for semantic mapping,\" _IEEE Transactions on Intelligent Transportation Systems_, 2021.
* [16] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Point cloud rendering after coding: Impacts on subjective and objective quality," _IEEE Transactions on Multimedia_, vol. 23, pp. 4049-4064, 2020.
* [17] A. Jaakkola, J. Hyyppä, A. Kukko, X. Yu, H. Kaartinen, M. Lehtomäki, and Y. Lin, "A low-cost multi-sensor mobile mapping system and its feasibility for tree measurements," _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 65, no. 6, pp. 514-522, 2010.
* [18] J. Kannala and S. S. Brandt, "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 28, no. 8, pp. 1335-1340, 2006.
* [19] D. Scaramuzza, A. Martinelli, and R. Siegwart, \"A toolbox for easily calibrating omnidirectional cameras,\" in _2006 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2006, pp. 5695-5701.
* [20] K. Kanatani, \"Calibration of ultrawide fisheye lens cameras by eigenvalue minimization,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 35, no. 4, pp. 813-822, 2012.
* [21] D. Scaramuzza, A. Harati, and R. Siegwart, \"Extrinsic self calibration of a camera and a 3d laser rangefinder from natural scenes,\" in _2007 IEEE/RSJ International Conference on Intelligent Robots and Systems_, 2007, pp. 4164-4169.
* [22] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, \"Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information,\" in _Twenty-Sixth AAAI Conference on Artificial Intelligence_, 2012.
* [23] T. Mertens, J. Kautz, and F. Van Reeth, \"Exposure fusion,\" in _15th Pacific Conference on Computer Graphics and Applications (PG'07)_. IEEE, 2007, pp. 382-390.
* [24] L. Ding and A. Goshtasby, "On the Canny edge detector," _Pattern Recognition_, vol. 34, no. 3, pp. 721-725, 2001.
* [25] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," _Robotics: Science and Systems_, vol. 2, no. 4, p. 435, 2009.
* [26] G. R. Terrell and D. W. Scott, \"Variable kernel density estimation,\" _The Annals of Statistics_, pp. 1236-1265, 1992.
* [27] S. Agarwal, K. Mierle, and The Ceres Solver Team, "Ceres Solver," 2022. [Online]. Available: [https://github.com/ceres-solver/ceres-solver](https://github.com/ceres-solver/ceres-solver)
* [28] S. Urban, J. Leitloff, and S. Hinz, \"Improved wide-angle, fisheye and omnidirectional camera calibration,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 108, pp. 72-79, 2015.
* [29] FARO, "Faro focus premium: Capture with confidence and connect your world faster," 2022, https://media.faro.com/media/Project/FARO/FARO/Resources1_BROCHURE2022/FARO-Spired/AE_Cores-Premium/3154_Brochure_Focus/Premium_AEC_ENG_LT.pdf.
* [30] A. Dlesk, K. Vach, J. Sedina, and K. Pavelka, "Comparison of Leica BLK360 and Leica BLK2GO on the chosen test objects," _ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences_, 2022.
* [31] S. De Geyter, J. Vermandere, H. De Winter, M. Bassier, and M. Vergauwen, "Point cloud validation: On the impact of laser scanning technologies on the semantic segmentation for BIM modeling and evaluation," _Remote Sensing_, vol. 14, no. 3, p. 582, 2022.

**Abstract.** This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive LiDAR and an omnidirectional camera. Thanks to the non-repetitive scanning nature of the LiDAR, an automatic targetless co-calibration method is proposed to simultaneously calibrate the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and LiDAR, a crucial step in bringing color and texture information to the point clouds for surveying and mapping tasks. Comparisons and analyses are made against target-based intrinsic calibration and mutual information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot integrates both an odometry-based mapping mode and a stationary mapping mode. Meanwhile, we propose a new workflow to achieve coarse-to-fine mapping: efficient, coarse mapping of the global environment in the odometry-based mapping mode; planning of viewpoints in the region-of-interest (ROI) based on the coarse map (relying on the previous work [1]); and navigating to each viewpoint to perform finer and more precise stationary scanning and mapping of the ROI. The fine map is stitched with the global coarse map, which yields a more efficient and precise result than conventional stationary approaches and the emerging odometry-based approaches, respectively.

**Index terms:** Mapping, Robotic Systems, Omnidirectional Vision, Calibration and Identification, SLAM.
arxiv-format/2012_03317v1.md | # SMAP-based Retrieval of Vegetation Opacity and Albedo
## 1 Introduction
For monitoring vegetation abundance and function, microwave radiometry offers distinct advantages over optical indices because the atmosphere is nearly transparent (sensing regardless of cloud cover and solar illumination) and the canopy may be partially penetrated with microwaves (Shi et al., 2008; Du et al., 2016). In this study we extend the applicability of the new global L-band radiometry to the terrestrial biosphere. We propose that the measurements of the surface at this microwave frequency are as valuable to global ecology as they have proven to be for the ocean and hydrologic sciences. In this study we use one full annual cycle of SMAP radiometer data to derive information on the water status and structure of the vegetation canopy around the globe. The new information is complementary to existing visible/infrared indicators (e.g., vegetation indices such as the Enhanced Vegetation Index [EVI], the Normalized Difference Vegetation Index [NDVI], and Solar-Induced Fluorescence [SIF]), laser altimeter tree height and radar scattering vegetation indicators. The study shows that useful information on the canopy status can be derived from the L-band radiometer measurements. Importantly, the process of extracting the information does not rely on any ancillary information that could influence the spatial and temporal patterns of the vegetation indicators. As independent information on the vegetation canopy, the new data complement the visible/infrared, lidar and radar data. Together they can enable new capabilities in monitoring ecosystem function in different biomes (e.g., water relations in the soil-vegetation continuum and ecosystem response to dry episodes and seasons) and potentially reveal evolutionary adaptation of different biomes and vegetation types to water stress, photosynthetic radiation availability and nutrient, predator and competition stresses.
## 2 Approach
In this study we implement an interpretation framework that uses the two channels of the SMAP radiometer (H- and V-pol) to infer the three key variables (\(\tau\), \(\omega\) and \(r_{p}\)) without reliance on ancillary information on vegetation. The inputs to the algorithm are the two SMAP brightness temperature measurements and an estimate of physical temperature at 06:00 local time. The outputs are \(\tau\), \(\omega\) and the soil rough-surface reflectivity \(r_{p}\). The approach recognizes that the number of unknowns (in this case three) exceeds the number of measurements (two, and effectively fewer owing to the correlation among the observations). In this study we estimate the amount of independent information (a fractional value between one and two) in the two SMAP measurements using the Degrees of Information (DoI) introduced by Konings et al. (2015). Based on the DoI of the observations, we implement a retrieval algorithm for \(\tau\), \(\omega\) and \(r_{p}\). The approach is to use adjacent-in-time overpasses (two to three days apart) to infer faster-changing soil and slower-changing vegetation characteristics.
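To make the retrieval concrete, the following is a minimal sketch of the zeroth-order radiative transfer (tau-omega) forward model and a multi-temporal least-squares inversion. It assumes an isothermal soil-canopy system at 06:00 local time and, purely for illustration, closes the system with smooth-surface Fresnel reflectivities driven by one effective dielectric constant per overpass; the operational algorithm instead works with rough-surface reflectivity, and all numerical values in the example are invented.

```python
import numpy as np
from scipy.optimize import least_squares

THETA = np.deg2rad(40.0)  # SMAP incidence angle

def fresnel_reflectivity(eps):
    """Smooth-surface Fresnel power reflectivities (H, V) for a real
    effective dielectric constant eps at the SMAP incidence angle."""
    cos_t, sin2_t = np.cos(THETA), np.sin(THETA) ** 2
    root = np.sqrt(eps - sin2_t)
    r_h = np.abs((cos_t - root) / (cos_t + root)) ** 2
    r_v = np.abs((eps * cos_t - root) / (eps * cos_t + root)) ** 2
    return r_h, r_v

def tau_omega_tb(tau, omega, r_p, t_phys):
    """Zeroth-order tau-omega brightness temperature for one polarization,
    assuming soil and canopy share the 06:00 physical temperature."""
    gamma = np.exp(-tau / np.cos(THETA))  # canopy transmissivity
    return t_phys * ((1.0 - r_p) * gamma
                     + (1.0 - omega) * (1.0 - gamma) * (1.0 + r_p * gamma))

def residuals(params, tb_h, tb_v, t_phys):
    """tau and omega are shared over a short window of N adjacent overpasses
    (slow vegetation); one dielectric constant per overpass (fast soil),
    so N + 2 unknowns face 2N measurements: overdetermined for N >= 2."""
    tau, omega, eps = params[0], params[1], params[2:]
    r_h, r_v = fresnel_reflectivity(eps)
    return np.concatenate([tb_h - tau_omega_tb(tau, omega, r_h, t_phys),
                           tb_v - tau_omega_tb(tau, omega, r_v, t_phys)])

# Example: three adjacent overpasses (values are illustrative only)
tb_h, tb_v = np.array([248., 251., 246.]), np.array([262., 264., 260.])
fit = least_squares(residuals, x0=[0.3, 0.05, 10., 10., 10.],
                    args=(tb_h, tb_v, 290.0),
                    bounds=([0, 0, 3, 3, 3], [3, 0.3, 40, 40, 40]))
tau_hat, omega_hat = fit.x[0], fit.x[1]
```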
The attempt to retrieve more parameters than the DoI of the observations can result in a multitude of local minima in the retrieval cost function. Under these circumstances the retrievals are susceptible to observation noise and initial conditions. Attempts to retrieve more parameters than the DoI are prevalent in tau-omega-based surface soil moisture and vegetation microwave opacity retrievals. These approaches are often characterized by a higher noise standard deviation owing to jumps between local minima. They are often also characterized by a high incidence of non-convergence. It is important to measure the DoI of the observations and avoid retrieving more parameters than the information in the observations allows.
The estimates of \(\tau\) and \(\omega\) in this case are based on the low-frequency microwave measurements alone and are free of any influence from visible/infrared vegetation indices or classifications. As such, they serve the goal of this study, which is to introduce new and complementary information on the dynamic global terrestrial biosphere in the recently emerging era of global low-frequency microwave remote sensing.
The new dynamic maps of \(\tau\) and \(\omega\) should also help reduce the uncertainty of estimates used in the SMAP surface soil moisture retrieval algorithms, which currently rely on NDVI seasonal climatology and a static visible/infrared-based vegetation classification.
## 3 Data and Analyses
The L-band brightness temperatures (horizontal and vertical polarization) used in this study are from the enhanced SMAP radiometer product of Chaubell et al. (2016). The period of coverage is one full annual cycle spanning April 1, 2015 to March 31, 2016. The data is posted at 9 [km] over an Equal-Area Scalable Earth-2 (EASE2) grid domain. The resolution of the brightness temperatures is about 40 [km] based on the geometric mean of the major and minor axes defined by the oval containing half of the power (-3 [dB]) across the SMAP antenna gain.
Figure 1: Global distribution of the time-averaged vegetation optical depth \\(\\tau\\) at nadir (dimensionless). Retrievals are based on the application of the dual-channel time-series algorithm on one year of SMAP radiometer instrument measurements. The maximum values are concentrated across dense and wet tropical forests and correspond to transmissivity values as low as 15% at 40 degrees. Seasonally dry tropics and savannas (central Africa and southern Brazil for example) are distinct from the nearby wet tropics. Boreal forests have high values as well reaching transmissivity values up to 40%. The inset figure in this and subsequent maps are an estimate of the mapped data marginal probability density.
The global pattern of \(\tau\), related to the wet biomass content of the canopy, follows patterns of light-limitation in vegetation growth. The effective single-scattering albedo globally has a median close to 8% and half of the retrieved values are within 5% of the median.
The results confirm and expand on the more limited (in spatial and temporal resolution) study by Konings et al. (2016) using Aquarius measurements. The approach of this study also allows estimation of a dynamic \\(\\omega\\) as well as dynamic \\(\\tau\\).
The angular information allows retrieval of vegetation opacity and surface reflectivity simultaneously. In forthcoming studies regional perspectives are used to focus on particular biomes (e.g., agro-ecosystems, wet and dry tropical forests). The regional studies use knowledge of the dominant vegetation phenology to select a relevant window for the estimation of dynamic vegetation parameters. With radiometry-based retrieval of surface volumetric soil water content and vegetation microwave optical depth, the studies diagnose timing of soil water uptake in the soil-vegetation continuum as well as characterize the vegetation growth response to seasonal water- and light-limitation.
## References
* Chaubell, M. J., S. Chan, R. S. Dunbar, J. Peng, and S. Yueh. 2016. _SMAP Enhanced L1C Radiometer Half-Orbit 9 km EASE-Grid Brightness Temperatures, Version 1_. Boulder, Colorado USA: NASA National Snow and Ice Data Center Distributed Active Archive Center. doi: [http://dx.doi.org/10.5067/2C9O9KT6JAWS](http://dx.doi.org/10.5067/2C9O9KT6JAWS).
* Du, J., Kimball, J. S., & Jones, L. A. (2016). Passive microwave remote sensing of soil moisture based on dynamic vegetation scattering properties for AMSR-E. _IEEE Transactions on Geoscience and Remote Sensing_, 54(1), 597-608.
* Konings, A. G., McColl, K. A., Piles, M., & Entekhabi, D. (2015). How many parameters can be maximally estimated from a set of measurements? _IEEE Geoscience and Remote Sensing Letters_, 12(5), 1081-1085.
* Konings, A. G., Piles, M., Rotzer, K., McColl, K. A., Chan, S. K., & Entekhabi, D. (2016). Vegetation optical depth and scattering albedo retrieval using time series of dual-polarized L-band radiometer observations. _Remote Sensing of Environment_, 172, 178-189.
* Shi, J., Jackson, T., Tao, J., Du, J., Bindlish, R., Lu, L., & Chen, K. S. (2008). Microwave vegetation indices for short vegetation covers from satellite passive microwave sensor AMSR-E. _Remote Sensing of Environment_, 112(12), 4285-4300.
* Kurum, M. (2013). Quantifying scattering albedo in microwave emission of vegetated terrain. _Remote Sensing of Environment_, 129, 66-74.
Figure 2: Retrievals of the effective single-scattering albedo \(\omega\) (dimensionless) based on one year of SMAP radiometer measurements and the multi-temporal dual-channel algorithm. The values are generally below 12% for all vegetation types. Regions with values as high as 25% are restricted to far northern boreal regions. These regions contain a high density of small lakes and inland water bodies. Inadequate correction of the water-body contribution in the brightness temperature data can result in low brightness temperatures. An inflated vegetation albedo can fit the brightness temperature values. In other regions of the globe the effective single-scattering albedo \(\omega\) is generally between 0% and 12%, with the more densely vegetated areas (eastern North America and tropical Africa for example) showing values in the upper half of the range. Sparsely-vegetated areas (western US, subtropical Africa, central Asia and Australia for example) show values in the 0% to 5% range.

**Abstract.** Over land the vegetation canopy affects the microwave brightness temperature by emission, scattering and attenuation of surface soil emission. The questions addressed in this study are: 1) what is the transparency of the vegetation canopy for different biomes around the globe at the low-frequency L-band?, 2) what is the seasonal amplitude of vegetation microwave optical depth for different biomes?, 3) what is the effective scattering at this frequency for different vegetation types?, and 4) what is the impact of imprecise characterization of vegetation microwave properties on the retrieval of soil surface conditions? These questions are addressed based on the recently completed first full annual cycle of measurements by the NASA Soil Moisture Active Passive (SMAP) mission.

Dara Entekhabi\(^{1}\), Alexandra Konings\(^{2}\), Maria Piles\(^{3}\) and Narendra Das\(^{4}\)

\(^{1}\)Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge 02139, USA
\(^{2}\)Department of Earth System Science, Stanford University, Stanford, CA
\(^{3}\)Remote Sensing Laboratory, Departament de Teoria del Senyal i Comunicacions, Universitat Politecnica de Catalunya (UPC), 08034 Barcelona, Spain
\(^{4}\)Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, 91109
arxiv-format/2005_02140v3.md | # BlackBox: Generalizable Reconstruction of Extremal Values from Incomplete Spatio-Temporal Data
Tomislav Ivek, Institut za fiziku, Bijenicka 46, HR-10000 Zagreb, Croatia

Domagoj Vlah (corresponding author), University of Zagreb, Faculty of Electrical Engineering and Computing, Department of Applied Mathematics, Unska 3, HR-10000 Zagreb, Croatia; e-mail: [email protected]

Received: 30 April 2020 / Accepted: date
## 1 Introduction
The EVA 2019 Data Challenge posed the problem of predicting extremes of the Red Sea surface temperature anomaly within spatio-temporal regions of missing data (Huser 2020). Daily temperature anomaly values were provided to contestants spanning 31 years and covering the geographical area of the Red Sea. For each day, temperature anomaly values were given at fixed spatial points on a regular geographical grid. About 31.6% of data was deliberately removed from the dataset. Regions of the missing data were approximately contiguous with irregular boundaries, relatively large, at least one calendar month in duration, and present for every calendar day in the provided dataset. The exact process of data removal was not disclosed to contestants. The goal was to predict the distribution of extremes of temperature anomaly on a number of specified space-time cylindrical regions (50 km in radius and 7 days in length), chosen in the most difficult part of the dataset which had 60% of data missing for any day. The quality of predicted extremes was evaluated using the threshold-weighted continuous ranked probability score averaged over all prediction regions, \(\overline{\text{twCRPS}}\) (Huser 2020).
Recently there has been an increase in adoption of deep neural network models in various areas of research and technology which feature high-dimensional interdependent data including time series and imaging. Inspired by some of these advances (Asadi and Regan, 2019; Li et al., 2019; Schlemper et al., 2017) we used state-of-the-art neural network techniques primarily from image processing in order to complement existing extreme value theory approaches for spatial extremes (Davison et al., 2012; Davison and Huser, 2015; Davison et al., 2019). Deep learning techniques typically do not depend on expert domain knowledge and generalize well to data from different domains. However, they do rely on large quantities of data being available for model training. Furthermore, currently the engineering of training and deploying neural networks is ahead of their precise theoretical treatment, especially regarding error bounds, model optimization in the sense of minimization of loss function with potentially billions of parameters and associated rate of convergence, optimal model complexity and size, and the related speed of model inference.
In the absence of expert domain knowledge, to predict the distribution of extremes of Red Sea surface temperature anomaly within regions of withheld data we attempt to reconstruct the missing information. We introduce additional damage, i.e., we remove even more data from the original dataset in order to teach autoencoder-like models based on convolutional deep neural networks how to repair or in-paint the missing data based on the remaining information. Then, we evaluate trained models on originally provided data in order to create stochastic plausible reconstructions of temperature anomaly within regions of missing data. The extremal values within regions of interest are then trivially calculated for each stochastically sampled reconstruction which finally allows us to create their distribution. We discuss details of our implementation, possible extensions to the technique and its generalizability to different problem domains.
## 2 Overview of related work
We first review some of the recently published techniques based on neural networks relevant to missing data reconstruction. Neural networks are parameterizable approximators based on multiple compositions of affine and nonlinear functions which are fitted to, or trained on, some desired dataset. They have long been used to recover data gaps in time series, including geophysical datasets (Rossiev et al. 2002; Lee and Park 2015). Conventionally, raw data is first reduced in dimensionality and mapped onto a small-dimension manifold, also called latent or hidden space, which aims to capture salient features underlying the modeled phenomena. In the case of time series, latent vectors are then reconstructed at missing time stamps by a neural network predicting the next step based on history. Finally, the predicted reconstruction is projected back to the original data space. Various model architectures are researched here and used in production, the most common being simple fully connected networks (Rossiev et al. 2002) and recurrent neural networks (Che et al. 2018), which are either fully connected for tabular data or combined with convolutional neural networks where spatial or temporal proximity is important (Asadi and Regan 2019). Particularly interesting is the recently introduced BRITS architecture (Cao et al. 2018), which uses a novel bidirectional recurrent component based on learned feature correlation and temporal decay in order to impute data. While this technique seems most promising for tabular data with measured or engineered features of interest, it needs to be adapted in order to handle image-like data on a large spatial grid.
Since the number of parameters of neural layers grows proportionally to dimensionality of their input, fully connected neural networks where each element of input influences the whole of output are often deemed intractable for large image-like inputs. Moreover, in computer vision and image processing it is commonly desirable that algorithms operate independently of feature position, e.g., a face detection network should correctly identify human faces regardless of their position in a photograph. With this in mind, convolutional neural networks (Zhang 1988) have become the prevalent choice for modeling ordered data on space-like grids. Such an architecture again comprises layers, each with a small common neural network, or \"kernel\", which slides over the whole input. Output of a convolutional layer can be regarded as a spatial map of detected learnable features which grows semantically richer with every consecutive layer. Thanks to their shared-weight architecture and local connectivity (Behnke 2003), convolutional neural networks train well on smaller datasets, generalize to unseen data examples, and are used with great success in various classification and prediction tasks (Krizhevsky et al. 2012; Schlemper et al. 2017).
Recently an innovative convolutional framework was proposed for missing data reconstruction called MisGAN (Li et al. 2019). It is particularly suited for high-dimensional data with underlying spatial correlations. MisGAN uses multiple generative adversarial networks (Goodfellow et al. 2014) where the so-called generators create increasingly more convincing fake samples as well as their missing data masks, while the discriminators attempt to discern them from real samples. This complex scheme simultaneously learns the distribution of missing data, or "masks", as well as the conditional distribution of data predicated on masks. It finally constructs a probabilistic imputer model which repairs the data by sampling from the learned distribution given some known data and its mask.
MisGAN provides state-of-the-art reconstruction results on several standard datasets and appears to be particularly suited to the task at hand. However, in our preliminary experiments to apply MisGAN on Red Sea temperature anomalies it was difficult to achieve convergence. Generative adversarial networks are notoriously difficult to train as they are a minmax problem where the optimal state is a saddle point with a local minimum in generator network cost and local maximum of the discriminator network cost (Wei et al., 2018; Le et al., 2017). In our admittedly limited tests, MisGAN tended to diverge and create patterned artifacts. Furthermore, the training itself took prohibitive amounts of time while it also appeared the provided amount of data was not sufficient.
Due to these issues, for the particular problem of Red Sea temperature anomalies we opt to step back from generative adversarial networks and construct a simpler framework based on autoencoder networks (Kramer, 1991; Goodfellow et al., 2016). Still, MisGAN provides us with certain valuable tools and avenues to explore. We adopt its use of noise-imputed samples in order to treat trained models as conditional distributions. Samples of temperature anomaly generated from our models are conditioned on the known temperature anomaly data for a given day. In that sense their distribution is conditioned on known data. Also, we use a simpler version of convolutional networks to exploit spatial coherence and short-range temporal correlations. We forego long-term dependencies and leave them for future consideration.
## 3 Model and methodology
We aim to construct one or more models which would take incomplete, damaged temperature anomaly data as input and make their best attempt at reconstructing or predicting the original complete data. In the following section we introduce the separate "ingredients" which come together to form a powerful framework for probabilistic data repair. First, we present a general notion of models which map from incomplete to complete data. Then, we introduce the concept of autoencoder neural networks and modify their fitting procedure to take into account missing data. We also discuss the concept of spatial coherence and the benefits of using convolutional models. Our model training protocol is described. The obtained fitted models take incomplete temperature anomaly data as input and infer repaired data from which temperature anomaly extremes are trivially computed. In the end, we use model ensembling to improve predictions of extremes distributions.
### Trivial case: target data is complete and available for model training
Starting with the simplest case, let us for now disregard the time dependence of temperature anomaly data and consider each day as a separate multi-dimensional point. Suppose we have the desired input-output relations \(\left\{(x^{(n)}_{\text{orig}},y^{(n)}_{\text{complete}});n\in\{1,\ldots,T\}\right\}\), where \(T\) is the number of days included in the dataset, \(x^{(n)}_{\text{orig}}\in\mathbb{R}^{W\times H}\) is a matrix representing the originally incomplete or damaged data of width \(W\) and height \(H\) provided in the problem statement, and \(y^{(n)}_{\text{complete}}\in\mathbb{R}^{W\times H}\) is the matrix representing the ideal, undamaged data for each day \(n\). If \(\theta\) designates all the parameters of some model attempting to summarize these relations, their optimal value to reconstruct the missing data based on \(x^{(n)}_{\text{orig}}\) as input can be obtained by minimizing the loss function
\\[\\mathcal{L}=\\frac{1}{T}\\sum_{n=1}^{T}\\ell\\left(y^{(n)}_{\\text{complete}};o^{(n) }(\\theta)\\right) \\tag{1}\\]
where \\(o^{(n)}(\\theta)\\in\\mathbb{R}^{W\\times H}\\) is the model output predicated on parameters \\(\\theta\\) for model input \\(x^{(n)}_{\\text{orig}}\\), for each day \\(n\\), and \\(\\ell\\) is a suitable distance function between targets and corresponding model outputs.
In principle, the posited problem could be solved by taking \\(\\mathcal{L}\\) to be the threshold-weighted continuous ranked probability score (twCRPS) averaged over all space-time validation points \\(s,t\\) specified by the Data Challenge, where
\\[\\text{twCRPS}(\\hat{F}_{s,t},u_{s,t})=\\int_{-\\infty}^{\\infty}\\left\\{\\hat{F}_{s, t}(u)-\\mathbb{I}(u_{s,t})\\leq u\\right)\\right\\}^{2}w(u)\\,\\mathrm{d}u. \\tag{2}\\]
\\(\\hat{F}_{s,t}\\) denotes the distributions of extremes of predicted temperature anomaly \\(o^{(n)}\\), and \\(u_{s,t}\\) is the observed extremes of temperature anomaly \\(y^{(n)}_{\\text{complete}}\\), \\(\\mathbb{I}(\\cdot)\\) is the indicator function, \\(w(x)=\\Phi\\{(x-1.5)/0.4\\}\\), and \\(\\Phi(\\cdot)\\) is the standard normal distribution. Both predicted and observed extremes are evaluated over the spatio-temporal cylinder around spatial location \\(s\\) and day \\(t\\). For more details see (Huser 2020). Inconveniently, the complete data \\(y^{(n)}_{\\text{complete}}\\) is unavailable by the very nature of the problem we wish to solve so the average twCRPS cannot be taken as an optimization target. Therefore, a working solution will necessarily grow more complex as we need to find an adequate proxy cost function to minimize.
### Extracting salient information by introducing additional damage
The provided daily temperature anomaly data on a geographical grid can be regarded as a time series of raster images represented by real matrices with \\(W\\) columns in width and \\(H\\) rows in height where each matrix element corresponds to the value of temperature anomaly at a certain geolocation for day \\(n\\). We introduce its masking matrix \\(m^{(n)}_{\\text{orig}}\\in\\{0,1\\}^{W\\times H}\\) which describes the extent ofdamage present in \\(x_{\\text{orig}}^{(n)}\\). It carries binary information for each spatial location: 1 encodes that a value is observed and available and 0 designates unobserved values e.g., due to damage or the location itself not being present in the dataset (Cao et al. 2018; Li et al. 2019).
Let us now introduce additional data loss to the already damaged original data. We describe the total damage i.e., newly introduced damage together with the originally missing data, in the form of a new binary masking matrix \\(m^{\\prime(n)}\\in\\{0,1\\}^{W\\times H}\\). The additionally damaged data matrix then becomes
\\[x^{\\prime(n)}=x_{\\text{orig}}^{(n)}\\odot m^{\\prime(n)}, \\tag{3}\\]
where \\(\\odot\\) denotes element-wise multiplication of two matrices. We discuss generating \\(m^{\\prime(n)}\\) later in the text.
This setup in principle allows us to train a model which maps data with additional damage \\(x^{\\prime(n)}\\) onto the data with original amount of damage as the target, \\(y^{\\prime(n)}=x_{\\text{orig}}^{(n)}\\). Our main idea is that by training to repair the additional damage, a sufficiently powerful convolutional model could learn to extract features underpinning the data manifold which are robust to damage. Then, when such a trained model is applied to the original data, we hypothesize it will be able to reconstruct missing data in an adequate manner.
### Weighted distance function
The training goal also needs to codify that the relevant cost is evaluated only where data is provided by the inherently damaged training target. For the distance function \\(\\ell\\) in (1) we substitute weighted \\(L^{1}\\) or \\(L^{2}\\) distances evaluated exclusively on the original masks \\(m_{\\text{orig}}^{(n)}\\) so that missing or unobserved data is ignored:
\\[\\ell\\left(y^{(n)},m_{\\text{orig}}^{(n)};o^{(n)}\\right)=\\frac{\\left\\|\\left(y^{ (n)}-o^{(n)}\\right)\\odot m_{\\text{orig}}^{(n)}\\right\\|}{\\left\\|m_{\\text{orig }}^{(n)}\\right\\|_{1,1}}. \\tag{4}\\]
Here \\(\\left\\|\\cdot\\right\\|\\) in the numerator denotes either \\(L^{1}\\) or \\(L^{2}\\) vector norm i.e., \\(L_{1,1}\\) and \\(L_{2,2}\\) matrix norms, which are defined as
\\[\\|A\\|_{r,r}=\\left(\\sum_{i=1}^{W}\\sum_{j=1}^{H}|a_{ij}|^{r}\\right)^{1/r},\\]
where \\(A=(a_{ij})\\in\\mathbb{R}^{W\\times H}\\), for \\(r=1,2\\). In the denominator \\(\\left\\|\\cdot\\right\\|_{1,1}\\) is strictly the \\(L_{1,1}\\) matrix norm i.e., the sum of absolute values of all mask elements effectively counting the number of observed temperature anomaly values in a particular day. Note that the computation of \\(o^{(n)}\\) requires additionally damaged data \\(x^{\\prime(n)}=x_{\\text{orig}}^{(n)}\\odot m^{\\prime,(n)}\\), but the distance \\(\\ell\\) being minimized in (4) utilizes the original mask \\(m_{\\text{orig}}^{(n)}\\).
With such modifications in place, the training procedure will create models that reproduce known data but are not punished when \"speculating\" on regions of missing data.
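As an illustration, the masked distance (4) with the \(L^{1}\) norm takes only a few lines of PyTorch; the sketch below is ours and assumes batched tensors of shape (batch, W, H) with at least one observed value per day.

```python
import torch

def masked_l1_loss(output, target, mask_orig):
    """Distance (4): mean absolute error evaluated only where the original
    (undamaged) mask marks a value as observed, averaged over the batch."""
    abs_err = (output - target).abs() * mask_orig   # zero out unobserved cells
    per_day = abs_err.flatten(1).sum(dim=1) / mask_orig.flatten(1).sum(dim=1)
    return per_day.mean()
```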
### Sampling from trained models as multivariate probability distributions
We found it most fruitful to introduce a level of stochasticity both to the model training input and to the evaluation input fed into trained models. Specifically, we impute noise wherever masks indicate missing data:
\\[x^{\\prime(n)}_{\\text{noise}} =\\left[x^{(n)}\\odot m^{\\prime(n)}+\\mathcal{N}^{(n)}\\odot\\left(1-m ^{\\prime(n)}\\right)\\right]\\odot m_{\\text{master}}, \\tag{5}\\] \\[x^{(n)}_{\\text{orig,noise}} =\\left[x^{(n)}\\odot m^{(n)}_{\\text{orig}}+\\mathcal{N}^{(n)} \\odot\\left(1-m^{(n)}_{\\text{orig}}\\right)\\right]\\odot m_{\\text{master}}, \\tag{6}\\]
where \\(\\mathcal{N}^{(n)}\\in\\mathbb{R}^{W\\times H}\\) is noise sampled independently for each spatio-temporal location and \\(m_{\\text{master}}\\in\\{0,1\\}^{W\\times H}\\) is the master mask with 1 at every valid spatial location and 0 otherwise which effectively removes any noise spilling outside of the valid geographical Red Sea region.
Setting the distribution of imputed noise \\(\\mathcal{N}\\) equal to the expected distribution of missing data allows the model to learn expected ranges of valid data within the damaged regions. Moreover, imputed noise in evaluation input allows us to stochastically sample from the trained model, effectively using it as a multivariate conditional probability distribution. Such a sampling procedure has the most desirable property that the dimensionality of noise is proportional to the amount of data loss: the more information the model receives as input, the less variation it creates at its output (Li et al. 2019).
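A minimal sketch of the imputation step (5)-(6) in PyTorch follows, using the marginal Gaussian parameters reported later in the text; resampling the noise at evaluation time is what turns a trained model into a stochastic sampler conditioned on the observed data.

```python
import torch

MU, SIGMA = -0.0365, 0.683  # Gaussian fit of the marginal anomaly distribution

def impute_noise(x, mask, master_mask):
    """Equations (5)-(6): fill unobserved cells with Gaussian noise mimicking
    the marginal distribution of observed anomalies, then zero out cells
    that lie outside the valid Red Sea region."""
    noise = torch.randn_like(x) * SIGMA + MU
    return (x * mask + noise * (1.0 - mask)) * master_mask
```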
### Convolutional autoencoder architecture
Finally, we describe a flexible family of parameterizable functions with sufficient capacity to learn patterns inherent to the provided incomplete matrix data and provide reasonable reconstructions of the complete pristine data.
The models we employ are sequential convolutional neural networks similar to autoencoder neural networks (Kramer 1991; Goodfellow et al. 2016). In short, autoencoders are models which are fitted to approximate the identity function by preserving only the most relevant features of the data. They could be regarded as a composition of two fittable functions. The first function, the encoder, maps the input to a usually lower-dimensional \"latent\" space, effectively compressing the data. The result is mapped by the second function, the decoder, back to the original space. The fitting procedure ensures that autoencoder output is close to its input by minimizing some chosen loss function. The encoder-decoder architecture is robust to input noise and in principle may allow it to generalize to data not seen during the fitting procedure. In our case, we do not approximate an identity function but wish to reconstruct missing data. With this goal we change the autoencoder fitting procedure to use masks as described in the previous sections.
Encoders and decoders could be regarded as compositions of functions, so-called layers. Each layer is in principle a composition of a fittable affine mapping and an element-wise nonlinear function. In order to detect and utilize any spatial correlations inherent to our data, for the affine mapping we use 2D matrix convolutional operations (Krizhevsky et al., 2012; Schlemper et al., 2017). Figure 1 shows the architecture of our convolutional model. The domain of the first layer of the network is \\(\\mathbb{R}^{d_{\\mathrm{in}}\\times W\\times H}\\). So, on the input we have a total of \\(d_{\\mathrm{in}}\\) matrices of size \\(W\\times H\\) which may comprise e.g., temperature anomalies for a number of consecutive days and their masks. The output of this first layer is a tensor from space \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times W\\times H}\\), where \\(d_{\\mathrm{ch}}\\) is conventionally called the number of channels. Subsequent \\(N_{\\mathrm{outer}}\\) encoder layers map from \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times W\\times H}\\) to \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times W\\times H}\\) i.e., they do not change the spatial size or number of channels. However, the next \\(N_{\\mathrm{reduce}}\\) layers each reduce the spatial extents of data by a factor of 2 using convolutions of stride 2, more specifically every reducing layer \\(i=1,\\ldots,N_{\\mathrm{reduce}}\\) maps from \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times\\left(W/2^{i-1}\\right)\\times\\left(H/2^{i-1} \\right)}\\) to \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times\\left(W/2^{i}\\right)\\times\\left(H/2^{i} \\right)}\\). Finally, the last set of \\(N_{\\mathrm{inner}}\\) encoder layers preserves tensor dimensions and maps to latent space \\(\\mathbb{R}^{d_{\\mathrm{ch}}\\times\\left(W/2^{N_{\\mathrm{reduce}}}\\right)\\times \\left(H/2^{N_{\\mathrm{reduce}}}\\right)}\\). The decoder closely follows this architecture in reverse using transposed convolutions instead of convolutions and ends with an additional single layer which contains only a transposed convolution and outputs a so-called one-channel tensor \\(\\mathbb{R}^{W\\times H}\\) corresponding to the reconstructed temperature anomaly.
We can use a block of consecutive days as input by setting \(d_{\mathrm{in}}>1\). Inspecting several days of data to reconstruct a single day's temperature anomaly exploits short-term temporal correlations inherent to oceanographic data and teaches the model to also use rates of change instead of separate "snapshots" at a single point in time.

Figure 1: Architecture of the convolutional autoencoder model. At encoder input a tensor with \(d_{\mathrm{in}}\) channels is given containing several days worth of data (training data depicted, see text). The first block of \(N_{\mathrm{outer}}\) convolutional layers increases the number of channels to \(d_{\mathrm{ch}}\). Then, \(N_{\mathrm{reduce}}\) layers reduce the spatial extents of the tensor. Following those, \(N_{\mathrm{inner}}\) convolutional layers map the tensor into latent space. The decoder structure is approximately symmetric to the encoder. The model outputs a single-channel tensor as prediction.
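The following sketch assembles such an encoder-dropout-decoder stack in PyTorch. It is a simplified rendition of Figure 1 with our own bookkeeping of layer counts; padding and output padding are chosen so that stride-2 (transposed) convolutions exactly halve (double) the spatial extents.

```python
import torch.nn as nn

def conv_block(c_in, c_out, stride=1, transposed=False):
    """5x5 convolution with batch norm and SELU; stride 2 halves the spatial
    size (or doubles it for the transposed variant)."""
    if transposed:
        conv = nn.ConvTranspose2d(c_in, c_out, 5, stride=stride,
                                  padding=2, output_padding=stride - 1)
    else:
        conv = nn.Conv2d(c_in, c_out, 5, stride=stride, padding=2)
    return nn.Sequential(conv, nn.BatchNorm2d(c_out), nn.SELU())

def build_autoencoder(d_in, d_ch=64, n_outer=2, n_reduce=3, n_inner=2,
                      p_drop=0.1):
    """Encoder -> dropout -> decoder stack sketched after Figure 1; the final
    layer is a bare transposed convolution producing one output channel."""
    enc = [conv_block(d_in, d_ch)]
    enc += [conv_block(d_ch, d_ch) for _ in range(n_outer - 1)]
    enc += [conv_block(d_ch, d_ch, stride=2) for _ in range(n_reduce)]
    enc += [conv_block(d_ch, d_ch) for _ in range(n_inner)]
    dec = [conv_block(d_ch, d_ch, transposed=True) for _ in range(n_inner)]
    dec += [conv_block(d_ch, d_ch, stride=2, transposed=True)
            for _ in range(n_reduce)]
    dec += [conv_block(d_ch, d_ch, transposed=True) for _ in range(n_outer - 1)]
    dec += [nn.ConvTranspose2d(d_ch, 1, 5, padding=2)]
    return nn.Sequential(*enc, nn.Dropout(p_drop), *dec)

# Example: 3 days of anomalies plus their masks plus 2 positional channels
model = build_autoencoder(d_in=3 + 3 + 2)
```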
One typical advantage of convolutional neural networks, translational invariance, can become a hindrance when data might depend on absolute position in space. In our case, it may very well be that Red Sea temperature anomalies consistently differ depending on geographical location. Inspired by recent developments in natural language processing (Vaswani et al. 2017), we optionally concatenate two additional channels to the input of the first convolutional layer which provide information about absolute geographical latitude and longitude of each point as horizontal and vertical linear sweep of real numbers between \\(-1\\) and \\(1\\). In this way a model might find it advantageous to use this so-called positional encoding for greater selectivity and more precise predictions.
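A minimal sketch of such positional encoding, assuming batched input of shape (batch, channels, H, W):

```python
import torch

def add_positional_channels(x):
    """Concatenate two channels sweeping linearly from -1 to 1 along the
    height and width of a (batch, channels, H, W) tensor, giving the
    network access to absolute geographical position."""
    b, _, h, w = x.shape
    rows = torch.linspace(-1.0, 1.0, h, device=x.device)
    cols = torch.linspace(-1.0, 1.0, w, device=x.device)
    row_ch = rows.view(1, 1, h, 1).expand(b, 1, h, w)
    col_ch = cols.view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([x, row_ch, col_ch], dim=1)
```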
### Training protocol and implementation
Our model, training and evaluation code is available online at [https://github.com/BlackBox-EVA2019/BlackBox](https://github.com/BlackBox-EVA2019/BlackBox).
Training dataset and validation dataset. In order to train our models we separate the provided data \((x_{\text{orig}}^{(n)},m_{\text{orig}}^{(n)})\), on a per-day basis, into two datasets, named (as is customary in machine learning) the training and the validation dataset. The training dataset is used to train models. The validation dataset is used during model development to measure the generalization capability of models on unseen data. Finally, the test set, data withheld from contest participants, is not available to us and was not used for model training purposes or optimization of model architecture.
Only after the models are trained and evaluated do we calculate the \(\overline{\text{twCRPS}}\) score of our final predictions, using the observed distribution of extremal values which was provided by the contest organizers after the final ranking was determined.
Notice here the important difference between our naming scheme and the one in the Data Challenge problem statement (Huser 2020). In the problem statement, the whole available data \(x_{\text{orig}}^{(n)}\) where \(m_{\text{orig}}^{(n)}=1\) is called the training dataset, and the subset of the remaining data hidden from contestants, where \(m_{\text{orig}}^{(n)}=0\) but \(m_{\text{master}}=1\), is called the validation dataset.
Not all days in the Red Sea dataset have the same amount of non-missing data. The available time window spans a total of 31 years, but for the first 22 years 20% of data per day is missing, and for the last 9 years 60% of data per day is missing. Incidentally, the latter 9 years with more missing data also represent the period of accelerated climate change. The validation set has to be representative of all data, so we choose it as 5 continuous years where 22/31 parts are from the first 22 years and 9/31 parts are from the last 9 years, i.e., data from the 6736th to the 8560th day. The remaining data is used for the training set. In this way both our training and validation datasets consist of large contiguous blocks of time, have the same distribution of missing data percentage, and potentially both contain data from the period of accelerated climate change.
Generating additional masks. Regarding the model training process, as already stated in Section 3, we introduce further damage to the data for training input. A nontrivial question is how much additional data we should mask. We consider it the natural choice to mask the same percentage of data for the training process as would be masked during model inference when generating predictions of unknown test data. In the last 9 years of data, where we are tasked to generate predictions, 60% of data is masked. Therefore we need to create masks \(m^{\prime(n)}\) such that approximately 60% of data is removed for each day in the training and validation datasets. Notice that the total data loss after masking with \(m^{\prime(n)}\) is 68% for the first 22 years of data, and even 84% for the last 9 years!
A further issue is how to actually generate the masks \(m^{\prime(n)}\). Our limited experiments with training generative adversarial networks (Li et al. 2019) to create convincing masks did not produce the desired results. Therefore it is important to devise a method to create adequate masks \(m^{\prime(n)}\). The exact mechanism by which the \(m^{(n)}_{\text{orig}}\) were generated is unknown to us, except that \(m^{(n)}_{\text{orig}}\) changes only once every calendar month, so our generated masks need to do the same. We developed a stochastic diffusion algorithm that generates random masks similar in appearance to those provided by the problem statement; a schematic sketch is given below. The algorithm parameters needed to be manually tweaked until the generated masks \(m^{\prime(n)}\) appeared visually indistinguishable from \(m^{(n)}_{\text{orig}}\), which is certainly one obvious drawback of our method.
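The exact parameters of our mask generator are hand-tuned and omitted here; the sketch below only illustrates the qualitative idea of growing contiguous missing-data regions by a stochastic frontier ("diffusion") process until a target fraction of valid cells is masked.

```python
import numpy as np

def diffusion_mask(master_mask, target_missing=0.6, n_seeds=4, rng=None):
    """Schematic region-growing mask generator: grow contiguous missing-data
    blobs from random seeds until the target fraction of valid cells is
    masked out. Returns a boolean mask with True = observed, False = missing."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = master_mask.shape
    valid = np.argwhere(master_mask > 0)
    missing = np.zeros((h, w), dtype=bool)
    frontier = [tuple(valid[i]) for i in rng.choice(len(valid), n_seeds)]
    target = int(target_missing * len(valid))
    while missing.sum() < target and frontier:
        y, x = frontier.pop(rng.integers(len(frontier)))  # random frontier cell
        if missing[y, x]:
            continue
        missing[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and master_mask[ny, nx] \
                    and not missing[ny, nx]:
                frontier.append((ny, nx))
    return (master_mask > 0) & ~missing
```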
Noise imputation. The imputed noise is sampled from a Gaussian distribution closely mimicking the marginal distribution of all provided temperature anomaly measurements, parameterized by \(\mu=-0.0365\,^{\circ}\text{C}\) and \(\sigma=0.683\,^{\circ}\text{C}\).
Reducing the data footprint. The original anomaly data is spatially large and taxes the capacity of current GPU architectures, both in terms of memory use and computation time. We used two complementary approaches to successfully reduce the GPU footprint.
Using only a small number of padding rows and columns at the edges, the original temperature anomaly data fits into a spatial matrix of \(W\times H=256\times 384\). Notice that 256 is a power of 2 and 384 is 3 times a power of 2, which turns out to be essential for efficient computation with convolutions. However, note that the Red Sea is elongated and geographically lies in the NNW-SSE direction, so in our rectangular image representation most image elements correspond to land masses which carry no relevant data.
In order to increase the density of usable data we skew every other image row, together with the remaining rows below it, towards the west, proceeding from the top to the bottom of the image. In this way the whole Red Sea can fit in an image of \(W\times H=96\times 384\), while still preserving spatial coherence. At this point the data size is reduced but still leaves an unacceptably large GPU footprint.
Further, notice that our skew operation results in spatial extents of \(96\times 384\) which are divisible by 3. We can conveniently down-sample the data by taking the average anomaly value over \(3\times 3\) spatial cells, which brings the data size down by almost an order of magnitude, a quite substantial amount. We get a final anomaly matrix resolution of \(W\times H=32\times 128\). Both numbers are powers of 2, which is suitable for efficient GPU computations. Our models are both trained and inferred in this lower resolution.
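A schematic sketch of the skew-and-downsample step follows. The exact shear offsets used in our code differ; here, for illustration, every second row accumulates a one-column shift to the west, land and out-of-domain cells are marked with NaN, and \(3\times 3\) blocks are averaged while ignoring NaNs.

```python
import numpy as np

def skew_and_downsample(x):
    """Shear the NNW-SSE oriented sea into a compact 96-column frame by
    shifting every other row (and all rows below it) one column west, then
    average-pool 3 x 3 cells down to 128 x 32. NaNs mark invalid cells."""
    h, w = x.shape                     # e.g. a 384 x 256 padded frame
    skewed = np.full((h, 96), np.nan)
    for r in range(h):
        shift = r // 2                 # cumulative shift every second row
        row = x[r, shift:shift + 96]
        skewed[r, :len(row)] = row
    # 3 x 3 block average ignoring NaNs (all-NaN blocks stay NaN)
    blocks = skewed.reshape(h // 3, 3, 96 // 3, 3)
    return np.nanmean(blocks, axis=(1, 3))
```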
To generate a prediction from a model trained on such data, after inference at reduced resolution we have to first upsample and then unskew the rows back to their original positions. Upsampling is done using bicubic interpolation. Special care is taken at the boundary of masks in order to avoid issues with fractional data availability and oscillation artifacts in the reconstruction of high resolution data near mask boundaries. The error introduced by resampling, measured by the \(L^{1}\) loss function (1) and (4) on the whole dataset, is 0.012, which is noticeably smaller than the mean value of the \(L^{1}\) validation loss obtained during model training, which is 0.020. This provides evidence that the error introduced by computation on downsampled data is less significant than the inherent error of our models.
Convolutional autoencoder hyperparameters. The described model is implemented in Python using the PyTorch library (Paszke et al., 2019). For the encoder part we used from 2 to 11 layers, so after the first layer, which changes the number of channels from \(d_{in}\) to \(d_{ch}\), we have 1 to 10 additional outer, reducing and inner layers. So \(1\leq N_{outer}+N_{reduce}+N_{inner}\leq 10\), where we prescribe \(N_{outer}\geq 1\) and \(N_{reduce}\leq 5\). Taking into account the decoder as explained in the previous sections, the total number of layers in a model is between 5 and 23. We include an additional so-called dropout layer between the encoder and decoder parts which reduces overfitting during training (Nitish et al., 2014). The dropout layer is a function having one hyperparameter \(p\in[0,1]\) called the dropout percentage. The dropout layer is an identity map from \(\mathbb{R}^{k}\) to itself which is multiplied element-wise by a random vector \(\mathbf{v}\in\{0,1\}^{k}\). Every coordinate in \(\mathbf{v}\) is sampled independently, being 1 with probability \(1-p\) and 0 with probability \(p\), every time the dropout layer is evaluated. In our case \(k=d_{\text{ch}}\times\left(W/2^{N_{reduce}}\right)\times\left(H/2^{N_{reduce}}\right)\).
For the number of channels we take \(d_{ch}=64\), and we fix the convolution kernel size at \(5\times 5\). In every layer except the last we use the SELU nonlinear function (Klambauer et al., 2017). Also, each convolutional layer except the last includes batch normalization, as we find it improves training convergence and generalization to unseen examples (Ioffe and Szegedy, 2015).
Depending on the number of layers, our model has approximately \(2.3\cdot 10^{5}\) to \(2.1\cdot 10^{6}\) parameters. The dimension of the latent space, however, is solely regulated by the number of reducing layers \(N_{reduce}\) and amounts to \(256\cdot 4^{5-N_{reduce}}\), which ranges from 256 to 262144. Notice that the downsampled spatial dimensionality of the Red Sea input data is around 1855, so the dimensionality of the resulting latent space can be smaller as well as larger than that of the downsampled input data. The number of temperature anomaly data points available for training is the daily data matrix size multiplied by the number of days and the average percentage of available data, which equals around \(1.4\cdot 10^{7}\). This is one to two orders of magnitude larger than the number of model parameters, so we are confident our models do not overfit.
Input dimensionality. Using multi-day input for training the model to predict the day in the middle of the input block allows the model to utilize the rate of change in time and learn correlations on short time scales. Our models are provided \(N_{\text{days}}=1,3,11\) consecutive days at input. We further find that providing masks as input in addition to anomaly data improves training. Additionally, positional encoding may be concatenated to the input, adding two channels.
Loss function and norm. Seeing as the marginal distribution of the provided real-world data is approximately Gaussian, it would be reasonable to use the \(L^{2}\) norm as the optimization goal in (1) because it should minimize the deviation of predictions with regard to actual values. Intriguingly, in our experiments we find almost no difference in the \(L^{1}\) validation loss when the \(L^{1}\) cost function was used as opposed to when the \(L^{2}\) cost function was optimized. We conservatively decided to use the \(L^{1}\) cost function to obtain possibly larger reconstruction errors but also capture a larger variance of extreme values.
Optimizer hyperparameters. Models are trained using the fast.ai library (Howard et al., 2018). For the optimizer we employ the Ranger algorithm (Wright et al., 2019) which stabilizes the start of the training process using the Rectified Adam (RAdam) optimizer (Liu et al., 2020). Additionally, to stabilize the rest of the training, parameter lookahead avoids overshooting good local minima in parameter space (Zhang et al., 2019). A flat-cosine one-cycle policy for learning rate and weight decay ensures convergence to a broad optimum which allows the trained model to generalize well (Smith, 2018). The following RAdam hyperparameters in particular influence convergence and generalizability of the trained model: maximum learning rate, weight decay factor, exponential decay rates of the first and second moments, number of epochs per training and training batch size. We fixed the maximum learning rate to 0.003, the number of epochs to 50, and the RAdam exponential decay rates to \((0.95,0.999)\). Additionally, we regard the dropout percentage as an optimizer hyperparameter.
To suitably tune the weight decay, batch size and dropout hyperparameters, we use the following grid search algorithm (sketched in code below). In order to avoid over- or under-fitting, for each set of hyperparameters a model is trained through several iterations of 50 epochs until we achieve a satisfactory ratio between validation and training losses between 1 and 1.05. We start the first iteration of model training with dropout percentage 0, weight decay 0.3 and batch size 32. In each iteration, we either decrease or increase regularization depending on whether the ratio of validation and training loss is less than 1 or greater than 1.05. To decrease regularization we decrease dropout percentage, weight decay, or batch size. Notice that since we use batch normalization in each convolutional layer, slightly decreasing batch size seems to actually decrease and not increase regularization, contrary to expectation. To increase regularization we increase dropout percentage or weight decay. If in one iteration step we reach a ratio of validation and training loss less than 1, and in the next step it is greater than 1.05, or vice versa, for the following iteration we use a weighted linear interpolation of the dropout percentage and weight decay hyperparameters from the previous two iterations. At the end of this algorithm we select the model iteration with the lowest validation loss, using the missing data mask \(m_{\text{orig}}^{(n)}-m^{\prime(n)}\) as a relevant metric for the quality of reconstruction. We consider such a model to be well-trained.
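In pseudocode form, the search loop reduces to the following sketch, where `train_fn` stands in for one 50-epoch fast.ai training run and the update rules are simplified relative to the interpolation step described above:

```python
def tune_and_train(train_fn, p_drop=0.0, weight_decay=0.3, batch_size=32,
                   max_iters=15):
    """Schematic adaptive search: retrain in 50-epoch rounds, nudging the
    regularization until 1 <= val_loss / train_loss <= 1.05, then keep the
    iteration with the lowest validation loss ("well-trained" model)."""
    best = None
    for _ in range(max_iters):
        model, train_loss, val_loss = train_fn(p_drop, weight_decay, batch_size)
        if best is None or val_loss < best[1]:
            best = (model, val_loss)
        ratio = val_loss / train_loss
        if 1.0 <= ratio <= 1.05:
            break
        if ratio < 1.0:   # under-fitting the training set: relax regularization
            p_drop, weight_decay = 0.5 * p_drop, 0.5 * weight_decay
        else:             # over-fitting: strengthen regularization
            p_drop = min(0.5, p_drop + 0.1)
            weight_decay *= 2.0
    return best[0]
```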
Prediction ensembling. One of the most important aspects of our method is the independent training of an ensemble of models with different convolutional autoencoder hyperparameters. A number of predictions for the missing temperature anomaly values are inferred from each of the well-trained models. Afterwards, all of those predictions are ensembled to calculate the empirical distribution of the desired temperature anomaly extremes.
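Schematically, the ensembling step pools extremes over stochastic reconstructions; `sample_fn` and `region_index` below are placeholders for our noise-imputed model evaluation and for the index set of one prediction cylinder:

```python
import numpy as np

def ensemble_extremes(models, sample_fn, region_index, n_samples=20):
    """Pool stochastic reconstructions from every trained model and collect
    the space-time maximum over one prediction cylinder; the pooled values
    define the empirical distribution F_hat scored by twCRPS."""
    extremes = []
    for model in models:
        for _ in range(n_samples):             # fresh imputed noise each draw
            reconstruction = sample_fn(model)  # full repaired anomaly field
            extremes.append(reconstruction[region_index].max())
    return np.sort(np.array(extremes))         # empirical distribution
```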
## 4 Results and Discussion
For the solution of the Data Challenge problem we ensemble predictions made by a set of 155 models with different combinations of convolutional autoencoder hyperparameters that satisfy \(1\leq N_{outer}+N_{reduce}+N_{inner}\leq 10\), with \(N_{outer}\geq 1\), \(0\leq N_{reduce}\leq 5\) and \(N_{inner}\geq 0\). Each model was trained independently using the described grid search to select optimizer hyperparameters that produce well-trained models. The slowest observed training took 13 iterations. On average it took 3.27 iterations to reach a well-trained model, for a cumulative total of 507 trained models. For only one out of the 155 final models the algorithm failed to achieve the targeted validation and training loss ratio, which ended at 1.09.
Measured on a quad-core computer system with Nvidia RTX 2070 8 GB RAM GPU, the worst-case training time per model and per one iteration of hyperparameter grid search was around 1 hour and 20 minutes. The average training time was around 43 minutes. Cumulative training time for the full ensemble of 155 well-trained models was approximately 15 days.
For each of the 155 trained models, 20 full historical predictions were inferred, giving a total of 3100 complete spatio-temporal reconstructions of Red Sea temperature anomalies. These were used in place of missing data to calculate the empirical distribution of temperature anomaly extremes over space and time at the locations specified by the Data Challenge problem. Timed on the above equipment, this operation took about 20 minutes per model and finished in about 2 days for all 155 models.
We made altogether seven different runs of 155-model ensemble training and inference, while varying \(N_{\text{days}}=1,3,11\) and training with or without positional encoding. This alone accounts for six different combinations of hyperparameters, while the seventh run again used \(N_{\text{days}}=3\) with positional encoding. The best \(\overline{\text{twCRPS}}\) score achieved was \(3.581\cdot 10^{-4}\), for \(N_{\text{days}}=3\) and without using positional encoding.
By contrast, in our original second place solution to the Extreme Value Analysis 2019 Data Challenge we also used an ensemble of predictions, but it comprised only 8 models. These models were all trained and evaluated with similar convolutional autoencoder hyperparameters but on full-resolution data without resampling, using single-day input and no positional encoding. They were trained with the Adam optimizer (Kingma and Ba 2017) without hyperparameter tuning, and at that time we were using both \(L^{1}\) and \(L^{2}\) norms for model training. Then we reached a \(\overline{\text{twCRPS}}\) score of \(4.667\cdot 10^{-4}\), so our current result is a significant improvement.
Although every run is expensive in computer time, we decided to separately train two ensembles of models with the same model hyperparameters to assess the variability in score introduced by the stochasticity of our model training procedure. In the end we calculate the \(\overline{\text{twCRPS}}\) score for every ensemble and present the results in Table 1 and Figure 2. Looking at this data, we could hypothesize that positional encoding makes the \(\overline{\text{twCRPS}}\) score worse. However, it is important to notice the relatively large variability in score between runs 4 and 5, which have the same model hyperparameters. Unfortunately, it seems that the difference we see between different runs can largely be attributed to stochasticity in the model training procedure.
We are also interested in what happens if we try to take a smaller ensemble of predictions. The ensemble produced by 155 models takes quite a long time to train and evaluate so it would be of benefit if a smaller ensemble had comparable performance regarding the \\(\\overline{\\text{twCRPS}}\\) score.
\\begin{table}
\\begin{tabular}{|l|l||l|l|} \\hline run no. & \\(N_{\\text{days}}\\) & pos. enc. & \\(\\overline{\\text{twCRPS}}\\) /\\(10^{-4}\\) \\\\ \\hline
1 & 1 & No & 3.604 \\\\ \\hline
2 & 1 & Yes & 3.782 \\\\ \\hline
3 & 3 & No & 3.581 \\\\ \\hline
4 (first) & 3 & Yes & 3.618 \\\\ \\hline
5 (second) & 3 & Yes & 3.728 \\\\ \\hline
6 & 11 & No & 3.603 \\\\ \\hline
7 & 11 & Yes & 3.681 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: \\(\\overline{\\text{twCRPS}}\\) scores computed for seven training runs of 155 model ensembles using different value combinations of \\(N_{\\text{days}}\\) and positional encoding hyperparameters. Number of channels \\(d_{ch}=64\\) in every ensemble. Notice the variation of score for runs 4 and 5 which share the same values for \\(N_{\\text{days}}\\) and positional encoding (see text).
First, let us consider the trivial ensemble created by a single model. We take the model with the lowest validation loss out of all 155 models when evaluated on additionally damaged data \(m^{(n)}_{\text{orig}}-m^{\prime(n)}\), i.e., the best model from the ensemble in run no. 4 in Table 1, and sample from it the full 3100 data reconstructions, the same number of samples as in the previously discussed large ensembles. We get a modest \(\overline{\text{twCRPS}}\) score of \(4.804\cdot 10^{-4}\), which means that our large ensemble indeed improves the prediction quality over a single best-performing model. Notice that taking 3100 samples from a single model does not significantly change the score when compared to only 20 samples (\(4.802\cdot 10^{-4}\)).
Next, let us consider a family of ensembles, each a sub-ensemble consisting of the first \(M\) out of 155 well-trained models, ordered by ascending validation loss evaluated on additionally damaged data \(m^{(n)}_{\text{orig}}-m^{\prime(n)}\). For \(M=1\) we get the trivial ensemble already considered, and for \(M=155\) the full ensemble. For comparison, we take a couple of randomized model orders of run no. 4 and produce the same sub-ensembles by taking only the first \(M\) out of 155 models, see Figure 3. Randomly picking the sub-ensemble wins over ordered picking, but we still have to put quite a large number of models into the ensemble to get close to the \(\overline{\text{twCRPS}}\) score achieved using all 155 models. A minimal sketch of this sub-ensemble construction follows.
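The sketch below assumes that per-model prediction samples are held in memory and that a `twcrps` scoring callable is available; both names are placeholders, not part of our actual pipeline.

```python
import numpy as np

def subensemble_scores(predictions, val_losses, twcrps, rng=None):
    """Score sub-ensembles built from the first M of the ordered models.

    predictions : list of per-model prediction sample arrays (length 155)
    val_losses  : validation losses evaluated on additionally damaged data
    twcrps      : callable mapping a pooled list of predictions to a score
    rng         : optional numpy Generator; if given, the order is randomized
    """
    order = np.argsort(val_losses)            # ascending validation loss
    if rng is not None:
        order = rng.permutation(len(predictions))
    return [twcrps([predictions[i] for i in order[:M]])
            for M in range(1, len(predictions) + 1)]
```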
Alternatively, in Figure 2 we consider taking only the first \(M\) out of 155 models for each of the seven trained model ensembles. Ordering of the models is again produced using an ascending sort by validation loss evaluated on additionally damaged data.

Figure 2: Graph of the \(\overline{\text{twCRPS}}\) score of a sub-ensemble prediction depending on the number of models used to form the sub-ensemble. The seven ensembles from Table 1 are used. The models are sorted ascending by validation loss evaluated on additionally damaged data. The first \(M\) models are chosen for each sub-ensemble.
The grid search algorithm that tunes optimizer hyperparameters clearly helps the quality of prediction. To illustrate this, we took the 155 models of run no. 4 and trained each once for 50 epochs using fixed optimizer hyperparameters, without any additional tuning. Many of these models were over- or under-trained. We calculated the \(\overline{\text{twCRPS}}\) score to be \(3.970\cdot 10^{-4}\), meaning that proper hyperparameter selection and well-trained models improved the score by about 10%.
Training a large ensemble of models is prohibitively expensive in computation time and resources. It is therefore difficult to investigate the impact of each decision in selecting individual model hyperparameters. It is far from clear which hyperparameters (number of layers, kernel size, number of channels, number of days at input) had the greatest impact on the improvement of our prediction. A detailed ablation study with a full-blown set of models may well require months or years to train and evaluate on our currently available hardware.
Figure 3 indicates that a randomly chosen ensemble of only about 25 out of the 155 current models could prove sufficient for an ablation study. We have made first attempts in that direction by selecting 5 different models as templates and varying \(d_{ch}\in\{64,128,256\}\) and \(N_{days}\in\{1,3,11\}\), as well as whether positional encoding is used. This resulted in 90 well-trained models partitioned into 18 different small ensembles.

Figure 3: Graph of the \(\overline{\text{twCRPS}}\) score of a sub-ensemble prediction depending on the number of models used to form the sub-ensemble. The ensemble was trained using \(N_{\text{days}}=3\) and positional encoding. The models are either sorted ascending by validation loss evaluated on additionally damaged data or ordered randomly. The first \(M\) models are chosen for each sub-ensemble.

Despite each small ensemble containing only 5 different models, there are preliminary indications that positional encoding is very helpful and that a large block of \(N_{days}=11\) at the input performs better than \(N_{days}\in\{1,3\}\), although this seems to hold only when \(d_{ch}\in\{128,256\}\). We hypothesize that the relatively small number of channels, \(d_{ch}=64\), is the main limiting factor for models with large \(N_{days}\) and positional encoding, which otherwise might be able to show their strengths and significantly decrease the score, see Table 1. We also hypothesize that the other most probable suspect hampering further reduction in score is the spatial down-sampling of the data: in our case it reduces the amount of available training data by a factor of 9 and introduces additional prediction error in the up-sampling step. Unfortunately, either increasing \(d_{ch}\) or working with full-resolution models is extremely expensive, not just in computation time but also in GPU memory, which makes it unfeasible on our currently available computer system.
The presented technique relies on a suitable choice of masks that describe additional damage. In domains where such data loss is easily generated, such as tabular data or low-dimensional time series, our technique could prove to be useful. For image-like datasets found in medicine, geology, climatology, etc., an extensive study is needed to assess the influence of added damage and the distribution of imputed noise on the quality of recovered data.
We also note that there are other viable model architectures that ought to be explored. For instance, in this work we use the simplest convolutional layers for our autoencoder. Better generalization might instead be obtained using ResNet (He et al., 2015) or U-Net architectures, which use skip connections as high-resolution pathways between distant layers (Ronneberger et al., 2015). In particular, U-Net places skip connections between corresponding encoder and decoder layers to preserve fine detail. Even though it was originally developed for medical image segmentation, U-Net might prove a good fit for regression problems such as ours; it is, for example, used to model MisGAN's generators (Li et al., 2019). Going further, possibly the largest source of untapped information in our dataset lies in long-term temporal correlations, which we currently underutilize. A more extensive study of time-domain information is needed. Here, the image latent space could be used as input to dedicated recurrent neural networks, or even to the novel attention-based models currently explored by the natural language processing community (Vaswani et al., 2017).
## 5 Concluding remarks
In this work we present a solution to the Extreme Value Analysis 2019 Data Challenge. A technique is described to recover missing data by training an ensemble of models on additional data damage we introduce ourselves. Sampling from autoencoder-like approximations of observed data distributions provides a feasible way to analyze the complex dynamics of geophysical phenomena. The described approach seems amenable to application in other areas of basic and applied research involving rare and extreme events, and may complement existing extreme value theory techniques.
## Acknowledgements
We thank Ivan Balog for enlightening discussions.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* Asadi and Regan (2019) Asadi R, Regan A (2019) A convolution recurrent autoencoder for spatio-temporal missing data imputation. URL [http://arxiv.org/abs/1904.12413v1](http://arxiv.org/abs/1904.12413v1);[http://arxiv.org/pdf/1904.12413v1](http://arxiv.org/pdf/1904.12413v1), 1904.12413v1
* Behnke (2003) Behnke S (2003) Hierarchical Neural Networks for Image Interpretation, Lecture Notes in Computer Science, vol 2766. Springer
* Cao et al. (2018) Cao W, Wang D, Li J, Zhou H, Li L, Li Y (2018) BRITS: Bidirectional recurrent imputation for time series. Advances in Neural Information Processing Systems 31 pp 6775-6785, URL [http://papers.nips.cc/paper/7911-brits-bidirectional-recurrent-imputation-for-time-series](http://papers.nips.cc/paper/7911-brits-bidirectional-recurrent-imputation-for-time-series)
* Che et al. (2018) Che Z, Purushotham S, Cho K, Sontag D, Liu Y (2018) Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8(1):6085-, DOI 10.1038/s41598-018-24271-9
* Davison and Huser (2015) Davison A, Huser R (2015) Statistics of extremes. Annual Review of Statistics and Its Application 2(1):203-235, DOI 10.1146/annurev-statistics-010814-020133, URL [https://doi.org/10.1146/annurev-statistics-010814-020133](https://doi.org/10.1146/annurev-statistics-010814-020133), [https://doi.org/10.1146/annurev-statistics-010814-020133](https://doi.org/10.1146/annurev-statistics-010814-020133)
* Davison et al. (2012) Davison AC, Padoan SA, Ribatet M (2012) Statistical modeling of spatial extremes. Statist Sci 27(2):161-186, DOI 10.1214/11-STS376, URL [https://doi.org/10.1214/11-STS376](https://doi.org/10.1214/11-STS376)
* Davison et al. (2019) Davison AC, Huser R, Thibaud E (2019) Spatial extremes. In: Gelfand AE, Fuentes M, Smith RL (eds) Handbook of Environmental and Ecological Statistics, CRC Press, pp 711-744, DOI 10.1201/9781315152509, URL [https://doi.org/10.1201/9781315152509](https://doi.org/10.1201/9781315152509)
* Goodfellow et al. (2016) Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. Adaptive computation and machine learning, MIT Press, URL [http://www.deeplearningbook.org](http://www.deeplearningbook.org)
* Goodfellow et al. (2014) Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial networks. URL [http://arxiv.org/abs/1406.2661](http://arxiv.org/abs/1406.2661), cite arxiv:1406.2661
* He et al. (2015) He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition. URL [http://arxiv.org/abs/1512.03385](http://arxiv.org/abs/1512.03385), cite arxiv:1512.03385
* Howard et al. (2018) Howard J, et al. (2018) fastai. [https://github.com/fastai/fastai](https://github.com/fastai/fastai)
* Huser (2020) Huser R (2020) Editorial: EVA 2019 data competition on spatio-temporal prediction of Red Sea surface temperature extremes. Extremes DOI 10.1007/s10687-019-00369-9, URL [https://doi.org/10.1007/s10687-019-00369-9](https://doi.org/10.1007/s10687-019-00369-9)
* Ioffe and Szegedy (2015) Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. URL [http://arxiv.org/abs/1502.03167](http://arxiv.org/abs/1502.03167), cite arxiv:1502.03167
* Kingma and Ba (2017) Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. URL [http://arxiv.org/abs/1412.6980v9](http://arxiv.org/abs/1412.6980v9);[http://arxiv.org/pdf/1412.6980v9](http://arxiv.org/pdf/1412.6980v9), 1412.6980v9
* Klambauer et al. (2017) Klambauer G, Unterthiner T, Mayr A, Hochreiter S (2017) Self-normalizing neural networks. In: Advances in neural information processing systems, pp 971-980
* Kramer (1991) Kramer M (1991) Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal 37:233-243
* Krizhevsky et al. (2012) Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25:1106-1114, URL [https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
* Le et al. (2017) Le T, Nguyen TD, Phung D (2017) Kgan: How to break the minimax game in gan. arXiv preprint arXiv:171101744
* Lee and Park (2015) Lee JW, Park SC (2015) Artificial neural network-based data recovery system for the time series of tide stations. Journal of Coastal Research 32(1):213-224, DOI 10.2112/JCOASTRES-D-14-00233.1, URL [https://doi.org/10.2112/JCOASTRES-D-14-00233.1](https://doi.org/10.2112/JCOASTRES-D-14-00233.1), [https://meridian.allenpress.com/jcr/article-pdf/32/1/213/1648085/jcoastres-d-14-00233_1.pdf](https://meridian.allenpress.com/jcr/article-pdf/32/1/213/1648085/jcoastres-d-14-00233_1.pdf)
* Li et al. (2019) Li SC, Jiang B, Marlin BM (2019) MisGAN: Learning from incomplete data with generative adversarial networks. CoRR abs/1902.09599, URL [http://arxiv.org/abs/1902.09599](http://arxiv.org/abs/1902.09599), 1902.09599
* Liu et al. (2020) Liu L, Jiang H, He P, Chen W, Liu X, Gao J, Han J (2020) On the variance of the adaptive learning rate and beyond. URL [http://arxiv.org/abs/1908.03265v3](http://arxiv.org/abs/1908.03265v3);[http://arxiv.org/pdf/1908.03265v3](http://arxiv.org/pdf/1908.03265v3), 1908.03265v3
* Srivastava et al. (2014) Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(56):1929-1958, URL [http://jmlr.org/papers/v15/srivastava14a.html](http://jmlr.org/papers/v15/srivastava14a.html)
* Paszke et al. (2019) Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S (2019) PyTorch: An imperative style, high-performance deep learning library. In: Wallach HM, Larochelle H, Beygelzimer A, d'Alche-Buc F, Fox EB, Garnett R (eds) Advances in Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pp 8024-8035, URL [http://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library](http://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library).
* Ronneberger et al. (2015) Ronneberger O, Fischer P, Brox T (2015) U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, Springer International Publishing, Cham, pp 234-241, DOI 10.1007/978-3-319-24574-4_28
* Rossiev et al. (2002) Rossiev A, Makarenko N, Kuandykov Y, Dergachev V (2002) Recovering data gaps through neural network methods. International Journal of Geomagnetism and Aeronomy 3:191-197
* Schlemper et al. (2017) Schlemper J, Caballero J, Hajnal JV, Price A, Rueckert D (2017) A deep cascade of convolutional neural networks for MR image reconstruction. URL [http://arxiv.org/abs/1703.00555v1](http://arxiv.org/abs/1703.00555v1);[http://arxiv.org/pdf/1703.00555v1](http://arxiv.org/pdf/1703.00555v1), 1703.00555v1
* Smith (2018) Smith LN (2018) A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay. CoRR abs/1803.09820, URL [http://arxiv.org/abs/1803.09820](http://arxiv.org/abs/1803.09820), 1803.09820
* Vaswani et al. (2017) Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds) Advances in Neural Information Processing Systems 30, Curran Associates, Inc., pp 5998-6008, URL [https://papers.nips.cc/paper/7181-attention-is-all-you-need](https://papers.nips.cc/paper/7181-attention-is-all-you-need)
* Wei et al. (2018) Wei X, Gong B, Liu Z, Lu W, Wang L (2018) Improving the improved training of Wasserstein GANs: A consistency term and its dual effect. URL [http://arxiv.org/abs/1803.01541v1](http://arxiv.org/abs/1803.01541v1);[http://arxiv.org/pdf/1803.01541v1](http://arxiv.org/pdf/1803.01541v1), 1803.01541v1
* Wright et al. (2019) Wright L, et al. (2019) Ranger. [https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer](https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer)
* Zhang et al. (2019) Zhang MR, Lucas J, Hinton G, Ba J (2019) Lookahead optimizer: k steps forward, 1 step back. URL [http://arxiv.org/abs/1907.08610v2](http://arxiv.org/abs/1907.08610v2);[http://arxiv.org/pdf/1907.08610v2](http://arxiv.org/pdf/1907.08610v2), 1907.08610v2
* Zhang (1988) Zhang W (1988) Shift-invariant pattern recognition neural network and its optical architecture. In: Proceedings of Annual Conference of the Japan Society of Applied Physics
Wenjie Zhou
School of Physics, Nankai University, Tianjin 300071, China
Jinniu Hu
School of Physics, Nankai University, Tianjin 300071, China Shenzhen Research Institute of Nankai University, Shenzhen 518083, China
Ying Zhang
Shenzhen Research Institute of Nankai University, Shenzhen 518083, China
Hong Shen
Department of Physics, Faculty of Science, Tianjin University, Tianjin 300072, China
## 1 Introduction
Neutron stars, remnants of very massive stars at the end of their lifecycle, are one of the most compact objects in the universe, attracting a lot of attention within the fields of astrophysics and nuclear physics (Oertel et al., 2017). Rapid developments in space observation technologies and gravitation-wave detection have proven advantageous to the measurement of neutron star properties, such as mass, radius, and tidal deformability. Three massive neutron stars with masses of around \\(2M_{\\odot}\\), PSR J1614-2230 (Demorest et al., 2010; Fonseca et al., 2016; Arzoumanian et al., 2018), PSR J0348+0432 (Antoniadis et al., 2013), and PSR J0740+6620 (Cromartie et al., 2020) have been discovered in the past decade. The gravitational wave from a binary neutron star merger, the GW170817 event, was first detected by the LIGO and Virgo collaborations in 2017, providing a constraint on the tidal deformability of a neutron star at \\(1.4M_{\\odot}\\)(Abbott et al., 2017, 2018, 2019). Furthermore, simultaneous measurements of the mass and radius of PSR J0030+0451 and PSR J0740+6620 were recently analyzed by the Neutron Star Interior Composition Explorer (NICER) (Riley et al., 2019; Miller et al., 2019; Riley et al., 2021; Miller et al., 2021).
These studies have improved our knowledge of neutron stars, providing insight into their interior structure and components. A neutron star can be divided into the atmosphere, outer crust, inner crust, outer core, and inner core regions. Its properties are strongly dependent on the equation of state (EOS) of dense nuclear matter (Lattimer & Prakash, 2000; Glendenning, 2001; Weber, 2005; Lattimer & Prakash, 2007; Baym et al., 2018). In the core region, the density approaches 5-10 times the nuclear saturation density. Therefore, in this high-density region, the EOS plays an essential role in investigations, yet it cannot be well determined by current terrestrial methodologies.
Conventionally, the EOS of neutron stars can be extrapolated by nuclear many-body approaches, such as density functional theory (Ring, 1996; Bender et al., 2003; Meng et al., 2006; Stone & Reinhard, 2007; Niksic et al., 2011; Dutra et al., 2012, 2014) and _ab initio_ methods (Akmal et al., 1998; Van Dalen et al., 2004; Sammarruca, 2010; Sammarruca et al., 2012; Wei et al., 2019; Wang et al., 2020), which describe the ground-state properties of finite nuclei and nuclear saturation properties very well. However, there are large uncertainties when these methods are extended to calculate the high-density EOS, and they generate many different kinds of neutron star mass-radius relations. Furthermore, the isospin dependence of the EOS, i.e., the symmetry energy effect, is strongly correlated with the radii of low-mass neutron stars (Li et al., 2008; Bao et al., 2014; Bao and Shen, 2015; Sun, 2016; Ji et al., 2019; Li et al., 2019; Hu et al., 2020). With present observations of neutron stars, a smaller slope of the symmetry energy, \(L\), is preferred. Meanwhile, several exotic hadronic degrees of freedom and/or hadron-quark transitions may appear in the core region of a neutron star because of the phase diagram of the strong interaction (Yang and Shen, 2008; Xu et al., 2010; Chen et al., 2013; Orsaria et al., 2014; Wu and Shen, 2017; Ju et al., 2021; Huang et al., 2022). Hence, it is very difficult to generate a unified EOS in a self-consistent theoretical framework.
Recently, the data-driven methodologies, such as Bayesian inference (Ozel et al., 2010; Raithel et al., 2017; Steiner et al., 2010; Alvarez-Castillo et al., 2016; Miao et al., 2021), deep neural network (DNN) (Fujimoto et al., 2018, 2020, 2021; Farrell et al., 2022; Ferreira et al., 2022), nonparametric EOS representation (Landry and Essick, 2019; Essick et al., 2020, 2020), support vector machines (Murarka et al., 2022; Ferreira and Providencia, 2021), and so on, have been introduced to generate the possible EOSs using the latest observables of neutron stars. In Bayesian inference, the EOS is parameterized and the corresponding parameters are obtained with a marginal likelihood estimation on the posterior probability in terms of model parameters (Ozel et al., 2010). Fujimoto et al. proposed a scheme that can map the finite observation data of neutron stars onto the EOS with a feed-forward DNN. They present the EOS as a polytropic function with different speeds of sound at distinct density segments (Fujimoto et al., 2018). To avoid the limitations of a parametric EOS, Landry _et al._ developed a nonparametric method to generate the EOS from the observables of gravitation waves by combining the Gaussian process and Bayesian inference methods, where the EOS of the neutron star is represented by the Gaussian process with finite points (Landry and Essick, 2019). The matching between the EOS and neutron star observations was carried out by the Bayesian inference. Han et al. also reconstructed the EOS of a neutron star using another Bayesian nonparametric inference method where the EOS was produced by the neural network with a sigmoid type as the activation function (Han et al., 2021).
In this work, a new machine learning methodology is proposed to reconstruct a nonparametric model for the EOSs of neutron stars based on the scheme proposed by Fujimoto et al., where the complete EOS is generated by the Gaussian process regression method with finite data points about the pressure-energy relation. A DNN is trained with the constraints of neutron star mass-radius relations, the masses of the heavy neutron stars, and the measurements of NICER. In Section 2, the framework of the Gaussian process regression method and the construction of the DNN is given in detail. The nonparametric EOS model of neutron stars generated by the DNN is shown in Section 3. A summary is presented in Section 4.
## 2 Gaussian Process Regression and Neural Network
### Gaussian Process Regression
The Gaussian process (GP) (Huang et al., 2022; Williams and Rasmussen, 2006) is a stochastic process, i.e., a collection of random variables indexed by a set, any finite subset of which follows a multivariate normal distribution. If the set of random variables \(\{f(x):x\in\chi\}\) is taken from a GP with mean function \(m(x)\) and covariance function \(k(x_{1},x_{2})\), the corresponding random variables \(f(x_{i})\) satisfy the multivariate Gaussian distribution for any finite set \([x_{1},\cdots,x_{m}]\in\chi\),
\\[\\left[\\begin{array}{c}f(x_{1})\\\\ \\vdots\\\\ f(x_{m})\\end{array}\\right]\\sim\\ \\ \\mathcal{N}\\left(\\left[\\begin{array}{c}m(x_{1})\\\\ \\vdots\\\\ m(x_{m})\\end{array}\\right],\\left[\\begin{array}{ccc}k(x_{1},x_{1})&\\cdots&k( x_{1},x_{m})\\\\ \\vdots&\\ddots&\\vdots\\\\ k(x_{m},x_{1})&\\cdots&k(x_{m},x_{m})\\end{array}\\right]\\right), \\tag{1}\\]
which can be simply expressed as
\\[f(\\cdot)\\sim GP(m(\\cdot),k(\\cdot,\\cdot)). \\tag{2}\\]
All linear combinations of the random variables in a GP obey normal distributions. For each finite-dimensional subset, the probability density function over the continuous index set is the Gaussian measure of the corresponding random variables. Therefore, the GP can be regarded as a generalization of the multivariate Gaussian distribution to infinite-dimensional index sets.
Hence, the GP can be applied to solve a normal regression problem,
\\[y^{(i)}=f(x^{(i)})+\\epsilon^{(i)}, \\tag{3}\\]
where \\(X\\) is defined as the training set and its components \\((x^{(1)}, ,x^{(m)})\\) are independently and identically distributed with unknown distribution. \\(\\epsilon^{(i)}\\) is an independent noise variable, which is also given by a normal distribution with variance \\(\\sigma^{2}\\), \\(N(0,\\sigma^{2})\\). This scheme is called the Gaussian process regression (GPR) method. Usually, it is assumed that \\(f\\) follows the GP with a mean value of zero for notation simplicity,
\\[f(\\cdot)\\sim GP(0,k(\\cdot,\\cdot)). \\tag{4}\\]
The test set \\(X^{*}=(x^{(1*)}, ,x^{(m*)})\\), has the same independent co-distribution as \\(X\\), marked as \\(X\\to X^{*}\\). Therefore, the posterior distribution \\(p(y^{*}|X,\\;X^{*})\\) is predicted in GPR as the Gaussian distribution of the results, which is different from the general linear regression.
According to the properties of the GP, a joint distribution of the training and test sets is obtained,
\\[\\left[\\begin{array}{c}\\vec{f}\\\\ \\vec{f}^{*}\\end{array}\\right]\\Bigg{|}\\,X,X^{*}\\sim\\mathcal{N}\\left(\\vec{0}, \\left[\\begin{array}{cc}K(X,X)&K(X,X^{*})\\\\ K(X^{*},X)&K(X^{*},X^{*})\\end{array}\\right]\\right) \\tag{5}\\]
where the matrix elements \\(K(X^{A},X^{B})_{i,j}=k(x_{i}^{A},x_{j}^{B})\\). In the GP, the covariance function \\(k_{ij}\\) is also called the kernel function. The standard choice is the squared-exponential kernel,
\\[k_{se}(x_{1},x_{2})=\\sigma^{2}\\exp\\left(-\\frac{||x_{1}-x_{2}||^{2}}{2l^{2}} \\right). \\tag{6}\\]
Meanwhile, their noises obey similar distributions,
\\[\\left[\\begin{array}{c}\\vec{\\epsilon}\\\\ \\vec{\\epsilon^{*}}\\end{array}\\right]\\sim\\mathcal{N}\\left(\\vec{0},\\left[ \\begin{array}{cc}\\sigma_{wn}^{2}I&\\vec{0}\\\\ \\vec{0}^{T}&\\sigma_{wn}^{2}I\\end{array}\\right]\\right). \\tag{7}\\]
Here, \\(\\sigma_{wn}^{2}\\) is the hyper-parameter corresponding to white noise, which is different from the signal variance parameter, \\(\\sigma\\) in \\(k_{se}\\). The summation of two independent multivariate Gaussian variables is still a multivariate Gaussian variable,
\\[\\left[\\begin{array}{c}\\vec{y}\\\\ \\vec{y^{*}}\\end{array}\\right]\\Bigg{|}\\,X,X^{*}=\\left[\\begin{array}{c}\\vec{ \\epsilon}\\\\ \\vec{\\epsilon^{*}}\\end{array}\\right]+\\left[\\begin{array}{c}\\vec{\\epsilon}\\\\ \\vec{\\epsilon^{*}}\\end{array}\\right]\\sim \\tag{8}\\] \\[\\mathcal{N}\\left(\\vec{0},\\left[\\begin{array}{cc}K(X,X)+\\sigma_{wn }^{2}I&K(X,X^{*})\\\\ K(X^{*},X)&K(X^{*},X^{*})+\\sigma_{wn}^{2}I\\end{array}\\right]\\right)\\]
Based on the properties of multivariate Gaussian distribution, the conditional distribution over the unknown \\(y^{*}\\) is,
\\[y^{*}|y,\\;X,\\;X^{*}\\sim\\mathcal{N}\\left(\\mu^{*},\\Sigma^{*}\\right), \\tag{9}\\]
where,
\[\begin{split}\mu^{*}&=K(X^{*},X)\left(K(X,X)+\sigma_{wn}^{2}I\right)^{-1}\vec{y},\\ \Sigma^{*}&=K(X^{*},X^{*})-K(X^{*},X)\left(K(X,X)+\sigma_{wn}^{2}I\right)^{-1}K(X,X^{*}).\end{split} \tag{10}\]
\\(\\mu^{*}\\) and \\(\\Sigma^{*}\\) are the mean and covariance functions of the probability distribution for our prediction results, respectively. Therefore, given the hyper-parameters \\(\\sigma\\) and \\(l\\) in the kernel function, a probability distribution describing the whole test set by the GPR method can be obtained. In principle, the mean function should be selected as the \"actual data curve\". However, it is strongly dependent on the hyper-parameters, \\(\\sigma\\) and \\(l\\) that are determined by maximizing the marginal log-likelihood, defined as,
\\[\\log p(\\mathbf{y}|\\sigma,l) = \\log\\mathcal{N}(0,K_{yy}(\\sigma,l))\\] \\[= -\\frac{1}{2}\\mathbf{y}^{T}K_{yy}^{-1}\\mathbf{y}-\\frac{1}{2}\\log|K_{yy}|- \\frac{N}{2}\\log(2\\pi), \\tag{11}\\]
where \\(K_{yy}=K(X^{*},X^{*})\\). Therefore, with a small number of data points, a relatively reasonable EOS curve and its confidence range can be predicted in the framework of the GPR method.
The direct matching between the EOS of a neutron star, i.e., the pressure-energy relation, and the observables of a neutron star may generate nonphysical solutions, such as the speed of sound of neutron star matter being less than zero or larger than the speed of light, \\(c_{s}<0\\) or \\(c_{s}>c\\), or the energy density becoming less than zero in some extreme conditions.
Recently, a new intermediate variable \\(\\phi\\) was proposed to construct the EOS of a neutron star (Lindblom, 2010; Landry and Essick, 2019). \\(\\phi\\) is defined as,
\\[\\phi=\\textbf{log}\\left(c^{2}\\frac{d\\epsilon}{dp}-1\\right). \\tag{12}\\]
It avoids the aforementioned nonphysical behaviors: when \(\phi\in\mathbb{R}\), the speed of sound obeys \(0\leq c_{s}^{2}=dp/d\epsilon\leq c^{2}\), which automatically satisfies the physical requirements, and when \(p>0\), \(\epsilon>0\) is guaranteed. Because the pressure spans several orders of magnitude, \(\phi\) is regarded as a function of \(\log p\), which makes it easier to determine the hyper-parameters. Therefore, Eq. (12) can be expressed as,
\\[\\phi=\\textbf{log}\\left(\\partial\\textbf{log}\\epsilon\\frac{e^{\\textbf{log} \\epsilon}}{p}c^{2}-1\\right), \\tag{13}\\]
where \\(\\partial\\textbf{log}\\epsilon=\\left.\\frac{\\partial\\log\\epsilon}{\\partial\\log p }\\right|_{p=p_{i}}\\).
In the training set, \\(n\\) data points (\\(\\phi_{i},\\log p_{i}\\)) are randomly chosen. Once the optimal hyper-parameters are obtained by GPR, the continuum \\(\\phi-\\log p\\) curve can be generated. The corresponding EOS of the neutron star, \\(\\epsilon(p)\\) is provided by numerically integrating
\\[\\frac{\\partial\\epsilon}{\\partial p}=\\frac{1+e^{\\phi}}{c^{2}}. \\tag{14}\\]
### DNN method
In the available investigations of the structure of neutron stars, the EOS of neutron star matter was first calculated by either the nuclear many-body method or the parameterization function under the conditions of \\(\\beta\\)-equilibrium and charge neutrality. The EOS was then input to the Tolman-Oppenheimer-Volkoff (TOV) equation (Tolman, 1939; Oppenheimer and Volkoff, 1939), which describes a spherically symmetric and isotropic star in a static gravitational field with general relativity.
\\[\\begin{split}\\frac{dp}{dr}=&-\\frac{G\\epsilon(r)m(r )}{c^{2}r^{2}}\\left[1+\\frac{p(r)}{\\epsilon(r)}\\right]\\\\ &\\times\\left[1+\\frac{4\\pi r^{3}p(r)}{m(r)c^{2}}\\right]\\left[1- \\frac{2Gm(r)}{c^{2}r}\\right]^{-1}\\\\ \\frac{dm}{dr}=&\\frac{4\\pi r^{2}\\epsilon(r)}{c^{2}}, \\end{split} \\tag{15}\\]
where \\(r\\) is the radial coordinate, representing the distance to the center of the star. The functions \\(p(r)\\) and \\(\\epsilon(r)\\) are pressure and energy density (i.e., mass density), respectively. We can easily integrate these differential equations starting at \\(r=0\\), with the initial condition \\(p(r=0)=p_{c}\\). When it is integrated into the surface of the neutron star, i.e., the radius \\(R\\) and \\(p(r=R)=0\\), then \\(M=m(R)\\) corresponds to the total mass of the neutron star. Therefore, a continuum mass-radius (\\(M\\)-\\(R\\)) relation of a neutron star can be generated by the TOV equation.
A functional mapping between the EOS space and the \(M\)-\(R\) space is constructed through the above framework, in a process called "TOV mapping". In principle, such a mapping is invertible; thus, there should be a corresponding inverse mapping (Lindblom, 1992), in which the EOS can be uniquely reconstructed from the observed \(M\)-\(R\) relationship of the neutron star. In practice, however, the complete \(M\)-\(R\) curve cannot be directly obtained from the observed data due to the discontinuities and uncertainties inherent in neutron star observations (Fujimoto et al., 2021). Therefore, only a more likely EOS can be inferred from the neutron star observations with uncertainties.
The DNN is a powerful machine learning method to connect the EOS with observed data, following the idea of Fujimoto et al. (Fujimoto et al., 2021). A neural network (NN) is a parameterized representation of a function. Deep learning, i.e., machine learning using a DNN, is the process of optimizing the parameters contained in the function represented by the NN. Deep learning can be divided into supervised and unsupervised learning. The supervised learning that we adopt requires specific inputs and outputs, with which it completes the fitting process on the training data (i.e., regression).
Compared with general fitting methods, the advantage of deep learning lies in the generalization properties of NNs. It does not need to rely on any prior knowledge about the proper form of the fitting function. Due to the large number of neurons (and neuron layers) and fitting parameters, an NN with a sufficient number of neurons can approximate any continuous function (Cybenko, 1989; Hornik, 1991).
The model function of a feed-forward NN can be expressed as,

\[\mathbf{y}=f\left(\mathbf{x}\left|\left\{W^{(1)},b^{(1)},\cdots,W^{(l)},b^{(l)},\cdots,W^{(L)},b^{(L)}\right\}\right.\right), \tag{16}\]

where \(W^{(l)}\) and \(b^{(l)}\) are the weight matrix and bias vector of the \(l\)-th layer, and each layer applies a nonlinear activation function to an affine transformation of its input. The parameters are optimized by minimizing a loss function averaged over the training data set \(\mathcal{D}\),

\[\mathcal{L}(W^{(l)})=\frac{1}{|\mathcal{D}|}\sum_{n=1}^{|\mathcal{D}|}\ell\left(\mathbf{y}_{n},f\left(\mathbf{x}_{n}|\left\{W^{(l)},b^{(l)}\right\}_{l}\right)\right), \tag{17}\]

where \(\ell(\mathbf{y},\mathbf{y}^{\prime})\) measures the discrepancy between the prediction and the target. In practice, the gradient is evaluated on a randomly chosen mini-batch \(\mathcal{B}\) of \(\mathcal{D}\), and the approximate derivative is,
\\[\\frac{\\partial\\mathcal{L}(W^{(l)})}{\\partial W^{(l)}}\\approx\\frac{1}{|\\mathcal{B }|}\\sum_{n=1}^{|\\mathcal{B}|}\\frac{\\partial\\ell\\left(\\mathbf{y}_{n},f\\left(\\mathbf{x}_ {n}|\\left\\{W^{(l)},b^{(l)}\\right\\}_{l}\\right)\\right)}{\\partial W^{(l)}}, \\tag{18}\\]
where the batch size \(|\mathcal{B}|\) represents the number of sample points in \(\mathcal{B}\). Since the optimal choice varies from case to case, the associated error will be shown later as part of our estimate of the EOS confidence. An epoch denotes one scan of the entire training data set \(\mathcal{D}\); since the parameters are updated after each mini-batch, an epoch is equivalent to iterating over \(|\mathcal{D}|/|\mathcal{B}|\) mini-batches until all iterations are completed. In addition, the derivative \(\frac{\partial\ell}{\partial W}\) appearing in Eq. (18) is calculated by the back-propagation method.
In this training process, the mean square logarithmic error (msle) is regarded as the loss \\(\\ell(\\mathbf{y},\\mathbf{y}^{\\prime})\\) in Eq. (17),
\\[\\ell_{\\text{msle}}(\\mathbf{y},\\mathbf{y}^{\\prime})\\equiv|\\log\\mathbf{y}-\\log\\mathbf{y}^{\\prime }|^{2}. \\tag{19}\\]
With the loss function specified, our NN can begin basic training. The parameter initialization of the NN is discussed in detail below.
It is useful to compare our method with other approaches proposed to generate the EOS of the neutron star. In the present framework, the fitted EOSs are obtained by the DNN: the neutron star observation data form the input layer, while the constrained EOS forms the output layer. The training process combines the observational likelihoods with EOS priors generated by theoretical models. The EOSs in the priors and in the output layer are represented by several discretized points of the \(\phi\)-function, so as to satisfy the constraint on the speed of sound, and are smoothly connected by the GP. By contrast, the EOSs in the work of Fujimoto et al. were parameterized as polytropic functions that depend on the speeds of sound of neutron star matter, and the fitted EOSs in the work of Landry and Essick were produced by Bayesian inference with a set of nuclear-theoretic models.
## 3 The Numerical Details and Results
To prepare the training data set, the EOSs from relativistic mean-field (RMF) models were used to define the generation interval for the GPR fitting data points. Nine RMF parameterizations were selected: BigApple, DD2, DDLZ1, DDME1, DDME2, DDMEX, NL3, PKDD, and TW99 (Fattoyev et al., 2020; Typel et al., 2010; Wei et al., 2020; Niksic et al., 2002; Lalazissis et al., 2005; Taninah et al., 2020; Lalazissis et al., 1997; Long et al., 2004). All of these RMF parameter sets produce neutron stars whose maximum masses are larger than \(2.0M_{\odot}\) (Huang et al., 2020). The EOS from the NL3 set generates a maximum neutron star mass of around \(2.78M_{\odot}\).
The \\(\\epsilon\\)-\\(p\\) relation in the EOS was transferred into the \\(\\phi\\)-\\(\\ln p\\) function, where \\(\\ln p\\) is the natural logarithm of pressure. After calculating the means and variances of the \\(\\phi\\)-\\(\\ln p\\) relations from the above nine EOSs, it was found that their mean value is very close to the EOS from the DDME1 set (Niksic et al., 2002). To investigate the stability of initial values in the present framework, two schemes were adopted to generate the fitting interval with the GPR method:
1. _Scheme 1_ - After obtaining the mean and variance of \\(\\phi\\)-\\(\\ln p\\) functions from nine RMF parameter sets, the 95% confidence interval of the variance was selected as the generation range of \\(\\phi_{i}\\). As shown in panel (a) of Fig. 2, this interval encloses all EOSs from the RMF model.
2. _Scheme 2_ - The \\(\\phi\\)-\\(\\ln p\\) function provided by DDME1 set was regarded as the standard, and \\(\\phi\\pm 0.3\\phi\\) are chosen as the upper and lower bounds of the generation range of \\(\\phi_{i}\\). Such an interval is consistent with the one obtained by scheme 1, to a large extent.
In Fig. 3, the corresponding \(\epsilon\)-\(p\) relations of schemes 1 and 2 are compared to the model-informed and model-agnostic priors in the Bayesian inference method by Landry and Essick (Landry and Essick, 2019). The \(\epsilon\)-\(p\) relations from scheme 1 and scheme 2 in the present work are almost identical and are also consistent with the model-informed prior, since all of them are strictly constrained by the theoretical EOSs. On the contrary, the model-agnostic prior has a loose boundary. It may admit a wider range of plausible EOSs.

Figure 1: The NN flow chart of the present framework.
To produce an EOS of neutron stars (including the high-density region) with the GPR method and the aforementioned schemes, seven equally spaced pressure points \(\ln p_{i}\) (\(i=1,~{}2,~{}\cdots,~{}7\)) were selected in the range \(\ln p\in[1,7]\). At each \(\ln p_{i}\) point, \(\phi_{i}\) was randomly generated within the training interval to form an initial data set \((\phi_{i},~{}\ln p_{i})\). The EOS below nuclear saturation density was chosen as the one from the SLy4 set. A smooth and continuous \(\phi(\ln p)\) function was then fitted by the GPR method, where the hyper-parameters \(l\) and \(\sigma\) were obtained by maximizing the marginal log-likelihood, as shown in Eq. (11). Furthermore, the starting point, \(\phi_{1}=\phi(\ln p=1)\), was fixed to the value from the DDME1 parameter set.
The \\(M\\)-\\(R\\) relation of a neutron star can be calculated using the EOS from the GPR method by solving the TOV equation. In the present framework, the training data set of the DNN should assemble the points on the \\(M\\)-\\(R\\) curve, which correspond to the observables. The method proposed by Fujimoto et al. (Fujimoto et al., 2021) is used in this work to generate training data. Firstly, the maximum masses of neutron stars less than \\(2.2M_{\\odot}\\) and the \\(M\\)-\\(R\\) relations that did not satisfy the radii constraints of PSR J0740+6620 and PSR J0030+0451 (Miller et al., 2019, 2021) were excluded from the training data. Then, 14 points in the mass regions, \\([M_{\\odot},M_{max}]\\) on the \\(M\\)-\\(R\\) curve were randomly chosen as \"the original data points\" \\((M_{i},R_{i})\\) to simulate the real observations of the 14 available neutron stars. To consider the errors in the observations, the variances of the Gaussian distributions about the mass and radius, \\(\\sigma_{M_{i}}\\) and \\(\\sigma_{R_{i}}\\), were randomly taken from the uniform distribution in the ranges, \\([0,M_{\\odot}]\\) and \\([0,5\\mathrm{km}]\\). The deviations of mass and radius \\((\\Delta M_{i},\\Delta R_{i})\\) were calculated by the Gaussian distribution with the variances of \\(\\sigma_{M_{i}}\\) and \\(\\sigma_{R_{i}}\\). Finally the \"real data point\" \\((M_{i}+\\Delta M_{i},R_{i}+\\Delta R_{i})\\) was obtained. The set \\((M_{i}+\\Delta M_{i},~{}R_{i}+\\Delta R_{i},~{}\\sigma_{M_{i}},~{}\\sigma_{R_{i}})\\) can be compared to the observational data of neutron stars.
A group of \\(i=14\\) data points \\((M_{i},~{}R_{i})\\) was selected from the \\(M\\)-\\(R\\) curve generated by each EOS, and \\(j=100\\) groups of different variances \\((\\sigma_{M_{ij}},\\sigma_{R_{ij}})\\) were randomly sampled for each \\(M_{i}\\)-\\(R_{i}\\) data point. Later, \\(k=100\\) groups of deviations, \\(\\Delta M_{ijk}\\) and \\(\\Delta R_{ijk}\\) were provided by each variance set, \\((\\sigma_{M_{ij}},\\sigma_{R_{ij}})\\). In this way, \\(100\\times 100\\) sets of data for each EOS were prepared and 14 data points were sampled. The above process was repeated by 500 times to include as wide a range as possible, resulting in \\(500\\times 100\\times 100=5,000,000\\) sets, where one set includes 14 data points.
Figure 3: The corresponding \\(\\epsilon-p\\) relations of scheme 1 and 2 in Figure 2 and the model-informed and model-agnostic priors in the Bayesian inference method by Landry and Essick (Landry and Essick, 2019).
Figure 2: The generation range of \(\phi\)-\(\ln p\). Points are randomly selected within this range and the GPR method is then used to generate the EOS. In panel (a), the nine EOSs are used to obtain the mean \(\mu\) and variance \(\sigma\), whose 95% confidence interval is taken as the fitting range. In panel (b), the generation range is based on the DDME1 curve, with a fluctuation of 0.3.
For the architecture of the NN, the Python library Keras (Chollet et al., 2015) was employed, with TensorFlow (Abadi et al., 2016) as the backend. The number of NN layers, their corresponding neurons, and the activation functions are shown in Table 1. The hyperbolic tangent function of the output layer confines the results to \((-1,1)\), speeding up the training. The msle is chosen as the loss function, as given in Eq. (19). The optimization method was Adam (Kingma and Ba, 2014), with the batch size taken as 1000. The default NN initialization was the Glorot uniform distribution (Glorot and Bengio, 2010). A sketch of this setup follows.
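The following minimal Keras sketch mirrors Table 1 and the stated training settings; the flattening of each observation set \((M_{i}+\Delta M_{i},R_{i}+\Delta R_{i},\sigma_{M_{i}},\sigma_{R_{i}})\) into 56 inputs and the rescaling of the six \(\phi\) outputs into \((-1,1)\) are assumed preprocessing steps, not shown here.

```python
from tensorflow import keras

# 14 stars x (M, R, sigma_M, sigma_R) = 56 inputs; 6 outputs = the free phi points
model = keras.Sequential([
    keras.Input(shape=(56,)),
    keras.layers.Dense(60, activation="relu"),
    keras.layers.Dense(40, activation="relu"),
    keras.layers.Dense(40, activation="relu"),
    keras.layers.Dense(6, activation="tanh"),   # tanh keeps outputs in (-1, 1)
])
model.compile(optimizer="adam",
              loss=keras.losses.MeanSquaredLogarithmicError())
# model.fit(x_train, y_train, batch_size=1000, epochs=100,
#           validation_data=(x_val, y_val))
```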
The DNN models for the full training set of 5,000,000 data were compared with those trained on a random sample of 1,000,000 data from the training set, giving similar results, with the latter greatly improving the training efficiency. In addition, for all models, the evolution of the loss function with the training epoch was almost identical. The loss functions estimated for the validation data and training data are shown as an example in Fig. 4. When the epoch \(>10\), the validation loss is consistent with the training loss, and when the epoch \(>100\), the validation loss is stable. Therefore, each DNN model was trained with 1,000,000 data. The validation set was taken as 10,000 sets from the remaining 4,000,000 sets to check the convergence. Once the epoch reached 100, the model was considered trained.
Due to differences in the initial input and training data, there is some uncertainty in the output results of the DNN. Therefore, the process was repeated 100 times to generate 100 independent DNN models, and the uncertainties in the training results were estimated from the fitted EOSs. In Fig. 5, 200 \(\phi\)-\(\ln p\) relations, from scheme 1 in panel (a) and scheme 2 in panel (b), are reconstructed through the training of the DNN. Each curve smoothly connects the seven output points by the GPR method, as shown in the inserts. Most of these curves have similar pressure-dependence behaviors. Their differences increase in the high-density region due to the observational discrepancies among the 14 neutron stars.
The \\(\\phi\\)-\\(\\ln p\\) relations must be converted to the \\(\\epsilon\\)-\\(p\\) function by integrating the Eq. (14) to obtain the EOS of the neutron star. In Fig. 6, the neutron star EOSs with the \\(68\\%\\) and \\(95\\%\\) confidence levels from the DNN with scheme 1 in panel (a) and scheme 2 in panel (b) are shown and compared to those joint constraints from the GW170817 and GW190814 events (Abbott et al., 2020) and the EOS from DDME1. In the inserts, the original \\(200\\) EOSs from the DNN training are plotted. To analyze the uncertainties of the EOSs, it was assumed that the pressures at each energy density from the machine learning model satisfy the Gaussian distribution. Therefore, the mean EOS was obtained as the dashed curve with the dark blue shadow representing the \\(68\\%\\) confidence level and the light blue shadow, the \\(95\\%\\), respectively. In the low-density region, our estimations are consistent with the joint constraints on the EOS from the GW170817 and GW190814 events. With density increasing, present EOSs are softer than the joint constraints, since the maximum masses of the \\(14\\) neutron stars are just around \\(2M_{\\odot}\\). Furthermore, the fitted EOS differs slightly from the EOS of DDME1 in scheme 2, despite this being regarded as the mean value of the training data. In the mediate region of energy density, the EOS generated by the DDME1 is harder than the fitted one, since the radius of the neutron star from DDME1
\\begin{table}
\\begin{tabular}{c|c|c} \\hline \\hline Layer & Number of neurons & Activation function \\\\ \\hline \\(1\\)(Input) & \\(56\\) & N/A \\\\ \\hline \\(2\\) & \\(60\\) & ReLU \\\\ \\hline \\(3\\) & \\(40\\) & ReLU \\\\ \\hline \\(4\\) & \\(40\\) & ReLU \\\\ \\hline \\(5\\)(Output) & \\(6\\) & tanh \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: The setup of the present DNN. The number of input and output neurons can be modified according to different network configurations. Here, the number of neurons at the output layer is 6, because \(\phi(\ln p=1)\) has been fixed to the value obtained from the DDME1 set.
Figure 4: The loss as a function of epoch for the training data and validation data.
Here, it must be emphasized that the inconsistencies between the EOSs fitted by the LIGO-Virgo-KAGRA (LVK) collaborations from the GW170817 and GW190814 events and those of the present work arise from the different theoretical frameworks and priors. In the LVK analysis, the EOSs in the priors were given by the spectral representation and are determined by the adiabatic index \(\Gamma\), as shown in Refs. (Read et al., 2009) and (Lindblom, 2010). The EOS parameter prior ranges in LVK were chosen from 34 neutron star matter EOSs, including PAL6, APR1-4, WFF1-3, MS1-2, and so on (Read et al., 2009). The maximum neutron star masses from these EOSs are in the range \(1.47\sim 2.78M_{\odot}\), and the radii at \(1.4M_{\odot}\) are \(9.36\sim 15.47\) km. Correspondingly, the prior EOS space in the present framework is taken from the 9 RMF parameter sets, which can only generate maximum neutron star masses of \(2.0\sim 2.4M_{\odot}\). Therefore, harder EOSs were fitted by LVK in the high-density region.
Once the EOS of the neutron star was determined, its \(M\)-\(R\) relation was obtained by solving the TOV equation. The \(M\)-\(R\) relations from our deduced EOSs are plotted in Fig. 7, with 68% (dark blue) and 95% (light blue) confidence levels. The corresponding \(M\)-\(R\) distributions of the 14 observed neutron stars are given as contour plots. The masses of the massive neutron stars PSR J0348+0432, PSR J0740+6620, and PSR J1614-2230; the secondary compact object of the GW190814 event; and the radii of PSR J0030+0451 and PSR J0740+6620 from NICER are given and compared. The fitted EOSs from schemes 1 and 2 nicely reproduce the neutron star observations and are able to generate massive neutron stars. Their radii are consistent with the results of the 14 observed neutron stars and with the simultaneous mass-radius measurements from NICER. Furthermore, the \(M\)-\(R\) relation from the DDME1 set, which was chosen as the mean value to generate the training data set in scheme 2, is shown as a solid line.
Figure 5: The 200 DNN models about \\(\\phi\\)-\\(\\ln p\\) from schemes 1 and 2.
Figure 6: The EOSs from the nonparametric machine learning method with schemes 1 and 2, compared to the joint constraints from the GW170817 and GW190814 events and to the EOS from the DDME1 set.
Its radius in the intermediate mass region is a little larger when compared with the 14 observed neutron stars. The output EOSs of the DNN from scheme 1 provide smaller radii, which coincide with the distribution of observables. This shows that the final results of the present framework are independent of the generating scheme for the training data.
In a binary neutron star merger, each neutron star is deformed by the external gravitational field of its companion. The magnitude of the deformation is quantified by the tidal deformability, which depends on the EOS of the neutron star and can be extracted from the gravitational wave emitted by the binary system. In the GW170817 event, the dimensionless tidal deformability at \(1.4M_{\odot}\) was inferred to be \(\Lambda_{1.4}=190^{+390}_{-120}\) (Abbott et al., 2018). In Fig. 8, the dimensionless tidal deformabilities as functions of neutron star mass from schemes 1 and 2, with 68% and 95% confidence levels, are plotted and compared to the constraint from the GW170817 event and the results from the DDME1 set. \(\Lambda\) decreases with the neutron star mass since it is proportional to \(R^{5}/M^{5}\) of the neutron star. Therefore, the \(\Lambda\) from DDME1 is relatively larger. The \(\Lambda_{1.4}\) from the present machine learning framework fully satisfies the measurement from gravitational wave detection.
Table 2 lists the properties of neutron stars fitted by the DNN with nonparametric training data: namely, the maximum masses of neutron stars, the corresponding radii, the radii at \(1.4M_{\odot}\) and \(2.08M_{\odot}\), and the dimensionless tidal deformability at \(1.4M_{\odot}\), with 68% and 95% confidence levels in schemes 1 and 2. These quantities are compared to the results from the DDME1 parameter set. Both schemes can generate massive neutron stars, with masses close to \(2.55M_{\odot}\) at the upper end of the 95% confidence interval. The radius of the \(1.4M_{\odot}\) neutron star is around 12.30 km, consistent with the value extracted from GW170817, \(R_{1.4}=11.9\pm 1.4\) km (Abbott et al., 2019). The radius of the \(2.08M_{\odot}\) neutron star is fitted to be around 12.0 km. The radius and mass of PSR J0740+6620 were analyzed as \(12.39^{+1.30}_{-0.98}\) km and \(2.072^{+0.067}_{-0.066}M_{\odot}\) from NICER by Riley et al. (Riley et al., 2021).
Figure 8: \\(\\Lambda\\)-\\(M\\) relation, generated by the fitted EOSs and compared to that from DDME1 and the values extracted from GW170817 events.
Figure 7: The mass-radius relation of neutron star from the nonparametric machine learning method, the observation distributions from 14 neutron stars, the masses of massive neutron stars, and the radii constraints from the NICER.
The results from the two schemes are similar, with differences of less than 2%.
The present fits of the neutron star properties are comparable with those generated by the model-informed priors in the work of Landry and Essick (Landry and Essick, 2019), while they are much more constrained than the ones from the model-agnostic prior. This is because our training data are prepared solely to reproduce the theoretical EOSs, whereas the possibility that the EOS might be quite different from current theoretical fits is considered in the model-agnostic prior.
Finally, the \\(M\\)-\\(R\\) relations from the two schemes to generate the training set, were compared and given in Fig. 9. Their behaviors are quite similar. The only difference is that the radii of the neutron stars and the uncertainties from scheme 2 are a little larger than those of scheme 1 because of the influence of the DDME1 set. This demonstrates that the fitted EOSs in the present framework is strongly independent of the choice of initial training data values using the GPR method.
## 4 Summary and perspectives
A nonparametric methodology has been proposed to infer the EOSs of neutron star matter from recent observations of neutron stars. A DNN was designed to map the mass-radius observables to the energy-pressure relation of dense matter. The GPR method was applied to construct the EOSs, making the method completely independent of any explicit functional form.
To generate the training data set, two schemes for the example data were adopted to provide the initial EOSs. In the first scheme, the mean values and variances of the EOSs from nine successful relativistic mean-field model parameter sets were considered; in the second, the mean value was chosen from the DDME1 set and the deviation was fixed at 0.3. A training data set of 5 million samples was constructed by including the uncertainties in the masses and radii of neutron stars. Furthermore, the constraints from the massive neutron stars and the simultaneous mass-radius measurements were also taken into account in the training set.
\\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & C. L. & \(M_{max}[M_{\odot}]\) & \(R_{max}\) [km] & \(R_{1.4}\) [km] & \(R_{2.08}\) [km] & \(\Lambda_{1.4}\) \\ \hline DDME1 & & 2.45 & 11.83 & 12.99 & 12.98 & 692 \\ \hline scheme 1 & 68\% & \(2.38^{+0.07}_{-0.07}\) & \(11.07^{+0.16}_{-0.17}\) & \(12.31^{+0.15}_{-0.16}\) & \(11.95^{+0.23}_{-0.23}\) & \(459^{+37}_{-46}\) \\ & 95\% & \(2.38^{+0.15}_{-0.13}\) & \(11.07^{+0.34}_{-0.32}\) & \(12.31^{+0.29}_{-0.31}\) & \(11.95^{+0.44}_{-0.47}\) & \(459^{+82}_{-81}\) \\ \hline scheme 2 & 68\% & \(2.41^{+0.08}_{-0.07}\) & \(11.15^{+0.21}_{-0.20}\) & \(12.30^{+0.17}_{-0.19}\) & \(12.03^{+0.27}_{-0.27}\) & \(448^{+55}_{-43}\) \\ & 95\% & \(2.41^{+0.15}_{-0.14}\) & \(11.15^{+0.41}_{-0.39}\) & \(12.30^{+0.35}_{-0.37}\) & \(12.03^{+0.53}_{-0.54}\) & \(448^{+110}_{-86}\) \\ \hline \hline \end{tabular}
\\end{table}
Table 2: The maximum masses of neutrons star, the corresponding radii at \\(1.4M_{\\odot}\\) and \\(2.08M_{\\odot}\\), and the dimensionless tidal deformability at \\(1.4M_{\\odot}\\) from the nonparametric EOS models with 68% and 95% confidence levels in scheme 1 and 2 and compared to those from DDME1.
Figure 9: The \\(M\\)-\\(R\\) relation comparisons between two schemes with 95% confidence interval and the constraints from the massive neutron star and NICER.
One hundred independent NN models were generated with different training data sets, producing one hundred EOSs of a neutron star. These were analyzed with the standard statistical method and EOSs with the 68% and 95% confidence levels were obtained. They were softer when compared with the join constraints from the GW170817 and GW190814 events. The mass-radius relations from our fitted EOSs fully satisfy the present various astronomical observations of neutron stars. The dimensionless tidal deformability at \\(1.4M_{\\odot}\\) was also consistent with the data extracted from the GW170817. Finally, concerning the creation of training data, the results from both schemes were almost identical. This shows that the present fitted EOSs are strongly independent of the initial set of training data set.
Our nonparametric NN framework can be naturally extended to other supervised learning fields to avoid the limitations of specific function forms. In the future, the original data on the gravitational wave from the binary neutron star will be included in the input layer to simulate the observations more realistically. The hadron-quark phase transition was excluded in the present training data set, and this too will be considered in future work.
## 5 Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 11775119 and 12175109), and the Natural Science Foundation of Tianjin (Grant No: 19JCYBJC30800). We are grateful to the referee for his constructive comments and suggestions.
## References
* Abadi et al. (2016) Abadi, M., Barham, P., Chen, J., et al. 2016, arXiv:1605.08695
* Abbott et al. (2017) Abbott, B. P., Abbott, R., Abbott, T., et al. 2017, Phys. Rev. Lett. 119, 161101
* Abbott et al. (2018) --. 2018, Phys. Rev. Lett. 121, 161101
* Abbott et al. (2019) --. 2019, Phys. Rev. X, 9, 011001
* Abbott et al. (2020) Abbott, R., Abbott, T., Abraham, S., et al. 2020, ApJ, 896, L44
* Akmal et al. (1998) Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. a. 1998, Phys. Rev. C, 58, 1804
* Alvarez-Castillo et al. (2016) Alvarez-Castillo, D., Ayriyan, A., Benic, S., et al. 2016, EPJA, 52, 1
* Antoniadis et al. (2013) Antoniadis, J., Freire, P. C., Wex, N., et al. 2013, Sci., 340, 1233232
* Arzoumanian et al. (2018) Arzoumanian, Z., Brazier, A., Burke-Spolaor, S., et al. 2018, ApJS, 235, 37
* Bao et al. (2014) Bao, S., Hu, J., Zhang, Z., & Shen, H. 2014, Phys. Rev. C, 90, 045802
* Bao & Shen (2015) Bao, S., & Shen, H. 2015, Phys. Rev. C, 91, 015807
* Baym et al. (2018) Baym, G., Hatsuda, T., Kojo, T., et al. 2018, RePP, 81, 056902
* Bender et al. (2003) Bender, M., Heenen, P.-H., & Reinhard, P.-G. 2003, RvModPh, 75, 121
* Chen et al. (2013) Chen, H., Burgio, G., Schulze, H.-J., & Yasutake, N. 2013, A&A, 551, A13
* Chollet et al. (2015) Chollet, F., et al. 2015, URL: https://keras. io/k, 7, T1
* Cromartie et al. (2020) Cromartie, H. T., Fonseca, E., Ransom, S. M., et al. 2020, NatAs, 4, 72
* Cybenko (1989) Cybenko, G. 1989, MCGS, 2, 303
* Demorest et al. (2010) Demorest, P. B., Pennucci, T., Ransom, S., Roberts, M., & Hessels, J. 2010, Nature, 467, 1081
* Dutra et al. (2012) Dutra, M., Lourenco, O., Martins, J. S. S., et al. 2012, Phys. Rev. C, 85, 035201
* Dutra et al. (2014) Dutra, M., Lourenco, O., Avancini, S., et al. 2014, Phys. Rev. C, 90, 055203
* Essick et al. (2020) Essick, R., Landry, P., & Holz, D. E. 2020a, Phys. Rev. D, 101, 063007
* Essick et al. (2020) Essick, R., Tews, I., Landry, P., Reddy, S., & Holz, D. E. 2020b, Phys. Rev. D, 102, 055803
* Farrell et al. (2022) Farrell, D., Baldi, P., Ott, J., et al. 2022, arXiv:2209.02817
* Fattoyev et al. (2020) Fattoyev, F., Horowitz, C., Piekarewicz, J., & Reed, B. 2020, Phys. Rev. C, 102, 065805
* Ferreira et al. (2022) Ferreira, M., Carvalho, V., & Providencia, C. 2022, arXiv:2209.09085
* Ferreira & Providencia (2021) Ferreira, M., & Providencia, C. 2021, JCAP, 2021, 011
* Fonseca et al. (2016) Fonseca, E., Pennucci, T. T., Ellis, J. A., et al. 2016, ApJ, 832, 167
* Fujimoto et al. (2018) Fujimoto, Y., Fukushima, K., & Murase, K. 2018, Phys. Rev. D, 98, 023019
* Fujimoto et al. (2020) --. 2020, Phys. Rev. D, 101, 054016
* Fujimoto et al. (2021) --. 2021, JHEP, 2021, 1
* Glendenning (2001) Glendenning, N. K. 2001, Phys. Rev. D, 342, 393
* Glorot & Bengio (2010) Glorot, X., & Bengio, Y. 2010, JMLR Workshop and Conference Proceedings (Proceedings of the thirteenth international conference on artificial intelligence and statistics), 249-256
* Han et al. (2021) Han, M.-Z., Jiang, J.-L., Tang, S.-P., & Fan, Y.-Z. 2021, ApJ, 919, 11
* Hornik (1991) Hornik, K. 1991, Neural Netw., 4, 251
* Hu et al. (2020) Hu, J., Bao, S., Zhang, Y., et al. 2020, PTEP, 2020, 043D01
* Huang et al. (2020) Huang, K., Hu, J., Zhang, Y., & Shen, H. 2020, ApJ, 904,* (2022) --. 2022, ApJ, 935, 88
* Ji et al. (2019) Ji, F., Hu, J., Bao, S., & Shen, H. 2019, PhRvC, 100, 045801
* Ju et al. (2021) Ju, M., Wu, X., Ji, F., Hu, J., & Shen, H. 2021, PhRvC, 103, 025809
* Kingma & Ba (2014) Kingma, D. P., & Ba, J. 2014, arXiv:1412.6980
* Lalazissis et al. (1997) Lalazissis, G., Konig, J., & Ring, P. 1997, PhRvC, 55, 540
* Lalazissis et al. (2005) Lalazissis, G., Niksic, T., Vretenar, D., & Ring, P. 2005, PhRvC, 71, 024312
* Landry & Essick (2019) Landry, P., & Essick, R. 2019, PhRvD, 99, 084049
* Lattimer & Prakash (2000) Lattimer, J. M., & Prakash, M. 2000, PhR, 333, 121
* Lattimer & Prakash (2007) --. 2007, PhR, 442, 109
* Li et al. (2008) Li, B.-A., Chen, L.-W., & Ko, C. M. 2008, PhR, 464, 113
* Li et al. (2019) Li, B.-A., Krastev, P. G., Wen, D.-H., & Zhang, N.-B. 2019, EPJA, 55, 1
* Lindblom (1992) Lindblom, L. 1992, ApJ, 398, 569
* Lindblom (2010) --. 2010, PhRvD, 82, 103011
* Long et al. (2004) Long, W., Meng, J., Van Giai, N., & Zhou, S.-G. 2004, PhRvC, 69, 034319
* Meng et al. (2006) Meng, J., Toki, H., Zhou, S.-G., et al. 2006, PPNuPh, 57, 470
* Miao et al. (2021) Miao, Z., Jiang, J.-L., Li, A., & Chen, L.-W. 2021, ApJL, 917, L22
* Miller et al. (2019) Miller, M. C., Lamb, F. K., Dittmann, A. J., et al. 2019, ApJL, 887, L24
* Miller et al. (2021) --. 2021, ApJL, 918, L28
* Murarka et al. (2022) Murarka, U., Banerjee, K., Malik, T., & Providencia, C. 2022, JCAP, 2022, 045
* Niksic et al. (2002) Niksic, T., Vretenar, D., Finelli, P., & Ring, P. 2002, PhRvC, 66, 024306
* Niksic et al. (2011) Niksic, T., Vretenar, D., & Ring, P. 2011, PPNuPh, 66, 519
* Oertel et al. (2017) Oertel, M., Hempel, M., Klahn, T., & Typel, S. 2017, RvModPh, 89, 015007
* Oppenheimer & Volkoff (1939) Oppenheimer, J. R., & Volkoff, G. M. 1939, PhRe, 55, 374
* Orsaria et al. (2014) Orsaria, M., Rodrigues, H., Weber, F., & Contrera, G. 2014, PhRvC, 89, 015806
* Ozel et al. (2010) Ozel, F., Baym, G., & Guver, T. 2010, PhRvD, 82, 101301
* Raithel et al. (2017) Raithel, C. A., Ozel, F., & Psaltis, D. 2017, ApJ, 844, 156
* Read et al. (2009) Read, J. S., Lackey, B. D., Owen, B. J., & Friedman, J. L. 2009, Phys. Rev. D, 79, 124032, doi: 10.1103/PhysRevD.79.124032
* Riley et al. (2019) Riley, T. E., Watts, A. L., Bogdanov, S., et al. 2019, ApJL, 887, L21
* Riley et al. (2021) Riley, T. E., Watts, A. L., Ray, P. S., et al. 2021, ApJL, 918, L27
* Ring (1996) Ring, P. 1996, PPNuPh, 37, 193
* Sammarruca (2010) Sammarruca, F. 2010, IJModPhE, 19, 1259
* Sammarruca et al. (2012) Sammarruca, F., Chen, B., Coraggio, L., Itaco, N., & Machleidt, R. 2012, PhRvC, 86, 054317
* Steiner et al. (2010) Steiner, A. W., Lattimer, J. M., & Brown, E. F. 2010, ApJ, 722, 33
* Stone & Reinhard (2007) Stone, J. R., & Reinhard, P.-G. 2007, PPNuPh, 58, 587
* Sun (2016) Sun, B. 2016, Sci. Sin. Phys., Mech. Astron., 46, 012018
* Taninah et al. (2020) Taninah, A., Agbemava, S., Afanasjev, A., & Ring, P. 2020, PhLB, 800, 135065
* Tolman (1939) Tolman, R. C. 1939, PhRe, 55, 364
* Typel et al. (2010) Typel, S., Ropke, G., Klahn, T., Blaschke, D., & Wolter, H. 2010, PhRvC, 81, 015803
* Van Dalen et al. (2004) Van Dalen, E., Fuchs, C., & Faessler, A. 2004, NuPhA, 744, 227
* Wang et al. (2020) Wang, C., Hu, J., Zhang, Y., & Shen, H. 2020, ApJ, 897, 96
* Weber (2005) Weber, F. 2005, PPNuPh, 54, 193
* Wei et al. (2020) Wei, B., Zhao, Q., Wang, Z.-H., et al. 2020, CPhC, 44, 074107
* Wei et al. (2019) Wei, J. B., Figura, A., Burgio, G. F., Chen, H., & Schulze, H. 2019, JPhG, 46, 034001
* Williams & Rasmussen (2006) Williams, C. K., & Rasmussen, C. E. 2006 (MIT press Cambridge, MA)
* Wu & Shen (2017) Wu, X., & Shen, H. 2017, PhRvC, 96, 025802
* Xu et al. (2010) Xu, J., Chen, L.-W., Ko, C. M., & Li, B.-A. 2010, PhRvC, 81, 055803
* Yang & Shen (2008) Yang, F., & Shen, H. 2008, PhRvC, 77, 025801 | It is of great interest to understand the equation of state (EOS) of the neutron star (NS), whose core includes highly dense matter. However, there are large uncertainties in the theoretical predictions for the EOS of NS. It is useful to develop a new framework, which is flexible enough to consider the systematic error in theoretical predictions and to use them as a best guess at the same time. We employ a deep neural network to perform a non-parametric fit of the EOS of NS using currently available data. In this framework, the Gaussian process is applied to represent the EOSs and the training set data required to close physical solutions. Our model is constructed under the assumption that the true EOS of NS is a perturbation of the relativistic mean-field model prediction. We fit the EOSs of NS using two different example datasets, which can satisfy the latest constraints from the massive neutron stars, NICER, and the gravitational wave of the binary neutron stars. Given our assumptions, we find that a maximum neutron star mass is \\(2.38^{+0.15}_{-0.13}M_{\\odot}\\) or \\(2.41^{+0.15}_{-0.14}\\) at 95% confidence level from two different example datasets. It implies that the \\(1.4M_{\\odot}\\) radius is \\(12.31^{+0.29}_{-0.31}\\) km or \\(12.30^{+0.35}_{-0.37}\\) km. These results are consistent with results from previous studies using similar priors. It has demonstrated the recovery of the EOS of NS using a nonparametric model.
Neutron Star, Deep neural network, Gaussian process regression | Summarize the following text. | 355 |
arxiv-format/2402_17171v1.md | # LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation
in Free Environment
Yiming Ren\\({}^{1}\\), Xiao Han\\({}^{1}\\), Chengfeng Zhao\\({}^{1}\\), Jingya Wang\\({}^{1}\\), Lan Xu\\({}^{1}\\), Jingyi Yu\\({}^{1}\\), Yuexin Ma\\({}^{1,\\dagger}\\)
\\({}^{1}\\) ShanghaiTech University
{renym2022,mayuexin}@shanghaitech.edu.cn
## 1 Introduction
Human pose and shape estimation (HPS) is aimed at reconstructing 3D digital representations of human bodies, such as SMPL [35], using data captured by sensors. It is significant for two primary applications: one in motion capture for the entertainment industry, including film, augmented reality, virtual reality, mixed reality, etc.; and the other in behavior understanding for the robotics industry, covering domains like social robotics, assistive robotics, autonomous driving, human-robot interaction, and beyond.
While optical-based methods [18, 19, 29, 31, 44] have seen significant advancements in recent years, their efficacy is limited due to the camera sensor's inherent sensitivity to variations in lighting conditions, rendering them impractical for use in uncontrolled environments. In contrast, inertial methods [39, 58, 63, 64] utilize body-mounted inertial measurement units (IMUs) to derive 3D poses, which is independent of lighting and occlusions. However, these methods necessitate the use of wearable devices, struggle with drift issues over time, and fail to capture human body shapes and precise global translations.
LiDAR is a commonly used perception sensor for robots and autonomous vehicles [11, 69, 70] due to its accurate depth sensing without light interference. Recent advances in HPS [45] have turned to LiDAR for capturing high-quality SMPL models in the wild. LiDARCap [33] proposes a GRU-based approach for estimating only human pose parameters from LiDAR point clouds. MOVIN [23] uses a CVAE framework to link point clouds with human poses for both human pose and global translation estimation. Nevertheless, these approaches lack the capability to estimate body shapes and, further, disregard the challenging characteristics of LiDAR point clouds, leading to unstable performance in real-world scenarios. First, the distribution and pattern of LiDAR point clouds vary across capture distances and devices. Second, the view-dependent nature of LiDAR results in incomplete point clouds of the human body, affected by self-occlusion or external obstruction. Third, real-captured LiDAR point clouds invariably contain noise in complex scenarios, caused by reflection interference or carry-on objects. These properties all bring challenges for accurate and robust HPS in extensive, uncontrolled environments.
Considering the above intractable properties of LiDAR point clouds, we introduce LiveHPS, a novel single-LiDAR-based approach for capturing high-quality human pose, shape, and global translation in large-scale free environments, as shown in Fig. 1. The deployment-friendly single-LiDAR setting is unrestricted in acquisition sites, light conditions, and wearable devices, which can benefit many practical applications. To improve robustness against point distribution variations, we design an **Adaptive Vertex-guided Distillation** module that aligns diverse point distributions with the regular SMPL mesh vertex distribution in a high-level feature space via a prior consistency loss. Moreover, to reduce the influence of occlusion and noise, we propose a **Consecutive Pose Optimizer** that explores the geometric and dynamic information existing in the temporal and spatial spaces for pose refinement by attention-based feature enhancement. In addition, a **Skeleton-aware Translation Solver** is presented to eliminate the effect of incomplete and noised point clouds on the accurate estimation of the human global translation. In particular, we introduce the scene-level unidirectional Chamfer distance (SUCD), from the input point cloud to the estimated human mesh vertices in the global coordinate system, as a new evaluation measurement for LiDAR-based HPS, which reflects the fine-grained geometry error and translation error between the prediction and the ground truth.
It is worth noting that we also introduce **FreeMotion**, a novel large-scale motion dataset captured in diverse real scenarios with multiple persons, which contains multi-modal data (LiDAR point clouds, RGB images, and IMUs), multi-view data (front, back, and side), and comprehensive SMPL parameters (pose, shape, and global translation). Through extensive experiments and ablation studies on FreeMotion and other public datasets, our method outperforms others by a large margin.
Our main contributions can be summarized as follows:
* We present a novel single-LiDAR-based method for 3D HPS in large-scale free environments, which achieves state-of-the-art performance.
* We propose an effective vertex-guided adaptive distillation module, consecutive pose optimizer, and skeleton-aware translation solver to deal with the distribution-varied, incomplete, and noised LiDAR point clouds.
* We present a new motion dataset captured in diverse real scenarios with rich modalities and annotations, which can facilitate further research of in-the-wild HPS.
## 2 Related Work
### Optical-based Methods
Optical motion capture technology has advanced from initial marker-based systems [40, 52, 53], which rely on camera-tracked markers to reconstruct a 3D mesh, to marker-less systems [2, 7, 14, 24, 36, 41, 46, 47, 50] that track human motion using cameras through feature detection and matching algorithms. Although these systems achieve high accuracy, they are often expensive and require elaborate setup and calibration. To mitigate these challenges, monocular mocap methods using optimization [6, 20, 30, 32] and regression [25, 26, 28, 67], along with template-based, probabilistic [17, 18, 19, 59, 60], and semantic-modeling techniques [29], have emerged to address the limitations of monocular systems. Nonetheless, these approaches still suffer from inherent light sensitivity and depth ambiguity. Some strategies [3, 16, 48, 57, 66] incorporate depth cameras to resolve the depth ambiguity, yet these cameras have a limited sensing range and are ineffective in outdoor scenes.
### Inertial-based Methods
Unlike optical systems, inertial motion capture systems [58] are not affected by light conditions and occlusions. They generally require numerous IMUs attached to form-fitting suits, a setup that can be heavy and inconvenient, motivating interest in sparser configurations, such as the six-IMU setting [21, 54, 63, 64] and the four-IMU setting [45]. However, these methods suffer from drift errors over time, cannot provide precise shape and global translation, and require wearable devices, which is not practical for daily-life scenarios.
### LiDAR-based Methods
With precise long-range depth-sensing ability, LiDAR has emerged as a key sensor in robotics and autonomous vehicles [10, 42, 65, 69, 70]. LiDAR provides precise depth information and global translation in expansive environments, remaining uninfluenced by lighting conditions, which enables robust 3D HPS. Recently, PointHPS [9] provides a cascaded network architecture for pose and shape estimation from point clouds. However, it is designed for dense point clouds rather than sparse LiDAR point clouds. LiDARCap [33] employs a graph-based convolutional network to predict daily human poses in LiDAR-captured large-scale scenes. MOVIN [23] presents a generative method for estimating both pose and global translation. However, these methods cannot predict full SMPL parameters (pose, shape, and global translation) and are fragile in complex real scenarios with occlusion and noise.
### 3D Human Motion Datasets
Data-driven 3D HPS methods have gained traction in recent years, benefiting from extensive labeled datasets. Indoor marker-based datasets like Human3.6M [22] and HumanEva [49] use multi-view camera systems to record daily motions. AMASS [37] unifies these datasets, providing a standardized benchmark for network training. Marker-less datasets such as MPI-INF-3DHP [38] and AIST++ [34] capture more complex poses without the constraint of wearable devices, but all of the above datasets are still confined to indoor settings. Outdoor motion capture datasets like PedX [27] and 3DPW [55] capture motions in the wild but lack accurate depth information, hindering scene-level human motion research. HuMMan [8] constitutes a mega-scale database that offers high-resolution scans of subjects, and MOVIN [23] provides motion data from a multi-camera capture system with point clouds, but both datasets are limited to short-range scenes. [12, 13, 62] are proposed for human motion capture in large-scale scenes using environment-involved optimization, but they are limited to a single-person setting. Recently, LiDARHuman26M [33] and LIP [45] provide LiDAR-captured motion datasets in large scenes, but both exclusively provide the pose parameters of SMPL in single-person scenarios. In contrast, we propose a large-scale LiDAR-based motion dataset with full SMPL parameter annotations. It comprises a variety of challenging scenarios with occlusions and interactions among multiple persons and objects, which has great practical significance.
## 3 Methodology
We propose a single-LiDAR-based approach named LiveHPS for scene-level 3D human pose and shape estimation in large-scale free environments. The overview of our pipeline is shown in Fig. 2. We take consecutive 3D single-person point clouds as input and aim to acquire consistently accurate local poses, human shapes, and global translations without any limitation on acquisition sites, light conditions, or wearable devices. There are three main procedures in our network: a point-based body tracker (Sec. 3.2), a consecutive pose optimizer (Sec. 3.3), and an attention-based multi-head SMPL solver (Sec. 3.4). First, we utilize the point-based body tracker to extract point-wise features and predict the human body joint positions. Second, we propose an attention-based temporal-spatial feature enhancement mechanism to acquire refined joint positions using joint-wise geometric and relationship features. Finally, we design an attention-based multi-head solver to regress the human SMPL parameters, including the local pose, shape, and global translation, from the refined body skeleton.
### Preliminaries
LiveHPS takes a consecutive sequence of single-person point clouds with \(T\) frames as input. As raw point clouds have varying numbers of points at different times \(t\), we apply a normalization process by resampling each frame to a fixed \(N_{fps}=256\) points with the farthest point sampling (FPS) algorithm and subtracting the average location \(\mathbf{loc}(t)\in\mathbb{R}^{3}\) of the raw data. \(\mathbf{P}(t)\in\mathbb{R}^{3N_{fps}}\) denotes the pre-processed input at time \(t\).
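For concreteness, the following is a minimal sketch of this preprocessing step in PyTorch (the framework used in Sec. 5.1); the function name `preprocess_frame` and the rule for padding frames with fewer than 256 points are illustrative assumptions, not part of the original pipeline.

```python
import torch

def preprocess_frame(points: torch.Tensor, n_fps: int = 256):
    """Resample one raw human point cloud (M, 3) to N_fps points with
    farthest point sampling and center it at its mean location loc(t)."""
    m = points.shape[0]
    if m < n_fps:                                   # assumption: pad sparse frames
        points = points[torch.randint(m, (n_fps,))]
        m = n_fps
    selected = torch.zeros(n_fps, dtype=torch.long)
    dist = torch.full((m,), float("inf"))
    selected[0] = torch.randint(m, (1,)).item()     # random seed point
    for i in range(1, n_fps):
        d = torch.norm(points - points[selected[i - 1]], dim=1)
        dist = torch.minimum(dist, d)               # distance to the selected set
        selected[i] = torch.argmax(dist).item()     # greedily take the farthest point
    sampled = points[selected]                      # (n_fps, 3)
    loc = sampled.mean(dim=0)                       # average location loc(t)
    return sampled - loc, loc                       # centered P(t) and loc(t)
```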
We define \\(N_{J}\\) as the number of body joints and \\(N_{V}\\) as the number of body vertices on SMPL mesh; \\(\\mathbf{\\hat{J}}(t),~{}\\mathbf{\\hat{J}}^{GT}(t)\\in\\mathbb{R}^{3N_{J}}\\) as predicted and ground-truth root-relative joint positions at time \\(t\\), repectively; \\(\\mathbf{\\hat{V}}(t),\\mathbf{\\hat{V}}^{GT}(t)\\in\\mathbb{R}^{3N_{V}}\\) as predicted and ground-truth vertex positions. Our network prediction consists of \\(\\hat{\\theta}(t)\\in\\mathbb{R}^{6N_{J}},\\hat{\\beta}\\in\\mathbb{R}^{10}\\) and \\(Tr(t)\\in\\mathbb{R}^{3}\\), the pose, shape, and global translation parameters of SMPL. \\(\\theta^{GT}(t),\\beta^{GT}\\) and \\(Tr^{GT}(t)\\) are corresponding ground truth. We use 6D-rotation-based pose representation.
### Point-based Body Tracker
For the input pre-processed consecutive point clouds, we extract point-wise features following the PointNet-GRU structure proposed by LIP [45] and regress the human body joint positions with an MLP decoder. Considering that the irregular distributions of LiDAR point clouds vary across capture distances and devices, and are also affected by occlusion and noise (Fig. 1), we design a **Vertex-guided Adaptive Distillation (VAD)** mechanism to unify the point distribution, which facilitates the training of the network and improves its robustness. Because the vertices of the SMPL mesh have a relatively regular representation, we align diverse point distributions with the mesh vertex distribution in a high-level feature space by distillation, as Fig. 2 shows.
Firstly, we use the global translation \(Tr^{GT}(t)\) to align the LiDAR point cloud \(\mathbf{P}(t)\) with the ground-truth mesh vertices \(\tilde{\mathbf{V}}^{GT}(t)\) and utilize the k-Nearest-Neighbours (kNN) algorithm to sample the corresponding vertices, defined as \(\tilde{\mathbf{V}}_{pc}^{GT}(t)\). Then, we use \(\tilde{\mathbf{V}}_{pc}^{GT}(t)\) to pre-train a vertex body tracker to regress the joint positions \(\hat{\mathbf{J}}_{v}(t)\). We use the mean squared error (MSE) loss \(\mathcal{L}_{mse}(\hat{\mathbf{J}}_{v})\) for supervision:
\\[\\mathcal{L}_{mse}(\\mathbf{\\hat{J}}_{v})=\\sum_{t}\\parallel\\mathbf{\\hat{J}}_{v}( t)-\\mathbf{\\tilde{J}}^{GT}(t)\\parallel_{2}^{2}. \\tag{1}\\]
Subsequently, we input sequential point clouds \(\mathbf{P}(t)\) and their corresponding vertex data \(\tilde{\mathbf{V}}_{pc}^{GT}(t)\) into two independent body trackers to obtain point-wise features \(F_{p}(t)\in\mathbb{R}^{k}\) and \(F_{v}(t)\in\mathbb{R}^{k}\), respectively, where \(k=1024\). Notably, the two body tracker networks do not share weights, and we freeze the pre-trained parameters of the vertex body tracker during training. To align real point distributions with the regular vertex distribution, we employ a pose-prior consistency loss \(\mathcal{L}_{pc}\) that minimizes the high-level feature distance between the LiDAR point clouds and the guiding vertices. This distillation procedure makes our feature extractor insensitive to vastly different data distributions. Finally, we leverage an MLP decoder to predict the joint positions \(\hat{\mathbf{J}}_{p}\). A combined loss \(\mathcal{L}_{prior}\), consisting of \(\mathcal{L}_{mse}(\hat{\mathbf{J}}_{p})\) and \(\mathcal{L}_{pc}\), is utilized to train the network, which is formulated as below:
\[\mathcal{L}_{mse}(\hat{\mathbf{J}}_{p})=\sum_{t}\parallel\hat{\mathbf{J}}_{p}(t)-\tilde{\mathbf{J}}^{GT}(t)\parallel_{2}^{2}, \tag{2}\]
\\[\\mathcal{L}_{pc}=\\sum_{t}F_{v}(t)\\log(\\frac{F_{v}(t)}{F_{p}(t)}), \\tag{3}\\]
\[\mathcal{L}_{prior}=\lambda_{1}\mathcal{L}_{mse}(\hat{\mathbf{J}}_{p})+\lambda_{2}\mathcal{L}_{pc}, \tag{4}\]
where \\(\\lambda_{1}\\) and \\(\\lambda_{2}\\) are hyper-parameters, and we set \\(\\lambda_{1}=1\\) and \\(\\lambda_{2}=10^{3}\\) in our experiments. During inference, the VAD process is not required.
### Consecutive Pose Optimizer
We have already obtained the joint positions of human poses from the point-based body tracker. Considering that human motions are coherent over time and that different joints of the human body usually execute an action under relative dynamic constraints, we propose a **Consecutive Pose Optimizer (CPO)** (Fig. 3) to refine the body skeleton using consecutive joint-wise geometry features and relationship features in the temporal and spatial spaces, which further reduces the effect of incomplete and noised point clouds. Specifically, we utilize the concatenation of the point-wise feature \(F_{p}(t)\in\mathbb{R}^{k}\) and the predicted joint positions \(\hat{\mathbf{J}}_{p}(t)\) as the initial joint-wise feature input. To capture the motion consistency of the sequence, we use linear transformations to generate \(Q(t)\), \(K(t)\), and \(V(t)\) in each frame and conduct temporal interaction to learn a motion-consistent feature \(F_{t}(t)\in\mathbb{R}^{k_{2}}\) for each joint, where \(k_{2}=256\). This temporal interaction process guides the estimation of more reasonable continuous human motions, especially in occluded situations. Then, we use the dynamic and geometric constraints among joints to further enhance the joint features via spatial feature interaction. The input \(F_{j}(n_{j}\in N_{J})\in\mathbb{R}^{k+k_{2}+3}\) consists of the point-wise feature \(F_{p}(t)\in\mathbb{R}^{k}\), the temporal interaction feature \(F_{t}(t)\in\mathbb{R}^{k_{2}}\), and each joint position \(\hat{\mathbf{J}}_{p}(n_{j}\in N_{J})\in\hat{\mathbf{J}}_{p}(t)\). We generate \(Q(n_{j})\), \(K(n_{j})\), and \(V(n_{j})\) with linear mappings for each joint and conduct spatial joint-to-joint interaction to obtain the enhanced feature \(F_{ts}(t)\in\mathbb{R}^{k_{3}}\), where \(k_{3}=512\). The feature interaction matrix can be formulated as:
\\[\\mathcal{F}_{interaction}=\\mathrm{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_{k}}} \\right)V. \\tag{5}\\]
Finally, we regress the refined joint positions \\(\\hat{\\mathbf{J}}_{refine}(t)\\) from the enhanced feature and the loss function is
\\[\\mathcal{L}_{mse}(\\hat{\\mathbf{J}}_{refine})=\\sum_{t}\\parallel\\hat{\\mathbf{J} }_{refine}(t)-\\widetilde{\\mathbf{J}}^{GT}(t)\\parallel_{2}^{2}. \\tag{6}\\]
### Multi-head SMPL Solver
In the last stage, we propose an attention-based multi-head solver to regress the SMPL [35] parameters \(\hat{\theta}(t)\), \(\hat{\beta}\), and \(\hat{Tr}(t)\) from the refined joint positions and the input point cloud.
Figure 3: The detailed feature interaction mechanism in CPO. The same network architecture is applied in both the consecutive pose optimizer and the multi-head solver (pose and shape), except for the decoder. Here we take the consecutive pose optimizer as the reference.
Figure 2: The pipeline of LiveHPS. With sequential LiDAR point clouds as input, LiveHPS consists of three critical modules to obtain human SMPL parameters, including a point-based body tracker to distill the pose-prior information, a consecutive pose optimizer to refine the pose via utilizing joint-wise features, and a multi-head SMPL solver to regress parameters of human models.
Because the pose and the shape reflect the local geometry of the human body, they can be determined from the root-relative joint features obtained in the last stage. We utilize the same network structure as CPO for the pose solver and the shape solver to obtain \(\hat{\theta}(t)\) and \(\hat{\beta}\). However, the global translation cannot be obtained from the root-relative local geometry features alone. Previous methods [33, 45] usually take the average position of the body point cloud as the global location or directly regress the global translation. However, due to the interference of occlusion and noise, their predicted results are unstable across consecutive frames. In contrast, we simplify the task of predicting the global translation to predicting the bias between the average position of the point cloud and the real 3D location. Thus, we propose a **Skeleton-aware Translation Solver** underpinned by a cross-attention architecture, which integrates skeletal and original point cloud data for more accurate translation estimation. We employ the point cloud \(\mathbf{P}(t)\) and the refined root-relative joint positions \(\hat{\mathbf{J}}_{refine}(t)\) as input, utilizing cross-attention to match the geometric information of the joints with the point cloud. We generate \(Q(t)\) from the refined joint positions and \(K(t)\), \(V(t)\) from the point cloud. The feature interaction matrix is formulated as Eq. 5. The decoder outputs the bias, which is added to the average location \(\mathbf{loc}(t)\) of the raw point cloud to obtain the global translation \(\hat{Tr}(t)\). Finally, we use the SMPL model to generate the human skeleton joint positions and mesh vertex positions as below.
\\[\\hat{\\mathbf{J}}_{smpl}(t),\\hat{\\mathbf{V}}_{smpl}(t)=\\mathrm{SMPL}(\\hat{ \\theta}(t),\\hat{\\beta},\\hat{Tr}(t)). \\tag{7}\\]
The loss function for the multi-head solver is formulated as:
\[\mathcal{L}_{solver}=\lambda_{3}\mathcal{L}_{mse}(\hat{\mathbf{J}}_{smpl})+\lambda_{4}\mathcal{L}_{mse}(\hat{\mathbf{V}}_{smpl})+\lambda_{5}\mathcal{L}_{mse}(\hat{\theta}(t))+\lambda_{6}\mathcal{L}_{mse}(\hat{\beta})+\lambda_{7}\mathcal{L}_{mse}(\hat{Tr}(t))+\lambda_{8}\mathcal{L}_{SUCD}, \tag{8}\]
where \\(\\lambda_{3}\\), \\(\\lambda_{4}\\), \\(\\lambda_{5}\\), \\(\\lambda_{6}\\), \\(\\lambda_{7}\\) are hyper-parameters with \\(\\lambda_{3}=\\frac{100}{N_{j}}\\), \\(\\lambda_{4}=\\frac{100}{N_{v}}\\), \\(\\lambda_{5}=\\frac{1}{5}\\), \\(\\lambda_{6}=1\\), \\(\\lambda_{7}=1\\) and \\(\\lambda_{8}=10^{3}\\).
Because the raw point cloud contains the real pose, shape, and global translation information, it can be taken as an extra supervision signal, which is ignored by previous methods. In particular, we introduce a novel scene-level unidirectional Chamfer distance (SUCD) loss by calculating the unidirectional Chamfer distance from the raw point cloud to the predicted mesh vertices. It provides a comprehensive evaluation of all predicted SMPL parameters, denoted as
\[\mathcal{L}_{SUCD}=\sum_{t}\frac{1}{|\mathbf{P}(t)|}\sum_{x\in\mathbf{P}(t)}\min_{y\in\hat{\mathbf{V}}_{smpl}(t)}\parallel x-y\parallel_{2}^{2}. \tag{9}\]
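Eq. (9) can be implemented directly on the scene-level (global-frame) point clouds and predicted vertices; a sketch follows, where averaging over frames instead of summing is our reduction assumption.

```python
import torch

def sucd(points: torch.Tensor, verts: torch.Tensor) -> torch.Tensor:
    """Scene-level unidirectional Chamfer distance (Eq. 9): for every observed
    point, the squared distance to its nearest predicted SMPL vertex, with both
    sets expressed in the global coordinate system.
    points: (T, N, 3) raw point clouds; verts: (T, 6890, 3) predicted vertices."""
    d = torch.cdist(points, verts)              # (T, N, 6890) pairwise distances
    return (d.min(dim=-1).values ** 2).mean()   # mean over points and frames
```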
## 4 FreeMotion Dataset
Previous LiDAR-related human motion datasets typically involve a single performer carrying out common actions with incomplete SMPL parameters, which limits their ability to evaluate the generalization capability and robustness of HPS methods applied to complex daily-life scenarios. To facilitate the research of high-quality human motion capture in large-scale free environments, we provide FreeMotion, the first motion dataset with multi-view, multi-modal visual data and full SMPL annotations, captured in diverse real-life scenarios with natural occlusions and noise. It contains 578,775 frames of data and annotations, with 1 to 7 performers in each scene.
### Data Acquisition
Considering that an indoor multi-camera panoptic studio can provide high-precision full SMPL parameter annotations (pose, shape, and global translation) while outdoor scenes are large-scale and suitable for real applications, we build two capture systems, as shown in Fig. 4. For the first one, we set up a 76-Z-CAM system to obtain the SMPL ground truth and three OUSTER-1 LiDARs at varied distances to acquire LiDAR data. Notably, we arrange other performers outside the studio to simulate occlusions in real-life scenarios. For the second one, we build three sets of LiDAR-camera capture devices, each comprising a 128-beam OUSTER-1 LiDAR and a monocular Canon camera, at different locations to capture multi-view and multi-range visual data, which provide the global translation ground truth. The performer is equipped with a full set of Noitom equipment (17 IMUs) to obtain the pose ground truth. In particular, the shape parameters of the outdoor performers are captured in the panoptic studio in advance. The capture frequencies of the LiDAR, Z-CAM, Canon camera, and IMU are set at 10 Hz, 25 Hz, 60 Hz, and 60 Hz, respectively. All the data are calibrated and synchronized.
Figure 4: The capture systems of FreeMotion. In (a), we use a dense-camera capture system with LiDARs for accurate pose and shape capture. In (b), we set LiDARs and cameras at three views to capture human motions in large-scale multi-person scenes.
### Dataset Characteristics
The detailed comparison with existing public datasets is presented in Tab. 1. FreeMotion has several distinctive characteristics and we summarize three main highlights below.
**Free Capture Scenes.** Diverging from previous datasets focused on single-person HPS, FreeMotion is captured in real unconstrained environments, involving diverse capture scales, multi-person activities (sports, dancing shows, fitness exercises, etc.), and human-object interaction scenarios. The large-scale human trajectories, occlusions, and noise in our data all bring challenges for precise human global pose and shape estimation, thereby pushing the envelope of HPS technology for real-life applications.
**Diverse Data Modalities and Views.** FreeMotion offers multi-view and multi-modal capture data, including LiDAR point clouds, RGB images, and IMU measurements, providing rich resources for the exploration of single-modal, multi-modal, single-view, and multi-view HPS solutions.
**Complete Scene-level SMPL Annotations.** Existing LiDAR-based motion datasets usually provide pose annotations using dense IMUs and lack annotations for accurate human shape and global translation. FreeMotion remedies this by providing full SMPL parameter annotations (pose, shape, and translation), as shown in Fig. 4. We capture a variety of natural human motions across long distances, involving 20 individuals with varying body types engaging in 40 types of actions. Details are in the appendix. Accurate and complete annotations in rich scenarios enable comprehensive evaluation of algorithms and benefit many downstream applications.
### Data Extension
To enrich the dataset with various poses and shapes for pre-training, we follow LIP [45] to create synthetic point clouds from SURREAL [51], AIST++ [34], and portions of AMASS [37], including ACCAD and BMLMovic. We simulate occlusions by randomly cropping body parts from the point clouds, as sketched below. The synthetic data consist of \(2,378\)k frames and \(3,118\) body shapes. Note that the statistics in Tab. 1 do not include the synthetic data. _The detailed process is shown in the appendix._
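A minimal sketch of one plausible cropping rule, assuming occlusions are simulated by removing all points within a random radius of a random seed point; the paper only states that body parts are randomly cropped, so the seed-and-radius rule and `max_ratio` are illustrative.

```python
import torch

def simulate_occlusion(points: torch.Tensor, max_ratio: float = 0.5) -> torch.Tensor:
    """Drop points near a random seed to mimic self-occlusion or external
    obstruction on a synthetic human point cloud of shape (N, 3)."""
    n = points.shape[0]
    seed = points[torch.randint(n, (1,)).item()]             # occlusion center
    d = torch.norm(points - seed, dim=1)                     # distance to the seed
    radius = torch.quantile(d, torch.rand(1).item() * max_ratio)
    return points[d > radius]                                # keep visible points
```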
### Privacy Preservation
We adhere to privacy guidelines in our research. The LiDAR point clouds naturally protect privacy by omitting texture or facial details. Additionally, we mask faces in RGB images to uphold ethical standards in our dataset.
## 5 Experiments
In this section, we compare our method with current SOTA methods on FreeMotion and various public datasets, qualitatively and quantitatively, demonstrating our method's superiority and generalization capability. We also present detailed ablation studies of our network's modules to validate their effectiveness. Our evaluation metrics include 1) J/V Err(P/PS/PST) \(\downarrow\): mean per joint/vertex position error in millimeters, where joints/vertices are generated from the SMPL model using the Pose/Pose-Shape/Pose-Shape-Translation parameters; 2) Ang Err \(\downarrow\): mean per-joint global rotation error in degrees, evaluating the local pose; 3) SUCD \(\downarrow\): scene-level unidirectional Chamfer distance in millimeters.
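For reference, a minimal sketch of the J/V Err metric, assuming positions are in meters and converted to millimeters (SUCD follows Eq. (9) above).

```python
import torch

def jv_err(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Mean per joint/vertex position error in millimeters.
    pred, gt: (T, N, 3) joints or vertices regenerated from SMPL with the
    Pose / Pose-Shape / Pose-Shape-Translation parameters."""
    return torch.norm(pred - gt, dim=-1).mean() * 1000.0
```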
### Implementation Details
We build our network on PyTorch 1.8.1 and CUDA 11.1, trained for 200 epochs with a batch size of 32 and a sequence length of 32, using an initial learning rate of \(10^{-3}\) and the AdamW optimizer with a weight decay of \(10^{-4}\). Training runs on a server equipped with an Intel(R) Xeon(R) E5-2678 CPU and 8 NVIDIA RTX3090 GPUs. For training, we use clustered and manually annotated human point cloud sequences from the raw data, while for testing, we employ sequential point clouds of human instances processed by a pre-trained segmentation model [56]. As for the dataset splitting, we use the training sets of FreeMotion and Sloper4D, together with a synthetic dataset comprising the training sets of SURREAL, AIST++, ACCAD, and BMLMovic.
\begin{table}
\begin{tabular}{c|c c|c c c|c c c c|c c c} \hline \hline
\multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Statistics} & \multicolumn{3}{c|}{Scenarios} & \multicolumn{4}{c|}{Data} & \multicolumn{3}{c}{SMPL annotation} \\ \cline{2-13}
 & Frame & Capture distance (m) & Multi-person & In the wild & HOI & Point cloud & IMU & Image & Multi-view & Pose & Shape & Translation \\ \hline
AMASS [37] & 16M & 3.42 & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ \\
HuMMan [8] & 60M & 3.00 & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ \\
SURREAL [51] & 6M & N/A & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ \\
AIST++ [34] & 10M & 4.23 & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ \\
3DPW [55] & 51k & N/A & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\
LiDARHuman26M [33] & 184k & 28.05 & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ \\
LIPD [45] & 62k & 30.04 & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ \\
MOVIN [23] & 161k & N/A & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ \\
Sloper4D [13] & 100k & N/A & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\
CIMI4D [62] & 180k & 16.61 & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\ \hline
**FreeMotion** & 578k & 39.85 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison with public human motion datasets from four different aspects. “Capture distance” means the maximum distance between the performer and the capture device, approximately calculated from the published data. “Multi-person” indicates that the capture scenes involve multiple persons. “HOI” denotes human-object interaction scenarios.
### Comparison
We evaluate LiveHPS against other state-of-the-art (SOTA) LiDAR-related methods [23, 33, 45] on FreeMotion and several public datasets [4, 13, 33, 45, 51, 61, 62] to demonstrate its superiority in capturing human global poses and shapes in large-scale free environments, even with severe occlusions and noise. Our LiveHPS achieves SOTA performance, as shown in Tab. 2. The J/V Err(P) and Ang Err metrics relate only to pose parameter estimation; here we surpass LiDARCap [33], LIP [45], and MOVIN [23] by an obvious margin. For a fair comparison, we only use the LiDAR branch of LIP. As the pioneer in fully estimating SMPL parameters (pose, shape, and global translation) for LiDAR-based HPS, we equip the compared methods [23, 33, 45] with a shape regression head sharing the architecture of their pose regression head; the translation prediction of LiDARCap is taken as the average of the point cloud. Visual comparisons in Fig. 5 further highlight our method's superiority in global pose and shape estimation, yielding results that closely mirror the ground truth. Other methods struggle in situations with occlusions and noise, as exemplified by the challenging scenes from Sloper4D [13] and FreeMotion in Fig. 5. MOVIN [23] estimates translation by velocity regression, so it is not applicable to the synthetic SURREAL data, which lacks real trajectories. Our LiveHPS demonstrates robust
Table 2: Quantitative comparison with state-of-the-art methods [23, 33, 45] on FreeMotion and public datasets [4, 13, 33, 45, 51, 61, 62] in terms of J/V Err, Ang Err, and SUCD.
performance against noise such as carried objects, as demonstrated by the left FreeMotion case in Fig. 5.
Tab. 3 illustrates our cross-dataset evaluation, which validates the generalization capability of LiveHPS by directly testing on other datasets. LiDARHuman26M [33] and LIPD [45] only offer pose parameters. CIMI4D [62] provides pose, shape, and translation, but its translation is not that precise, as shown in the third row of Fig. 6. SemanticKITTI [4] and HuCenLife [61] are large-scale datasets for 3D perception and do not provide SMPL annotations. Thanks to the VAD module's ability to harmonize diverse human point cloud distributions and the CPO module's ability to model geometric and dynamic human features, our method achieves SOTA performance on these cross-domain datasets, even in challenging cases with extreme occlusions, as Fig. 6 shows.
### Ablation Study
We first validate the effectiveness of each module in LiveHPS. Then, we evaluate inner designs of each module to verify the effectiveness of detailed structures.
Tab. 4 shows the performance of our method with different network modules, demonstrating the necessity of the vertex-guided adaptive distillation (VAD) and consecutive pose optimizer (CPO) modules. We also report ablation details of the attention-based temporal and spatial feature enhancement in CPO, showing that the combination of temporal and spatial feature interaction performs best. We further conduct experiments to validate our attention-based multi-head SMPL solver. Our pose and shape solver, using the same network as CPO, outperforms the ST-GCN from LiDARCap [33] and the GRU from LIP [45] by fully utilizing the global temporal context and the local spatial relationships existing in consecutive body joints. For the translation solver, the average of the point cloud can reflect the coarse translation, but it is very unstable as the point distribution changes. Compared with the global velocity estimation utilized in MOVIN [23], our skeleton-aware translation solver directly estimates translations without error accumulation. Moreover, unlike the GRU-based pose-guided corrector in LIP [45], which overlooks the relationship between the skeleton and the point cloud, our approach performs better by modeling this relationship with richer spatial information.
### Generalization Capability Test
We assess the generalization capability of LiveHPS across varying lengths of input point cloud sequences and across different numbers of points on the human body in each frame, as Tab. 5 shows. Our method performs better with increasing sequence length but maintains good accuracy even with short inputs. In addition, our method remains relatively robust even with only 100 points on the human body, which corresponds to a far distance (about 15 meters) from the LiDAR or severe occlusion. Fig. 1 and Fig. 7 show that our method is practical for in-the-wild scenarios, capturing human motion in large-scale scenes day and night with real-time performance of up to 45 fps. This strongly demonstrates the feasibility and superiority of our method in real-life applications.
_More application results are in the appendix._
## 6 Conclusion
In this paper, we propose a novel single-LiDAR-based approach for predicting human pose, shape, and translation in large-scale free environments. To handle occlusion and noise interference, we design a novel distillation mechanism and a temporal-spatial feature interaction optimizer. Importantly, we propose a large multi-person human motion dataset, which is significant for future in-the-wild HPS research. Extensive experiments on diverse datasets demonstrate the robustness and effectiveness of our method.
**Limitations** When a human stays static in the large-scale
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c c|c} \hline \hline
 & \multicolumn{2}{c|}{Network Module} & \multicolumn{2}{c|}{Consecutive Pose Optimizer} & \multicolumn{2}{c|}{Multi-head SMPL (Pose and Shape) Solver} & \multicolumn{3}{c|}{Skeleton-aware Translation Solver} & \multirow{2}{*}{Ours} \\ \cline{2-10}
 & w/o VAD & w/o CPO & w/o Temporal & w/o Spatial & ST-GCN & GRU & Average & MOVIN & LIP & \\ \hline
J/V Err(PST) & 129.19/140.42 & 127.44/140.87 & 121.93/135.56 & 120.20/129.51 & 124.03/135.38 & 120.63/130.83 & 177.66/184.57 & 1294.8/1310.95 & 165.04/172.36 & **119.27/128.61** \\
Ang Err & 16.95 & 25.20 & 27.58 & 18.09 & 19.40 & 18.34 & - & - & - & **15.79** \\
SUCD & 3.17 & 4.08 & 3.51 & 2.95 & 3.28 & 3.08 & 4.25 & 8569.68 & 3.07 & **2.85** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Ablation studies for our network modules. We also evaluate the internal details of each module.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline
Frames & 1 & 4 & 8 & 16 & 32 \\ \hline
J/V Err(PST) & 142.48/155.58 & 130.73/141.10 & 126.23/135.66 & 123.08/133.24 & **119.27/128.61** \\
Ang Err & 19.22 & 17.31 & 16.63 & 16.05 & **15.79** \\
SUCD & 5.22 & 3.03 & 3.01 & 3.02 & **2.85** \\ \hline \hline
Points & 0-100 & 100-200 & 200-300 & 300-1000 & >1000 \\ \hline
J/V Err(PST) & 150.16/168.42 & 110.68/114.40 & 106.88/113.98 & 105.96/110.37 & 101.87/107.70 \\
Ang Err & 16.78 & 16.34 & 15.77 & 13.64 & 12.84 \\
SUCD & 4.54 & 2.31 & 2.25 & 2.52 & 2.63 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: More results with different lengths of the input sequence and different numbers of points on the human body on the FreeMotion dataset.
Figure 7: Performance of LiveHPS on real-time-captured scenes.
scene for a long time, our model cannot fully utilize the dynamic information in consecutive frames, which may cause a misjudged human global orientation opposite to the ground-truth pose.
## 7 Acknowledgements
This work was supported by NSFC (No.62206173), Shanghai Sailing Program (No.22YF1428700), Natural Science Foundation of Shanghai (No.22dz1201900), MoE Key Laboratory of Intelligent Perception and Human-Machine Collaboration (ShanghaiTech University), Shanghai Frontiers Science Center of Human-centered Artificial Intelligence (ShanghaiAI).
## References
* [1] EasyMocap: make human motion capture easier. GitHub, 2021.
* [2] Sikander Amin, Mykhaylo Andriluka, Marcus Rohrbach, and Bernt Schiele. Multi-view pictorial structures for 3D human pose estimation. In _BMVC_, 2009.
* [3] Andreas Baak, Meinard Muller, Gaurav Bharaj, Hans-Peter Seidel, and Christian Theobalt. A data-driven approach for real-time full body pose reconstruction from a depth camera. In _ICCV_, 2011.
* [4] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Jurgen Gall. Semantickit: A dataset for semantic scene understanding of lidar sequences. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 9297-9307, 2019.
* [5] Paul J Besl and Neil D McKay. Method for registration of 3-d shapes. In _Sensor fusion IV: control paradigms and data structures_, pages 586-606. Spie, 1992.
* [6] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14_, pages 561-578. Springer, 2016.
* [7] Magnus Burenius, Josephine Sullivan, and Stefan Carlsson. 3D pictorial structures for multiple view articulated pose estimation. In _CVPR_, 2013.
* [8] Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, et al. HuMMan: Multi-modal 4D human dataset for versatile sensing and modeling. In _European Conference on Computer Vision_, pages 557-577. Springer, 2022.
* [9] Zhongang Cai, Liang Pan, Chen Wei, Wanqi Yin, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, and Ziwei Liu. Pointhps: Cascaded 3d human pose and shape estimation from point clouds. _arXiv preprint arXiv:2308.14492_, 2023.
* [10] Peishan Cong, Xinge Zhu, Feng Qiao, Yiming Ren, Xidong Peng, Yuenan Hou, Lan Xu, Ruigang Yang, Dinesh Manocha, and Yuexin Ma. Stcrowd: A multimodal dataset for pedestrian perception in crowded scenes. In _CVPR_, pages 19608-19617, 2022.
* [11] Peishan Cong, Xinge Zhu, Feng Qiao, Yiming Ren, Xidong Peng, Yuenan Hou, Lan Xu, Ruigang Yang, Dinesh Manocha, and Yuexin Ma. Stcrowd: A multimodal dataset for pedestrian perception in crowded scenes. In _CVPR_, pages 19608-19617, 2022.
* [12] Peishan Cong, Xinge Zhu, Feng Qiao, Yiming Ren, Xidong Peng, Yuenan Hou, Lan Xu, Ruigang Yang, Dinesh Manocha, and Yuexin Ma. Stcrowd: A multimodal dataset for pedestrian perception in crowded scenes. _arXiv preprint arXiv:2204.01026_, 2022.
* [13] Yudi Dai, Yitai Lin, Chenglu Wen, Siqi Shen, Lan Xu, Jingyi Yu, Yuexin Ma, and Cheng Wang. Hsc4d: Human-centered 4d scene capture in large-scale indoor-outdoor space using wearable imus and lidar. In _CVPR_, pages 6792-6802, 2022.
* [14] Yudi Dai, Yitai Lin, Xiping Lin, Chenglu Wen, Lan Xu, Hongwei Yi, Siqi Shen, Yuexin Ma, and Cheng Wang. Sloper4d: A scene-aware dataset for global 4d human pose estimation in urban environments. _arXiv preprint arXiv:2303.09095_, 2023.
* [15] Ahmed Elhayek, Edilson de Aguiar, Arjun Jain, Jonathan Tompson, Leonid Pishchulin, Mykhaylo Andriluka, Chris Bregler, Bernt Schiele, and Christian Theobalt. Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras. In _CVPR_, 2015.
* [16] Martin Ester, Hans-Peter Kriegel, Jorg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In _kdd_, pages 226-231, 1996.
* [17] Kaiwen Guo, Jonathan Taylor, Sean Fanello, Andrea Tagliasacchi, Mingsong Dou, Philip Davidson, Adarsh Kowdle, and Shahram Izadi. Twinfusion: High framerate nonrigid fusion through fast correspondence tracking. In _3DV_, pages 596-605, 2018.
* [18] Marc Habermann, Weipeng Xu, Michael Zollhofer, Gerard Pons-Moll, and Christian Theobalt. Livecap: Real-time human performance capture from monocular video. _ACM Transactions on Graphics (TOG)_, 38(2):14:1-14:17, 2019.
* [19] Marc Habermann, Weipeng Xu, Michael Zollhofer, Gerard Pons-Moll, and Christian Theobalt. Deepcap: Monocular human performance capture using weak supervision. In _CVPR_, 2020.
* [20] Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, and Lan Xu. Challencap: Monocular 3d capture of challenging human performances using multi-modal references. In _CVPR_, pages 11400-11411, 2021.
* [21] Y. Huang, F. Bogo, C. Lassner, A. Kanazawa, P. V. Gehler, J. Romero, I. Akhter, and M. J. Black. Towards accurate marker-less human shape and pose estimation over time. In _3DV_, pages 421-430, 2017.
* [22] Yinghao Huang, Manuel Kaufmann, Emre Aksan, Michael J Black, Otmar Hilliges, and Gerard Pons-Moll. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. _ACM Transactions on Graphics (TOG)_, 37(6):1-15, 2018.
* [23] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. _TPAMI_, 36(7):1325-1339, 2013.
* [23] Deok-Kyeong Jang, Dongseok Yang, Deok-Yun Jang, Byeoli Choi, Taeil Jin, and Sung-Hee Lee. Movin: Real-time motion capture using a single lidar. _arXiv preprint arXiv:2309.09314_, 2023.
* [24] Hanbyul Joo, Hao Liu, Lei Tan, Lin Gui, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh. Panoptic studio: A massively multiview system for social motion capture. In _ICCV_, 2015.
* [25] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In _CVPR_, 2018.
* [26] Angjoo Kanazawa, Jason Y. Zhang, Panna Felsen, and Jitendra Malik. Learning 3d human dynamics from video. In _CVPR_, 2019.
* [27] Wonhui Kim, Manikandasriram Srinivasan Ramanagopal, Charles Barto, Ming-Yuan Yu, Karl Rosaen, Nick Goumas, Ram Vasudevan, and Matthew Johnson-Roberson. Pedx: Benchmark dataset for metric 3-d pose estimation of pedestrians in complex urban intersections. _IRAL_, 4(2):1940-1947, 2019.
* [28] Muhammed Kocabas, Nikos Athanasiou, and Michael J. Black. Vibe: Video inference for human body pose and shape estimation. In _CVPR_, 2020.
* [29] Muhammed Kocabas, Chun-Hao P. Huang, Otmar Hilliges, and Michael J. Black. Pare: Part attention regressor for 3d human body estimation. In _ICCV_, pages 11127-11137, 2021.
* [30] Nikos Kolotouros, Georgios Pavlakos, and Kostas Daniilidis. Convolutional mesh regression for single-image human shape reconstruction. In _CVPR_, 2019.
* [31] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In _ICCV_, pages 11605-11614, 2021.
* [32] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J Black, and Peter V Gehler. Unite the people: Closing the loop between 3d and 2d human representations. In _CVPR_, pages 6050-6059, 2017.
* [33] Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin Ma, Lan Xu, Jingyi Yu, and Cheng Wang. Lidarcap: Long-range marker-less 3d human motion capture with lidar point clouds. _arXiv preprint arXiv:2203.14698_, 2022.
* [34] Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++, 2021.
* [35] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. Smpl: A skinned multi-person linear model. _ACM Trans. Graph._, 34(6):248:1-248:16, 2015.
* [36] Zhengyi Luo, Ryo Hachiuma, Ye Yuan, and Kris Kitani. Dynamics-regulated kinematic policy for egocentric pose estimation. _Advances in Neural Information Processing Systems_, 34, 2021.
* [37] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. Amass: Archive of motion capture as surface shapes. In _ICCV_, 2019.
* [38] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation in the wild using improved cnn supervision. In _3DV_, pages 506-516. IEEE, 2017.
* [39] Noitom. Noitom Motion Capture Systems. [https://www.noitom.com/](https://www.noitom.com/), 2015.
* [40] OptiTrack. OptiTrack Motion Capture Systems. [https://www.optitrack.com/](https://www.optitrack.com/), 2009.
* [41] Georgios Pavlakos, Xiaowei Zhou, Konstantinos G Derpanis, and Kostas Daniilidis. Harvesting multiple views for marker-less 3d human pose annotations. In _CVPR_, 2017.
* [42] Xidong Peng, Xinge Zhu, and Yuexin Ma. Cl3d: Unsupervised domain adaptation for cross-lidar 3d detection. _AAAI_, 2023.
* [43] RealityCapture. Capturing Reality. [https://www.capturingreality.com/](https://www.capturingreality.com/), 2023.
* [44] Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J. Guibas. Humor: 3d human motion model for robust pose estimation. In _ICCV_, pages 11488-11499, 2021.
* [45] Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, and Yuexin Ma. Lidar-aid inertial poser: Large-scale human motion capture by sparse inertial and lidar sensors. _TVCG_, 2023.
* [46] Helge Rhodin, Nadia Robertini, Christian Richardt, Hans-Peter Seidel, and Christian Theobalt. A versatile scene model with differentiable visibility applied to generative pose estimation. In _ICCV_, 2015.
* [47] Nadia Robertini, Dan Casas, Helge Rhodin, Hans-Peter Seidel, and Christian Theobalt. Model-based outdoor performance capture. In _3DV_, 2016.
* [48] Jamie Shotton, Andrew Fitzgibbon, Mat Cook, Toby Sharp, Mark Finocchio, Richard Moore, Alex Kipman, and Andrew Blake. Real-time human pose recognition in parts from single depth images. In _CVPR_, 2011.
* [49] Leonid Sigal, Alexandru O. Balan, and Michael J. Black. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. _IJCV_, 2010.
* [50] Tomas Simon, Hanbyul Joo, Iain Matthews, and Yaser Sheikh. Hand keypoint detection in single images using multiview bootstrapping. In _CVPR_, 2017.
* [51] Gul Varol, Javier Romero, Xavier Martin, Naureen Mahmood, Michael J Black, Ivan Laptev, and Cordelia Schmid. Learning from synthetic humans. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 109-117, 2017.
* [52] Vicon. Vicon Motion Capture Systems. https://www.vicon.com/, 2010.
* [53] Daniel Vlasic, Rolf Adelsberger, Giovanni Vannucci, John Barnwell, Markus Gross, Wojciech Matusik, and Jovan Popovic. Practical motion capture in everyday surroundings. _TOG_, 26(3):35-es, 2007.
* [54] Timo Von Marcard, Bodo Rosenhahn, Michael J Black, and Gerard Pons-Moll. Sparse inertial poser: Automatic 3d human pose estimation from sparse imus. In _Computer Graphics Forum_, pages 349-360. Wiley Online Library, 2017.
* [55] Timo Von Marcard, Roberto Henschel, Michael J Black, Bodo Rosenhahn, and Gerard Pons-Moll. Recovering accurate 3d human pose in the wild using imus and a moving camera. In _ECCV_, pages 601-617, 2018.
* [56] Thang Vu, Kookhoi Kim, Tung M Luu, Thanh Nguyen, and Chang D Yoo. Softgroup for 3d instance segmentation on point clouds. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2708-2717, 2022.
* [57] Xiaolin Wei, Peizhao Zhang, and Jinxiang Chai. Accurate realtime full-body motion capture using a single depth camera. _SIGGRAPH Asia_, 31(6):188:1-12, 2012.
* [58] XSENS. Xsens Technologies B.V. https://www.xsens.com/, 2011.
* [59] Lan Xu, Weipeng Xu, Vladislav Golyanik, Marc Habermann, Lu Fang, and Christian Theobalt. Eventcap: Monocular 3d capture of high-speed human motions using an event camera. In _CVPR_, 2020.
* [60] Weipeng Xu, Avishek Chatterjee, Michael Zollhofer, Helge Rhodin, Dushyant Mehta, Hans-Peter Seidel, and Christian Theobalt. Monoperfcap: Human performance capture from monocular video. _ACM Transactions on Graphics (TOG)_, 37(2):27:1-27:15, 2018.
* [61] Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, and Yuexin Ma. Human-centric scene understanding for 3d large-scale scenarios. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 20349-20359, 2023.
* [62] Ming Yan, Xin Wang, Yudi Dai, Siqi Shen, Chenglu Wen, Lan Xu, Yuexin Ma, and Cheng Wang. Cimi4d: A large multimodal climbing motion dataset under human-scene interactions. _arXiv preprint arXiv:2303.17948_, 2023.
* [63] Xinyu Yi, Yuxiao Zhou, and Feng Xu. Transpose: Real-time 3d human translation and pose estimation with six inertial sensors. _ACM Transactions on Graphics (TOG)_, 40(4):1-13, 2021.
* [64] Xinyu Yi, Yuxiao Zhou, Marc Habermann, Soshi Shimada, Vladislav Golyanik, Christian Theobalt, and Feng Xu. Physical inertial poser (pip): Physics-aware real-time human motion tracking from sparse inertial sensors. In _CVPR_, 2022.
* [65] Tianwei Yin, Xingyi Zhou, and Philipp Krahenbuhl. Center-based 3d object detection and tracking. _CVPR_, 2021.
* [66] Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, and Yebin Liu. Doublefusion: Real-time capture of human performances with inner body shapes from a single depth sensor. _TPAMI_, 2019.
* [67] Andrei Zanfir, Eduard Gabriel Bazavan, Mihai Zanfir, William T Freeman, Rahul Sukthankar, and Cristian Sminchisescu. Neural descent for visual 3d human pose and shape. _arXiv preprint arXiv:2008.06910_, 2020.
* [68] Zhengyou Zhang. A flexible new technique for camera calibration. _IEEE Transactions on pattern analysis and machine intelligence_, 22(11):1330-1334, 2000.
* [69] Xinge Zhu, Yuexin Ma, Tai Wang, Yan Xu, Jianping Shi, and Dahua Lin. Ssn: Shape signature networks for multi-class object detection from point clouds. In _ECCV_, pages 581-597. Springer, 2020.
* [70] Xinge Zhu, Hui Zhou, Tai Wang, Fangzhou Hong, Wei Li, Yuexin Ma, Hongsheng Li, Ruigang Yang, and Dahua Lin. Cylindrical and asymmetrical 3d convolution networks for lidar-based perception. _TPAMI_, 2021.
**LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation**
**in Free Environment**
Supplementary Material
In the supplementary material, we first provide a detailed explanation of our data processing procedures. Our capture system for the newly collected dataset encompasses multiple sensors, such as LiDARs, cameras, and IMUs. In Section 8, we present the details of data processing. In order to enhance the dataset's diversity of actions and shapes for pretraining, we generate a large amount of synthetic data. Section 9 elaborates on the methodology employed for generating synthetic point cloud data. Additionally, we emphasize that the FreeMotion dataset encompasses various shapes and actions with occlusions and human-object interactions. In Section 10, we showcase nearly twenty human scans and forty distinct action types in FreeMotion. Moreover, in Section 11, we show more experiments and qualitative evaluations of our ablation study. Lastly, in Section 12, we present multi-frame results of LiveHPS in diverse scenarios, containing indoor, outdoor, and night scenes, to further demonstrate the robustness and effectiveness of our method for free HPS in arbitrary scenarios.
## 8 Details of Data Processing
FreeMotion is collected with two capture systems: the first is an indoor multi-camera panoptic studio, and the second operates in various challenging large-scale scenarios with multiple types of sensors. In this section, we provide more details about our data acquisition, data pre-processing, and multi-sensor synchronization and calibration. The whole FreeMotion dataset will be made publicly available.
### Data Acquisition
When we capture data with both capture systems, we divide the capture process into three stages. In the first stage, the actor performs different kinds of actions alone, without occlusion and noise. In the second stage, we arrange other persons to interact with the main actor and perform specific actions depending on the scene characteristics, such as a basketball court, meeting room, etc. The interactions bring real occlusions. In the last stage, the main actor interacts with some objects, such as balls, packages, skateboards, etc. The objects bring real noise.
### Data Pre-Processing
**Point Cloud.** The raw point clouds are captured by a 128-beam Ouster OS1 LiDAR. The LiDAR features a 360\({}^{\circ}\) horizontal field of view (FOV) \(\times\) 45\({}^{\circ}\) vertical FOV. For the training dataset, we first record the point cloud of the static background and then set a threshold to remove the background points, obtaining the point clouds of the actors. For the processed point clouds, we use DBSCAN [15] clustering to obtain the instance point cloud of each person, and we then employ the Hungarian matching algorithm for point cloud-based 3D tracking (a minimal sketch is given below). Moreover, to guarantee high-quality annotations, we review the segmented point cloud sequence of each person. For challenging and complex scenes, such as the meeting room, which contains many extra objects, we annotate each point cloud sequence manually. For the testing dataset, we train the instance segmentation model [56] on our training set and run inference on our testing set, while the tracking procedure is the same as above.
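The clustering-and-tracking step can be sketched with standard tools. The following is a minimal illustration rather than the exact pipeline used for the dataset: the DBSCAN parameters `eps` and `min_samples` are illustrative assumptions (not stated in the paper), and the tracker is reduced to Hungarian matching on cluster centroids.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

def cluster_persons(points, eps=0.4, min_samples=20):
    # Cluster background-subtracted points (N, 3) into person instances;
    # DBSCAN label -1 marks noise points, which are discarded.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in range(labels.max() + 1)]

def match_tracks(prev_centroids, curr_clusters):
    # Associate clusters across consecutive frames by Hungarian matching
    # on pairwise centroid distances (a stand-in for the 3D tracker).
    curr = np.stack([c.mean(axis=0) for c in curr_clusters])
    cost = np.linalg.norm(prev_centroids[:, None] - curr[None], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))  # (track_id, cluster_id) pairs
```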
**SMPL annotations.** For the first capture system, we use the multi-camera method [1] to generate the ground-truth SMPL parameters (poses and translations). The shape parameters predicted by this method are not accurate. As Figure 8 shows, we use the multi-camera method [43] to reconstruct the human mesh and fit the human mesh to the SMPL model in template pose to obtain more accurate shape parameters. For the second capture system, we use a full set of Noitom [39] equipment (17 IMUs) to get the SMPL pose parameters, and the shape parameters of each performer are captured in the panoptic studio in advance. In outdoor capture scenes, we follow [13] to fit the SMPL mesh to the point cloud for accurate translation parameters.
### Multi-sensor System Synchronization and Calibration
**Synchronization.** The synchronization among various-view and multi-modal sensors is accomplished by detecting the jump peak shared across multiple devices. Thus, actors are asked to perform jumps before the capture. We subsequently identify the peaks in each capture device manually and use this timestamp as the start for time synchronization.
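As a minimal sketch, such an offset could also be computed automatically once a jump-related "height-like" signal is available from each sensor (e.g., hip height from the LiDAR clusters, vertical excursion from the IMU); the coarse search window below is an assumption, and in practice the peaks were identified manually.

```python
import numpy as np

def sync_offset(t_a, h_a, t_b, h_b, window=(0.0, 10.0)):
    # Timestamp of the jump peak in each sensor, restricted to a coarse
    # common window; the difference is the clock offset between sensors.
    def peak_time(t, h):
        m = (t >= window[0]) & (t <= window[1])
        return t[m][np.argmax(h[m])]
    return peak_time(t_a, h_a) - peak_time(t_b, h_b)
```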
**Calibration.** Our capture systems contain LiDARs, cameras, and IMUs. For the calibration of the three LiDARs, we select static background point clouds at the same timestamp and manually perform point cloud registration. For multi-camera calibration, we use [43] to calibrate all cameras. The IMUs are calibrated with the algorithm of Noitom [39]. For cross-sensor calibration, in the first capture system we generate the SMPL human mesh from multi-camera data, whose coordinate frame coincides with the multi-camera coordinate frame; we then follow LIP [45] and utilize the ICP [5] method between the segmented human point cloud and the human mesh vertices for LiDAR-camera calibration. In the second capture system, we utilize Zhang's camera calibration method [68] for LiDAR-camera calibration, and the coordinate frame of the human SMPL mesh coincides with that of the IMUs. We use the above ICP method for LiDAR-IMU calibration.
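For illustration, a minimal point-to-point ICP (Kabsch/SVD) that could align a segmented human point cloud to the SMPL mesh vertices is sketched below; this is a simplified stand-in for the calibration step, without the outlier rejection a real implementation would need.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    # Align source (N, 3) to target (M, 3); returns rotation R and
    # translation t such that source @ R.T + t approximates target.
    src = source.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        nn = target[tree.query(src)[1]]          # nearest target points
        mu_s, mu_t = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_t))
        if np.linalg.det(U @ Vt) < 0:            # enforce a proper rotation
            Vt[-1] *= -1
        R = (U @ Vt).T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```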
## 9 Data Synthesis
In order to ensure the diversity of training data, we synthesize point cloud data on public datasets [34, 37, 51] that do not contain point clouds as input. In this section, we explain our detailed implementation of data synthesis. LiDAR works in a time-of-flight manner with simple principles of physics, so it can be simulated with only a small gap to real data. We generate a simulated LiDAR point cloud by emitting regular rays from the LiDAR center according to its horizontal and vertical resolution. A ray is reflected back when encountering an obstacle, generating a point at the intersection on the surface of the obstacle. We conduct the simulation according to the parameters of the Ouster (OS1-128), which is also the device used in collecting FreeMotion. Its horizontal resolution is 2048 and its vertical resolution is 128 lines. Each emission direction is described by a unit vector in the spherical coordinate system \(d=[\cos\varphi\sin\theta,\cos\varphi\cos\theta,\sin\varphi]\), where \(\varphi\) represents the angle between the emission direction and the XY plane, \(\theta\) indicates the azimuth, and \(c=[0,0,2]\) is the LiDAR center. The intersection point \(p=[p_{x},p_{y},p_{z}]\) is calculated by
\\[p=c+d\\frac{n^{T}(q-c)}{n^{T}d}, \\tag{10}\\]
where \\(n\\) represents the normal vector of corresponding mesh and \\(q\\) denotes any vertex point of the mesh.
The calculation of the intersections between LiDAR rays and the mesh surface consists of three main steps. The first step is to compute the intersection of a ray with the plane of each triangular patch, and the second step is to judge whether that intersection lies inside the triangle. Due to occlusions, a ray should only intersect the first mesh surface it touches. Finally, we therefore filter the intersections to keep only the one that occurs first in the LiDAR view.
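A brute-force sketch of this three-step procedure is given below, assuming the scene mesh is provided as an array of triangles and the emission directions follow the spherical parameterization above; a real implementation would use an accelerated ray-casting library instead.

```python
import numpy as np

def ray_mesh_hits(c, dirs, tris, eps=1e-9):
    # tris: (M, 3, 3) triangle vertices; dirs: (K, 3) unit ray directions.
    v0, v1, v2 = tris[:, 0], tris[:, 1], tris[:, 2]
    n = np.cross(v1 - v0, v2 - v0)               # triangle normals
    num = np.einsum('ij,ij->i', n, v0 - c)       # n^T (q - c) of Eq. (10)
    pts = np.full((len(dirs), 3), np.nan)
    for k, d in enumerate(dirs):
        den = n @ d                              # n^T d of Eq. (10)
        valid = np.abs(den) > eps
        t = np.full(len(tris), np.inf)
        t[valid] = num[valid] / den[valid]
        # step 1: ray-plane intersection points (dummy point c if invalid)
        p = c + np.where(np.isfinite(t), t, 0.0)[:, None] * d
        # step 2: inside-triangle test via consistent edge orientation
        s0 = np.einsum('ij,ij->i', np.cross(v1 - v0, p - v0), n)
        s1 = np.einsum('ij,ij->i', np.cross(v2 - v1, p - v1), n)
        s2 = np.einsum('ij,ij->i', np.cross(v0 - v2, p - v2), n)
        ok = np.isfinite(t) & (t > eps) & (s0 >= 0) & (s1 >= 0) & (s2 >= 0)
        if ok.any():                             # step 3: keep first hit
            pts[k] = p[np.where(ok)[0][np.argmin(t[ok])]]
    return pts                                   # NaN rows: no return
```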
Since the above datasets focus on single-person scenarios, we randomly crop some body parts of the point clouds to simulate the occlusions in real scenes:
\[index=\mathrm{dis}(p,o)>r,\qquad pc_{crop}=pc[index], \tag{11}\]
where \\(o\\) represents a random point position in point cloud \\(pc\\) as the round dot, and \\(r\\) denotes the radius of the crop area. The function \\(dis()\\) means the distance between the two point positions. The \\(pc_{crop}\\) is the synthetic point cloud with occlusion.
## 10 Details of Dataset
We collect FreeMotion in diverse multi-person scenarios, including various sports venues as well as daily scenes. We catalogue the types of action and the human shapes covered in the different scenarios. As Figure 8 shows, FreeMotion contains diverse human shapes, and we use large-scale synthetic data, such as SURREAL [51], to obtain more humans with different shapes. As Figure 9 shows, FreeMotion comprises nearly forty distinct action types, providing point cloud data from three different views and the corresponding ground-truth mesh. The action types are arranged from top to bottom and from left to right. We begin by showing some common warm-up actions performed in daily routines, such as leg pressing, kicking, or lumbar movements. These fundamental motions are essential for sports activities and represent simple motions with minimal self-occlusion. However, they are still affected by external occlusion, as depicted in the left point cloud of "Lumbar Movement" and the right point cloud of "Side Stretch". The more challenging daily motion types for point clouds are squatting poses, which involve serious self-occlusion, such as bending, crawling, and sitting down. Moreover, squatting motions are prone to losing information due to external occlusion, as shown in the middle point clouds of "Crawl" and "Long Jump". Furthermore, we present sports motions captured in large-scale multi-person sports scenes. These motions pose greater challenges due to more self-occlusion, as defined by the LIP [45] benchmark. In FreeMotion, our sports motions exhibit more severe external occlusion, resulting in point clouds with only a few points or even no points due to occlusion by other actors. This can be observed in the middle point clouds of "Football" and "Ping-Pong". Lastly, we also capture data in various daily walkway scenes, including crossing obstacles, carrying shoulder bags and backpacks, etc. These types of motion occur frequently in daily life, and the most challenging issues are the large amount of human-irrelevant noise and the occlusion caused by the interactive objects.
## 11 More experiments
In this section, we show more experiments and more qualitative results of our ablation study.
Table 6: Quantitative comparison of LiveHPS fine-tuned with pseudo-labels on existing LiDAR-based mocap datasets and our FreeMotion.

Our LiveHPS can provide high-quality results, notably in
datasets like CIMI4D, where our results often align more closely with the point clouds than the global ground-truth mesh. With the SUCD metric, we can sift out a set of high-quality estimations as pseudo-labels to supplement the missing SMPL annotations in LiDARHuman26M [33], LIPD [45], and parts of our FreeMotion, particularly in outdoor multi-person scenes lacking dense IMU annotation. These pseudo-labels are then incorporated into our training set to fine-tune LiveHPS. Table 6 displays the enhanced performance after fine-tuning, affirming LiveHPS's effectiveness and robustness, and its potential as an automated human pose and shape annotation tool.
We show the qualitative evaluation of the ablation study, where the red boxes mark incorrect local motions. As Figure 10 shows, the network without the consecutive pose optimizer module is unable to capture coherent temporal and spatial features, which causes large mistakes in rotation estimation even when the joint positions are basically correct. The network without the vertex-guided adaptive distillation module also performs poorly on local motions when the input point cloud is occluded, for example at the feet and hands. As shown in Figure 11, the network without temporal information makes the same kind of mistakes in rotation estimation. Our method performs better, especially on local motions such as the head and hands. As shown in Figure 12, when the point cloud is occluded, our method maintains stability, while the others are clearly affected by the occlusion of the point clouds, such as the right hand in the first row, and the legs of the STGCN result and the left hand of the GRU result in the second row. Finally, we evaluate the translation estimation; MOVIN
Figure 8: Diverse human shapes in FreeMotion and the synthetic dataset. We display the human reconstruction meshes and the process of generating shape parameters.
provides a velocity-based method for translation estimation, which causes severe cumulative errors, while all the other methods provide basically correct results. In order to observe the accuracy of the translation prediction more precisely, we subtract the predicted translation from the original point cloud to obtain the point cloud centered at the origin. As Figure 13 shows, our method still maintains stability when the actor is squatting.
## 12 More Results of LiveHPS
In this section, we present additional multi-frame panoptic results of LiveHPS, which further illustrate the effectiveness and generalization capability of our method for estimating accurate local poses in diverse challenging scenarios, encompassing indoor, outdoor, and night scenes.
**Indoor scene.** Most existing LiDAR-based mocap datasets [33, 45] primarily capture data in outdoor scenes to leverage the wide coverage of LiDAR sensors. Indoor data often present additional challenges, such as heavy noise and occlusion in the point cloud caused by complex surrounding furniture at limited range. As shown in Figure 14, three actors are engaged in indoor speech motions, involving tasks such as cleaning the desktop, adjusting the screen, giving a presentation, and writing on a whiteboard. We also display image references from three views; in some cases, actors fall outside the camera's range. External occlusions arise due to the presence of other actors (purple actor in "Adjust the Screen") and indoor facilities (cyan actor in "Write on Whiteboard"). Despite these challenges, our method demonstrates reliable performance on occluded point clouds, showing its robustness in complex indoor environments.
**Outdoor scene.** Benefiting from the long-range sensing properties of LiDAR, FreeMotion can capture multi-person motions in various large-scale scenes. As shown in Figure 15, we collect data on a basketball court measuring 28 meters in length and 15 meters in width. The sequence shows a fast-break tactic involving three actors; external occlusion primarily occurs when the actors scramble for the basketball, as observed in the right part of the sequence. Despite the challenging conditions, our method performs well even when the point cloud becomes extremely sparse because actors are far from the LiDAR sensor or occluded by others. These results highlight the potential applicability of LiveHPS in sports events.
**Night scene.** LiDAR sensors operate independently of environmental lighting conditions and can work in completely dark environments. In Figure 16, we present sequential results of LiveHPS captured on a square at night. The scene depicts a densely populated area with complex terrain. The LiDAR captures seventeen persons in a region measuring 24 meters in length and 20 meters in width. In contrast, the image reference is significantly unclear due to the absence of adequate lighting. Nevertheless, our method demonstrates reliable performance in this challenging scene, showing that LiveHPS can be applied to arbitrary real-world scenes.
Figure 9: Different types of action in FreeMotion. We display the point cloud from three different views and the corresponding ground-truth mesh for each type of action. The left point cloud is the main view, which matches the ground-truth mesh; the middle point cloud is the back view and the right point cloud is the side view.
Figure 11: Qualitative evaluation of different optimization configurations of the CPO module.

Figure 12: Qualitative evaluation of our attention-based Inverse Kinematics Solver.

Figure 10: Qualitative evaluation of our network modules.
Figure 13: Qualitative evaluation of translation estimation. The blue line marks the height of the point cloud with ground-truth translation, the green line the height of the point cloud with our predicted translation, and the red line the point cloud with the LIP-predicted translation or average locations.
Figure 14: Sequential result of LiveHPS in an indoor scene. We select four time points to show the indoor speech motion. "T" denotes the timestamp in seconds. For each timestamp, we show the point cloud, the corresponding result of LiveHPS, and the image reference; "V-x" with the same color denotes the three camera views at the same timestamp. Digital labels with different colors on the image correspond to the point clouds with the same color. Note that, to show more details, we zoom in on the results.
Figure 15: Sequential result of LiveHPS in an outdoor scene. We show the sequential point cloud and results with a time step of 10 seconds. To show the image reference in three views, we select three different timestamps for each view; "V-x" with different colors denotes different timestamps in view x (x=1,2,3). Digital labels with different colors on the image correspond to the point clouds with the same color.
Figure 16: Sequential result of LiveHPS in a night scene. The time step of the sequential point cloud and results is 10 seconds. "V-x" with different colors denotes different timestamps in view x (x=1,2,3). T denotes the timestamp of the image in seconds. Digital labels with different colors on the image correspond to the point clouds with the same color.
cambridge_university_press/be6b34ee_96d1_4e05_ae1f_7eb08636a7e8.md | # Annual net balance of North Cascade glaciers, 1984-94
Mauri S. Pelto
Cross-correlation of annual net balance for eight glaciers ranges from 0.83 to 0.97. This indicates the mass balances of the eight glaciers have been responding similarly to climate conditions despite their range of topographic and geographic characteristics. Annual net balance of individual glaciers was correlated with climate records. The highest ablation-season correlation coefficient is mean May-August temperature, ranging from 0.63 to 0.84. The highest accumulation-season correlation coefficient is total accumulation-season precipitation, ranging from 0.35 to 0.59.
## 1 Mass-Balance Measurement
The North Cascades mountains are host to approximately 725 glaciers (Fig. 1) (Post and others, 1971; Pelto, 1993). The North Cascade Glacier Climate Project (NCGCP) was founded to identify the response of North Cascade glaciers to regional climate change. Ebbesmeyer and others (1991) noted the broad impact of a regional climate change occurring in 1976, identifying a significant shift in 40 environmental factors that are sensitive to climate. Glacier response to alpine weather patterns and climate is complicated by local effects. Thus, to understand the causes of and nature of changes in glacier mass balance, it was necessary to monitor a significant number of glaciers. Since 1984, the North Cascade Glacier Climate Project has monitored annual net balance on at least eight North Cascade glaciers (Pelto, 1988, 1993). To monitor this number of glaciers in the United States is not feasible financially or logistically using standard mass-balance methods. A modified stratigraphic method based on an observable summer surface is used to reduce logistical costs.
Annual balance is the difference between annual snow accumulation and snow-firn-ice melt (ablation). Mass-balance measurements are made on the same date each year in August and again in late September, close to the end of the ablation season. The late September date is considered the end of the hydrologic year for that glacier. Any mass-balance change occurring before the actual accumulation season begins is a measured mass loss or gain for the next hydrologic year. Annual ice and firn ablation (firn and ice net balance: Mayo and others, 1972) is determined using ablation stakes drilled into the glacier surface and simultaneously checked on the same date in late September. Residual snow accumulation (final late snow balance: Mayo and others, 1972) at the end of the ablation season is determined using probing and crevasse stratigraphy on the same date as ablation measurements are completed. The methods used are patterned after mass-balance studies on Lemon Creek Glacier, Alaska, and Blue Glacier, Washington (Heusser and Marcus, 1964; LaChapelle, 1965; Armstrong, 1989). The only differences between the methods used on Blue Glacier (LaChapelle, 1965; Armstrong, 1989) and in this study are: (1) snow density is not measured; (2) an order of magnitude higher measurement density is used.
Only accumulation measurements are made above the snow line in the accumulation zone. In the accumulation zone, annual accumulation-layer thickness is determined using crevasse stratigraphy and probing. Measurements are made in August and again in late September. The August measurements are made to determine snowmelt run-off for the late summer period and are not used in the final annual balance assessment. The average density of measurements utilized in this study is 290 points km\({}^{-2}\), while the average density of measurements used in assessing the mass balance in the accumulation zone of other Canadian, Norwegian, Swiss and United States glaciers is 33 points km\({}^{-2}\) (Pytte, 1969; Meier and others, 1971) (Table 1). This higher density is achieved by focusing on time-efficient measurement methods.
The accumulation-layer thickness is measured at each point to the nearest 0.01 m. Crevasse stratigraphy measurements are conducted only in vertically walled crevasses with distinguishable dirt bands. Crevasses lacking vertical walls yield inaccurate depth measurements. In the North Cascades the ablation surface of the previous year is always marked by a 2-5 cm thick band of dirty firn or glacier ice. The depth to the top of this dirty band is measured at several points on each crevasse wall within a distance of several meters. The average thickness of the several points is taken to be the accumulation-layer thickness at that location.
We completed more than 100 snow pits from 1984 to 1986; the range in mean accumulation-layer density observed was 0.58-0.63 Mg m\({}^{-3}\). This narrow range indicates that late in the ablation season the density of the snowpack on North Cascade glaciers is uniform and need not be measured to determine mass balance. For this reason, snow pits are no longer utilized. The lack of density variation has been observed in the two other mass-balance programs in Washington, on Blue Glacier and South Cascade Glacier (Meier and Tangborn, 1965; Armstrong, 1989). Of equal importance is that the range of density variation is of the same order as the density-measurement error, determined through repeat measurements. Since North Cascade glaciers rarely have ice lenses (an indicator of little internal accumulation), probing is an accurate method of measuring accumulation-layer thickness. The lack of ice lenses is also crucial to having a constant snow density. The probe is driven through the snowpack until the previous ablation surface is reached.
Figure 1: Location of the nine glaciers on which annual balance measurements have been made by NCGCP from 1984 to 1993.
C, Columbia; D, Daniels; F, Foss; I, Ice Worm; L, Lower Curtis; Y, Yawning; R, Rainbow; S, Spider; T, Lynch.
This surface, of glacier ice or dirty hard firn, cannot be penetrated. The probing instrument is a 1/2 in (1.27 cm) thick type L copper tube which is driven through the snowpack using a 1 kg weight.
The accuracy of crevasse stratigraphy and probing measurements is cross-checked by comparing them for consistency. On each glacier, at least 25% of the accumulation area is a zone of overlap where both probing and crevasse stratigraphy are used. This cross-checking identifies measurement points that either represent an ice lens rather than the previous summer surface, in the case of probing, or areas where crevasses do not yield representative accumulation depths, in the case of crevasse stratigraphy.
The standard deviation in snow depth obtained in cross-checking and duplicate measurements is smallest for crevasse stratigraphy, 0.02 m, and 0.03 m for probing. The narrow range of deviation in vertically walled crevasses indicates that they do yield consistent and representative accumulation depths late in the summer. After a decade of predominantly negative mass balance, the number of crevasses that are sufficiently open to complete measurements has declined significantly.
To ensure that mass-balance measurements are consistent from year to year, measurements are made at a spatially fixed network of points using the same methods on the same date each year. The network is fixed spatially with respect to the surrounding bedrock walls. A typical measurement network is shown in Figure 2. Each measurement network covers a glacier's entire accumulation zone with a reasonably consistent density of measurements. Because all of the sites are accessed by backpacking, it has not been a problem to reach each glacier at the right date.
Mass-balance measurements are in water equivalent. The product of the accumulation-layer thickness and the density of the snowpack (0.60 Mg m\({}^{-3}\)) yields the water equivalent. Errors in depth measurement are less than \(\pm\)0.05 m, and \(\pm\)0.02-0.03 Mg m\({}^{-3}\) is the error due to density variation. The resulting error in the annual assessment for the accumulation zone ranges from \(\pm\)0.10 to \(\pm\)0.15 m.
Below the snow line, ablation stakes emplaced in a triangular pattern are used to determine annual ablation. An ablation triangle consists of three stakes driven or drilled into the ice at 3 m spacing, forming an equilateral triangle. Three to four triangles are emplaced on each glacier. Ablation stakes are white wooden poles 3.3 m long. This length was chosen as longer stakes are too cumbersome to transport and emplace, and shorter stakes tend to melt out. Ablation measurements are made at nine points on the triangle periphery, from a cross bar resting upon two of the ablation stakes down to the firn or ice surface. Measurements are made in late July and early August, recording the ablation during the first 3 months of the ablation season. After re-drilling if necessary in August, ablation measurements are repeated in late September at the designated conclusion of the hydrologic year.
Ablation triangles are placed in a sequence from regions that first lose their snow cover to regions where snow cover persists for a significant part of the ablation season. Each ablation triangle is then representative of ablation for other parts of the glacier that lose their snowpack simultaneously. Ablation variations over the entire ablation season at the different points on the periphery of a single ablation triangle are insignificant. Thus, individual ablation stakes will be used in future years. If a stake melts out, it is not utilized in assessing ablation. Stake melt-out has been infrequent; two stakes melted out in 1987 on Lower Curtis Glacier and in 1992
\\begin{table}
\begin{tabular}{l c c c c c} \hline & \multicolumn{2}{c}{_Ablation area_} & \multicolumn{2}{c}{_Accumulation area_} & \\ _Glacier_ & _Sites_ & _Density_ & _Sites_ & _Density_ & _Source_ \\ \hline Limmern & 12 & 8 & 17 & 2 & A \\ Silvretta & 18 & 10 & 21 & 13 & A \\ Gries & 27 & 9 & 28 & 9 & A \\ Gulkana & 20 & 3 & 68 & 6 & U \\ South Cascade & 20 & 17 & 81 & 50 & U \\ Wolverine & 15 & 1 & 105 & 6 & U \\ Ålfotbreen & 5 & 4 & 126 & 35 & N \\ Austre Memurubreen & 8 & 3 & 316 & 52 & N \\ Grasubreen & 9 & 8 & 125 & 50 & N \\ Hellstugubreen & 13 & 7 & 216 & 144 & N \\ Nigardsbreen & 10 & 4 & 237 & 5 & N \\ Vestre Memurubreen & 6 & 4 & 88 & 10 & N \\ Columbia & 3 & 10 & 165 & 250 & P \\ Daniels & 4 & 12 & 115 & 280 & P \\ Foss & 3 & 15 & 110 & 240 & P \\ Lower Curtis & 3 & 10 & 185 & 280 & P \\ Lynch & 3 & 10 & 125 & 250 & P \\ Rainbow & 3 & 6 & 200 & 180 & P \\ \hline \end{tabular}
\\end{table}
Table 1: The number of measurement sites used and their density per km\({}^{2}\) in mass-balance studies on selected glaciers: in Switzerland (A) (Aellen, 1982), by the United States Geological Survey (U) (Meier and others, 1971), by the Norges Vassdrags- og Elektrisitetsvesen (N) (Pytte, 1969) and by the North Cascade Glacier Climate Project (P).
on Rainbow Glacier. The error in annual ablation measurements is estimated at \(\pm\)0.25 m, due to ice-density variations, low sampling density and stake settling. This estimate is based on the standard deviation in ablation along 50 m long transects with ablation stakes placed 5 m apart. Two of these transects were employed, on Columbia Glacier in 1987 and on Rainbow Glacier in 1991. There are at least three ablation-measurement sites on each glacier. The sampling density is low at 6-20 points km\({}^{-2}\), but comparable to the mean density of 3-17 points km\({}^{-2}\) used by the USGS and NVE (Table 1).
A mass-balance map is then compiled for each glacier. The mass balance for the entire glacier is calculated by summing the product of the glacier area within each 0.10 m mass-balance contour and the net balance of that interval (a numerical sketch is given below). The error in the mass-balance calculation for the entire glacier is \(\pm\)0.17-0.22 m. The annual balance from 1984 to 1993 for the nine North Cascade glaciers, and in 1994 for eight glaciers, is shown in Table 2. Spider Glacier will no longer be monitored; it could not be reached in 1994 because of forest fires in the
Table 2: Annual net balance of nine North Cascade glaciers, 1984-94.
watershed. The mean annual balance of -0.38 m a\({}^{-1}\) for the eight glaciers during the 1984-94 period represents a mean loss of 3.5-5.0 m of glacier thickness during the last 11 years. This is a significant amount given the thin nature of North Cascade glaciers, estimated to range from 30-60 m (Post and others, 1971).
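As a minimal numerical sketch of the contour summation described above, with purely hypothetical interval areas and balances:

```python
import numpy as np

def glacier_net_balance(areas_km2, balances_m):
    # Area-weighted mean over the 0.10 m balance contour intervals,
    # returned in m water equivalent.
    a, b = np.asarray(areas_km2), np.asarray(balances_m)
    return float((a * b).sum() / a.sum())

# three hypothetical contour intervals of a small glacier
print(glacier_net_balance([0.12, 0.25, 0.08], [-1.10, -0.30, 0.40]))
```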
## Cross-correlation of net annual balance
Table 3 contains the cross-correlations of annual balance for the eight North Cascade glaciers. The high cross-correlations (0.83-0.97) indicate the similarity of each glacier's annual balance response to the annual climate conditions. Figure 3 displays the annual balance of the eight North Cascade glaciers observed by NCGCP from 1984 to 1994. The trend from year to year is quite consistent, illustrating the high cross-correlations. The actual range in annual balance between the glaciers for any given year is significant (Fig. 3).
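Such a cross-correlation table is obtained directly from the annual balance series; the values below are hypothetical placeholders, not the measured balances of Table 2.

```python
import numpy as np

# rows: glaciers, columns: years (hypothetical balances, m w.e.)
balances = np.array([
    [-0.9,  0.3, -0.5, -1.2, -0.1],   # e.g. Columbia
    [-0.8,  0.4, -0.6, -1.1, -0.2],   # e.g. Daniels
    [-1.0,  0.2, -0.4, -1.3, -0.1],   # e.g. Foss
])
print(np.round(np.corrcoef(balances), 2))  # pairwise correlation matrix
```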
The individual glaciers were selected to represent a range of geographic and topographic characteristics. Geographic characteristics of each glacier are given in Table 4.
The moderate range of variation in annual balance makes it difficult to distinguish which geographic characteristics are most important in determining climate sensitivity. This tendency for small alpine glaciers in the Pacific Northwest to have different mass-balance histories, yet high cross-correlation coefficients, was previously noted by Letréguilly and Reynaud (1989). They compared the mean annual balance histories from 1936 to 1983 of Blue Glacier, Olympic Mountains (+0.35 m a\({}^{-1}\)) and South Cascade Glacier, North Cascades (-0.45 m a\({}^{-1}\)), which were very different. However, their sensitivity to specific climate conditions, as indicated by a cross-correlation coefficient of 0.69, was quite high for two glaciers in different, though adjacent, mountain ranges.
Pelto (1988), in examining the first several years of mass-balance data, postulated that the variation in annual balance between glaciers is due to their different geographic characteristics. The high cross-correlation of annual balance for each glacier suggests that the geographic characteristics are of secondary importance to
Figure 3: The annual balance of eight North Cascade glaciers. Note that there is a significant range in annual balance between the glaciers and that the pattern of change from year to year is similar for all of the glaciers.
\\begin{table}
\begin{tabular}{l c c c c c c c c} \hline & _Col._ & _Dan._ & _Foss_ & _I.W._ & _L.C._ & _Lyn._ & _Rbw._ & _Yaw._ \\ \hline Columbia & 1.00 & 0.96 & 0.95 & 0.88 & 0.94 & 0.94 & 0.90 & 0.97 \\ Daniels & 0.96 & 1.00 & 0.95 & 0.94 & 0.98 & 0.93 & 0.95 & 0.94 \\ Foss & 0.95 & 0.95 & 1.00 & 0.87 & 0.92 & 0.92 & 0.90 & 0.94 \\ Ice Worm & 0.88 & 0.94 & 0.87 & 1.00 & 0.91 & 0.83 & 0.86 & 0.85 \\ Lower Curtis & 0.94 & 0.98 & 0.92 & 0.91 & 1.00 & 0.90 & 0.95 & 0.96 \\ Lynch & 0.94 & 0.93 & 0.92 & 0.83 & 0.90 & 1.00 & 0.94 & 0.90 \\ Rainbow & 0.90 & 0.93 & 0.90 & 0.86 & 0.95 & 0.94 & 1.00 & 0.91 \\ Yawning & 0.97 & 0.94 & 0.94 & 0.83 & 0.96 & 0.90 & 0.91 & 1.00 \\ \hline \end{tabular}
\\end{table}
Table 3: Cross-correlation of annual net balance for eight North Cascade glaciers for the 1984-93 period.

actual climate conditions, but does not suggest that geographic characteristics are unimportant. The mean annual balance differences between glaciers do reflect important differences that are probably the result of changing geographic characteristics. The annual balance record is as yet insufficient to fully assess this hypothesis.
## Net Annual Balance Climate Correlation
A comparison of the long-term and short-term means of monthly precipitation and temperature from the eight NOAA State of Washington Division 5 weather stations (Cascade Mountains) illustrates three important climate changes in the North Cascades for the 1984-94 period: (1) mean ablation-season temperature has been 1.1\({}^{\circ}\)C above the long-term mean (1950-80); (2) winter precipitation has been 11% below the long-term mean; (3) mean April-June temperature has been 1.3\({}^{\circ}\)C above the long-term mean. All three of these changes lead to more negative balances and have been the cause of the rapid glacial retreat that has occurred in the North Cascades during the last 5-7 years (Pelto, 1993).
The four primary climatic variables affecting North Cascade glaciers are ablation-season temperature, accumulation-season precipitation, summer cloud cover and May and October freezing levels (Tangborn, 1980; Pelto, 1988). Since summer cloud cover is not monitored in the North Cascade region, this parameter cannot be examined. Porter (1977) and Tangborn (1980) demonstrated that summer cloud cover is highly correlated with summer temperature, is inherently included in the temperature record and is not an independent variable. Freezing-level elevations are incorporated by including only May and October precipitation occurring when Stevens Pass temperature is below 7\\({}^{\\circ}\\)C as accumulation-season precipitation.
Comparison between the net annual balance of each glacier and accumulation-season and ablation-season conditions at NOAA Washington State Division 5 weather stations is presented in Table 5. Four different measures of accumulation-season precipitation (ppt.) are used: (1) October-March ppt.; (2) October-April ppt.; (3) November-March ppt.; (4) all precipitation from October to May that falls when the temperature at Stevens Pass is below 7\({}^{\circ}\)C. The precipitation data used are the monthly means for Division 5 weather stations. Weather records from 11 individual weather stations were also correlated with annual balance, but each yielded lower correlation coefficients than the Cascade Mountain Division record, probably due to significant local variations in precipitation for many storm events.
The highest correlation coefficients were for measurement method 4 (all precipitation from October to May that falls when the temperature at Stevens Pass is below 7\\({}^{\\circ}\\)C) ranging from 0.36 to 0.59. Only on Yawning Glacier was another method as accurate: method 1 (October-March ppt.) with an identical correlation coefficient. This demonstrates the robustness of method 4.
During the ablation season, four climate variables were used: (1) May-August mean temperature, (2) May-September mean temperature, (3) June-August mean temperature, (4) June-September mean temperature. Temperatures used were monthly means from Division 5 weather stations. Method 1 (May-August mean temperature) proved to be the most accurate with correlation coefficients ranging from 0.63 to 0.84. This was true on all but Lynch Glacier and Rainbow Glacier. Lynch Glacier and Rainbow Glacier have the least negative mean annual balance and highest mean accumulation-zone altitude of the eight glaciers. These two glaciers were more closely related to June-September temperature. Except for Lynch and Rainbow Glaciers, method 1 yielded correlation coefficients between 0.75 and 0.84.
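A sketch of how these two best predictors can be screened against the balance record, assuming per-year arrays of monthly Division 5 means and Stevens Pass temperatures (all hypothetical inputs) and ignoring the October-May year boundary for brevity:

```python
import numpy as np

def climate_correlations(balance, div5_t, div5_p, pass_t, threshold=7.0):
    # balance: (Y,); div5_t, div5_p, pass_t: (Y, 12), January = column 0.
    may_aug_t = div5_t[:, 4:8].mean(axis=1)            # method 1 predictor
    oct_may = [9, 10, 11, 0, 1, 2, 3, 4]
    cold = pass_t[:, oct_may] < threshold              # freezing-level proxy
    acc_p = (div5_p[:, oct_may] * cold).sum(axis=1)    # method 4 predictor
    r_temp = np.corrcoef(balance, may_aug_t)[0, 1]
    r_prec = np.corrcoef(balance, acc_p)[0, 1]
    return r_temp, r_prec
```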
The correlation with annual net balance is higher for ablation-season temperature than for accumulation-season precipitation. This does not demonstrate that the glaciers are more sensitive to ablation-season conditions. It is more likely a result of temperature being a better measure of ablation than precipitation is of actual accumulation.
## Conclusion
The annual balance of North Cascade glaciers between 1984 and 1994 has been moderately negative at \\(-0.38\\,\\mathrm{m}\\,\\mathrm{a}^{-1}\\). Crevassing on all nine glaciers where annual
\\begin{table}
\begin{tabular}{l c c c c} \hline _Glacier_ & _Orientation_ & _Accumulation sources_ & _Distance from Divide_ & _Elevation range_ \\ & & & & m \\ \hline Columbia & SSE & DS, WD, AV & 15 km west & 1750–1450 \\ Daniels & E & DS, WD & 1 km east & 2230–1970 \\ Foss & NE & DS & At Divide & 2100–1840 \\ Ice Worm & SE & DS, AV & 1 km east & 2100–1900 \\ Lower Curtis & S & DS, WD & 53 km west & 1850–1460 \\ Lynch & N & DS, WD & At Divide & 2200–1950 \\ Rainbow & ENE & DS, AV & 70 km west & 2040–1310 \\ Yawning & N & DS & At Divide & 2100–1880 \\ \hline \end{tabular}
\\end{table}
Table 4: The geographic characteristics of eight glaciers where annual balance has been monitored annually since 1984 and will continue to be monitored. Accumulation sources: wind drifting (WD), avalanche accumulation (AV), direct snowfall (DS).

balance measurements have been made has also diminished significantly. A loss of 4.2 m of ice thickness on glaciers with an estimated mean thickness of 30-50 m (Post and others, 1971) is significant.
The result of negative annual balances has been glacier retreat. In 1985, 38 of the 47 glaciers we observed were retreating. In 1994, 46 of the 47 glaciers observed were retreating (Pelto, 1993). Lewis Glacier, with an area of 0.09 km\({}^{2}\), was initially selected for annual mass-balance measurements. After measurements were completed from 1984 to 1988, Lewis Glacier melted away in 1989 and 1990, leaving relict ice with an area of 0.03 km\({}^{2}\) by the end of 1990. In 1992, David Glacier near Glacier Peak ceased to exist. In 1993, Milk Lake Glacier disappeared. Each of these three glaciers had low winter accumulation. Glaciers with geographic factors increasing mean net annual accumulation have had less negative annual balances and slower retreat rates (Pelto, 1993). During the 1984-94 period, glacier-retreat rates have increased substantially due to negative annual balances (Pelto, 1993). The highly negative annual balances of recent years, combined with the small size of the glaciers, will ensure continued retreat for the next several years.
## Acknowledgements
This project has received essential support from the Foundation for Glacier and Environmental Research. The manuscript was greatly improved by K. Reed, Washington Division of Geology and Earth Resources, D. MacAyeal and two anonymous reviewers. Field assistants were K. Baresay, Z. Barcay, J. Brownlee, R. Campbell, M. Carver, J. Drumhelter, A. Fitzpatrick, M. Gowan, J. Harper, C. Hedlund, M. Hylland, D. Kaplinski, D. Klinger, D. Knoll, B. Long, J. Maggiore, C. Mitchell, B. Prater, D. Sayeghi, L. Scheper and J. Turner.
arxiv-format/2405_00121v1.md | **Indoor Synthetic Aperture Radar Measurements of Point-Like Targets Using a Wheeled Mobile Robot**
Yuma E. Ritterbusch\\({}^{\\text{a}}\\), Johannes Fink\\({}^{\\text{a}}\\), and Christian Waldschmidt\\({}^{\\text{b}}\\)
\\({}^{\\text{a}}\\)Robert Bosch GmbH, Corporate Sector Research and Advance Engineering, 71272 Renningen, Germany
\\({}^{\\text{b}}\\)Institute of Microwave Engineering, Ulm University, 89081 Ulm, Germany
## 1 Introduction
Low-cost automotive radar sensors have been successfully applied in mobile robotics research to aid in indoor and outdoor navigation using radar-inertial odometry (RIO) [1]. These sensors have also been used for simultaneous localization and mapping (SLAM) based on radar point clouds created using digital beamforming (DBF) [2]. The achievable angular resolution of DBF is directly related to the physical extent of the multiple-input multiple-output (MIMO) antenna aperture of the sensor. The physical antenna aperture size can only be increased by designing a larger sensor with many channels, thus increasing manufacturing costs and power consumption. SAR imaging techniques offer an attractive digital signal processing alternative to achieve high angular resolution using a small antenna aperture with only a few channels. SAR processing has previously been applied in the automotive context to image the driving environment [3]. Another key benefit of the coherent processing of multiple radar measurements is an improvement of signal-to-noise ratio (SNR) in the image, which can help to identify objects with smaller radar cross-section (RCS) and improve the accuracy of high-resolution probabilistic maps in an automotive setting [4].

Our focus is to bring the benefits of SAR imaging to the field of mobile robotics in indoor scenarios. Indoor mobile robots need to perform localization without relying on highly accurate external reference systems such as a global navigation satellite system (GNSS), which limits the accuracy of the estimated robot trajectory. Furthermore, radar-inertial sensor fusion approaches [1] rely on odometry measurements only and hence suffer from the trajectory estimate drifting away from the ground truth over time. This imperfect trajectory estimate presents a key limiting factor to the maximum achievable synthetic aperture length. This paper aims to quantify this limitation in our experimental setup shown in **figure 1**. To this end we analyze the achieved cross-range resolution of SAR images computed based on a radar-inertial odometry (RIO) trajectory estimate [1]. This contrasts with the contributions [3, 4, 5], which do not explicitly mention the achieved SAR resolution for non-simulated datasets.

The remainder of this paper is organized as follows. The system model and experimental setup are described in sections 2 and 3, respectively. Results are presented and discussed in section 4, while section 5 provides a summary of the findings.
## 2 System model
### Signal model
The radar sensors considered in this work use a chirp-sequence (CS) FMCW time-division multiplexing (TDM) MIMO modulation scheme, in which radar frames consisting of a sequence of \\(N_{\\text{c}}\\) linear frequency chirps with duration \\(T_{\\text{c}}\\) and bandwidth \\(B\\) are transmitted at a period \\(T_{\\text{f}}\\). During transmission of a frame, the time division multiplexing scheme cycles through \\(N_{\\text{T}}\\) transmit channels at an interval \\(T_{\\text{r}}\\) creating the interleaved transmit pattern shown in **figure 2**. The \\(N_{\\text{R}}\\) receive channels are always
Figure 1: Remote-controlled mobile robot with radar sensors, IMU and mini-PC for data recording.
active. Overall this results in \\(N_{\\mathrm{T}}N_{\\mathrm{R}}\\) virtual channels with \\(N_{\\mathrm{c}}/N_{\\mathrm{T}}\\) slow-time and \\(T_{\\mathrm{c}}f_{\\mathrm{s}}\\) fast-time samples available for baseband signal processing [6]. The equivalent complex baseband signal of a single point-target resulting from one chirp-sequence after downconversion and low-pass filtering is
\\[\\forall i\\in\\left[0,1,\\ldots,N_{\\mathrm{T}}-1\\right],j\\in\\left[0,1, \\ldots,N_{\\mathrm{R}}-1\\right],\\] \\[n_{\\mathrm{s}}\\in\\left[0,1,\\ldots,N_{\\mathrm{c}}/N_{\\mathrm{T} }-1\\right],n_{\\mathrm{f}}\\in\\left[0,1,\\ldots,T_{\\mathrm{c}}f_{\\mathrm{s}}-1 \\right]:\\] \\[s_{ij}\\left(n_{\\mathrm{s}},n_{\\mathrm{f}}\\right)=A_{ij}\\exp \\left\\{j\\frac{2\\pi}{c}\\left(f_{0}+\\gamma\\frac{n_{\\mathrm{f}}}{f_{\\mathrm{s}}} \\right)r_{ij}\\left(n_{\\mathrm{s}},n_{\\mathrm{f}}\\right)\\right\\} \\tag{1}\\]
with amplitude \\(A_{ij}\\), chirp-rate \\(\\gamma=B/T_{\\mathrm{c}}\\), and the propagation speed of electromagnetic waves \\(c\\) in the considered medium. The two-way target range for the pair of transmitter \\(i\\) and receiver \\(j\\) is
\\[r_{ij}\\left(n_{\\mathrm{s}},n_{\\mathrm{f}}\\right)= \\|_{\\mathrm{n}}\\mathbf{p}_{i}\\left(n_{\\mathrm{s}},n_{\\mathrm{f}} \\right)-{}_{\\mathrm{n}}\\mathbf{p}_{\\mathrm{t}}\\|_{2}\\] \\[\\quad+\\|_{\\mathrm{n}}\\mathbf{p}_{j}\\left(n_{\\mathrm{s}},n_{ \\mathrm{f}}\\right)-{}_{\\mathrm{n}}\\mathbf{p}_{\\mathrm{t}}\\|_{2}, \\tag{2}\\]
where the vectors \\({}_{\\mathrm{n}}\\mathbf{p}_{i},{}_{\\mathrm{n}}\\mathbf{p}_{j},{}_{\\mathrm{n}} \\mathbf{p}_{\\mathrm{t}}\\in\\mathbb{R}^{3}\\) denote the position vectors of transmit channel \\(i\\), receive channel \\(j\\) and the target respectively, expressed in common reference frame n. \\(\\|\\cdot\\|_{2}\\) denotes the Euclidean distance.
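As a sketch, the baseband signal of Eqs. (1)-(2) can be simulated for a stationary point target and a single chirp using the waveform parameters of Table 1, neglecting the fast-time dependence of the range within one chirp (stop-and-go, consistent with the approximation used in Eq. (3) below); the channel and target positions are arbitrary examples.

```python
import numpy as np

c0 = 3e8                                       # propagation speed
f0, B, Tc, fs = 76e9, 4e9, 180e-6, 977e3       # Table 1 parameters
gamma = B / Tc                                 # chirp rate
n_f = np.arange(int(Tc * fs))                  # fast-time sample index

def baseband_chirp(p_tx, p_rx, p_t, A=1.0):
    # Eq. (2): two-way range, held constant during one chirp
    r = np.linalg.norm(p_tx - p_t) + np.linalg.norm(p_rx - p_t)
    # Eq. (1): complex baseband samples of one chirp
    return A * np.exp(1j * 2 * np.pi / c0 * (f0 + gamma * n_f / fs) * r)

s = baseband_chirp(np.zeros(3), np.array([0.0, 2e-3, 0.0]),
                   np.array([0.0, 5.0, 0.0]))
```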
### SAR image formation
The SAR images are computed using a time-domain back-projection (TDBP) algorithm [5], which first performs a range-compression of the baseband signal by applying a window function \\(w_{\\mathrm{r}}\\left(n_{\\mathrm{f}}\\right)\\) for sidelobe control and DFT along the fast-time index \\(n_{\\mathrm{f}}\\) to obtain the range profiles
\\[S_{ij}\\left(n_{\\mathrm{s}},r\\right)=\\mathrm{DFT}\\left\\{w_{ \\mathrm{r}}\\left(n_{\\mathrm{f}}\\right)s_{ij}\\left(n_{\\mathrm{s}},n_{\\mathrm{ f}}\\right)\\right\\}\\] \\[\\approx A_{ij}\\exp\\left\\{j\\frac{2\\pi f_{0}}{c}r_{ij}\\left(n_{ \\mathrm{s}}\\right)\\right\\}\\] \\[\\quad\\quad\\quad\\quad\\cdot W_{\\mathrm{r}}\\left(\\frac{2\\pi\\gamma}{ cf_{\\mathrm{s}}}\\left(r-r_{ij}\\left(n_{\\mathrm{s}}\\right)\\right)\\right), \\tag{3}\\]
where the range-change during the short chirp duration \\(T_{\\mathrm{c}}\\) is assumed to be sufficiently small such that the fast-time index \\(n_{\\mathrm{f}}\\) in the exponential can be omitted. \\(W_{\\mathrm{r}}\\left(\\Omega\\right)\\) is the DTFT of the window function \\(w_{\\mathrm{r}}\\left(n_{\\mathrm{f}}\\right)\\).
The range-resolution after range-compression is [7]
\\[\\delta_{\\mathrm{r}}=\\eta_{\\mathrm{r}}\\frac{c}{2B}, \\tag{4}\\]
with the window-dependent main lobe-broadening factor \\(\\eta_{\\mathrm{r}}\\). The second step of SAR image formation is the azimuth-compression. The complex amplitudes of each range profile \\(S_{ij}\\left(n_{\\mathrm{s}},r\\right)\\) are interpolated onto an image grid \\(\\mathcal{G}\\subset\\mathbb{R}^{3}\\) and a matched filter
\\[H\\left(r\\right)=\\exp\\left\\{-j\\frac{2\\pi f_{0}}{c}r\\right\\} \\tag{5}\\]
is applied for every grid point. Finally all available range-profile samples are coherently integrated to yield the image
\\[I\\left(\\mathcal{G}\\right)=\\sum_{i=0}^{N_{\\mathrm{T}}-1}\\sum_{j=0}^{N_{ \\mathrm{R}}-1}\\sum_{n_{\\mathrm{s}}=0}^{\\frac{N_{\\mathrm{c}}}{N_{\\mathrm{T}}}-1} S_{ij}\\left(n_{\\mathrm{s}},\\tilde{r}_{ij}\\left(n_{\\mathrm{s}}\\right) \\right)H\\left(\\tilde{r}_{ij}\\left(n_{\\mathrm{s}}\\right)\\right), \\tag{6}\\]
where
\\[\\forall_{\\mathrm{n}}\\mathbf{g}\\in\\mathcal{G}:\\quad\\tilde{r}_{ij} \\left(n_{\\mathrm{s}}\\right)=\\|_{\\mathrm{n}}\\mathbf{p}_{i}\\left(n_{\\mathrm{s}} \\right)-{}_{\\mathrm{n}}\\mathbf{g}\\|_{2}\\] \\[\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad+\\|_{ \\mathrm{n}}\\mathbf{p}_{j}\\left(n_{\\mathrm{s}}\\right)-{}_{\\mathrm{n}}\\mathbf{g} \\|_{2} \\tag{7}\\]
is the two-way range from each transmitter/receiver pair \\(ij\\) to every point in the image grid. **Figure 3** shows the SAR imaging geometry for a synthetic aperture of length \\(L_{\\mathrm{sa}}\\) and a range of closest approach to the target \\(r_{0}\\). The cross-range resolution \\(\\delta_{\\mathrm{c}}\\) of the resulting SAR image after azimuth-compression is [8]
\\[\\delta_{\\mathrm{c}}=\\frac{\\lambda_{\\mathrm{c}}}{2L_{\\mathrm{sa}}}r_{0}, \\tag{8}\\]
where \\(\\lambda_{\\mathrm{c}}\\) is the wavelength at the chirp center frequency. The expression (8) gives the distance between the main lobe peak and first null, whose position is difficult to determine in noisy measurements. Instead we use the half-power main lobe width
\\[\\delta_{\\mathrm{c},3\\,\\mathrm{dB}}=0.88448\\,\\delta_{\\mathrm{c}} \\tag{9}\\]
to determine the achieved resolution, where the constant factor is due to the implicit rectangular window used during azimuth compression [9].
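Equations (3)-(7) translate almost directly into a back-projection sketch. The minimal, self-contained version below assumes a monostatic single-channel geometry, Hann-windowed zero-padded FFT range compression, and linear interpolation of the range profiles; a real implementation would sum over all virtual channels of the TDM MIMO array.

```python
import numpy as np

c0, f0, B, Tc, fs = 3e8, 76e9, 4e9, 180e-6, 977e3   # Table 1
gamma = B / Tc
n_f = np.arange(int(Tc * fs))

def chirp(p_ant, p_t):
    # Monostatic point-target chirp per Eqs. (1)-(2), stop-and-go.
    r = 2 * np.linalg.norm(p_ant - p_t)
    return np.exp(1j * 2 * np.pi / c0 * (f0 + gamma * n_f / fs) * r)

def range_compress(s, n_fft=4096):
    # Eq. (3): Hann-windowed DFT over fast time; r_axis is two-way range.
    S = np.fft.fft(s * np.hanning(s.shape[-1]), n=n_fft, axis=-1)
    return S, c0 * fs * np.arange(n_fft) / (gamma * n_fft)

def tdbp(S, r_axis, p_ant, grid):
    # Eqs. (5)-(7): interpolate, apply matched filter, sum coherently.
    img = np.zeros(len(grid), dtype=complex)
    for n in range(S.shape[0]):                          # aperture samples
        r2 = 2 * np.linalg.norm(p_ant[n] - grid, axis=1)     # Eq. (7)
        amp = (np.interp(r2, r_axis, S[n].real)
               + 1j * np.interp(r2, r_axis, S[n].imag))
        img += amp * np.exp(-1j * 2 * np.pi * f0 / c0 * r2)  # Eq. (5)
    return img

# 1 m synthetic aperture along x, point target at r0 = 3 m
x = np.linspace(-0.5, 0.5, 1001)
p = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
S, r_axis = range_compress(np.stack([chirp(pn, np.array([0.0, 3.0, 0.0]))
                                     for pn in p]))
grid = np.stack([np.linspace(-0.02, 0.02, 801),
                 np.full(801, 3.0), np.zeros(801)], axis=1)
cut_db = 20 * np.log10(np.abs(tdbp(S, r_axis, p, grid)) + 1e-12)
```

The half-power width of `cut_db` can then be compared against Eqs. (8)-(9); for this example geometry, \(\delta_{\mathrm{c},3\,\mathrm{dB}}\approx 0.88\cdot\lambda_{\mathrm{c}}/(2\cdot 1\,\mathrm{m})\cdot 3\,\mathrm{m}\approx 5\,\mathrm{mm}\).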
Figure 3: Side-looking imaging geometry showing the synthetic aperture highlighted in red. The image grid surrounding the target of interest is defined by the basis vectors \\({}_{\\mathrm{n}}\\mathbf{u}_{x}\\) and \\({}_{\\mathrm{n}}\\mathbf{u}_{y}\\).
Figure 2: Chirp-sequence FMCW TDM timing diagram for \\(N_{\\mathrm{T}}=2\\) transmitters (solid and dashed lines).
The integration gain \\(G_{\\rm int}\\) describes the SNR increase in the SAR image achieved by coherent processing. When integrating \\(N_{\\rm f}\\) samples, the integration gain can be expressed as [7]
\\[G_{\\rm int}=N_{\\rm f}^{\\alpha}. \\tag{10}\\]
The exponent \\(\\alpha\\) is a measure of the integration efficiency. In the case of coherent integration \\(\\alpha=1\\). In case of non-coherent integration the integration efficiency is reduced such that \\(0.5\\leq\\alpha<1\\)[7].
## 3 Experimental setup
The experimental setup consists of four chirp-sequence FMCW MIMO radar sensors operating between \\(76\\,\\mathrm{GHz}\\) and \\(81\\,\\mathrm{GHz}\\) which is the frequency range defined by ETSI for automotive radar applications [10, 11]. **Table 1** summarizes the important waveform parameters. The radar sensors are mounted onto a remote controlled mobile robot facing outward and perpendicular to the direction of movement. This enables SAR imaging of the surrounding scene. In addition to the radar sensors, an inertial measurement unit (IMU) is also attached to the center of rotation of the robot. The start of the IMU sample recording is synchronized to the first chirp sequence of a data collection run by a hardware trigger signal. The chirp-sequences of all radar sensors are synchronized via Ethernet using the precision time protocol (PTP). To avoid interference between the simultaneously operating sensors, their start frequencies were offset relative to each other by \\(100\\,\\mathrm{MHz}\\). Figure 1 shows the mobile robot with the mounted radar sensors used to record the dataset.
Two point-like reference targets were placed in the scene to determine the achieved SAR image resolution based on their main lobe width. The first reference target is a metal rod with a diameter of \(10\,\mathrm{mm}\) and a length of \(2\,\mathrm{m}\), held vertically by a wooden block which was covered in RF absorbers to suppress strong unwanted reflections as shown in **figure 4**. The second reference target is a trihedral corner reflector (TCR) which was directly placed on top of pyramidal RF absorbers. During data collection the robot was remotely controlled to maintain a straight-line path at constant speed \(v_{\rm ego}\) while passing the reference targets. This results in the side-looking imaging geometry shown in figure 3. After data collection the robot trajectory is estimated using radar-inertial odometry (RIO) as described in [1]. The transmit and receive channel positions \({}_{\rm n}\mathbf{p}_{i}\) and \({}_{\rm n}\mathbf{p}_{j}\) are computed from the estimated robot trajectory using the known mounting position, orientation, and antenna array geometry of each radar for every sample of the coherent processing interval. For simplicity, the SAR images used for the performance evaluation were computed from a single radar sensor only, while data from all four radars was used for trajectory estimation.
## 4 Results
### Range resolution
**Figure 5** shows the range profile of the metal rod, trihedral corner reflector and a point-target simulation using the radar waveform parameters from table 1. The vertical dashed lines mark the half-power main lobe width as predicted by (4) for the Hann window used during range compression (\\(\\eta_{\\rm r}=1.4381\\)[9]), indicating that the computed SAR image has achieved the range resolution to be expected from the used modulation bandwidth.
### Cross-range resolution
In order to determine the achievable cross-range resolution, the same dataset was used to compute SAR images
\\begin{table}
\\begin{tabular}{l c c} \\hline \\hline
**Parameter** & **Symbol** & **Value** \\ \hline Start frequency & \(f_{0}\) & \(76\,\mathrm{GHz}\) \\ Modulation bandwidth & \(B\) & \(4\,\mathrm{GHz}\) \\ Chirp duration & \(T_{\rm c}\) & \(180\,\mathrm{\mu s}\) \\ Chirp repetition interval & \(T_{\rm r}\) & \(200\,\mathrm{\mu s}\) \\ Chirps per frame & \(N_{\rm c}\) & \(256\) \\ Frame repetition interval & \(T_{\rm f}\) & \(52\,\mathrm{ms}\) \\ Baseband sample rate & \(f_{\rm s}\) & \(977\,\mathrm{kHz}\) \\ Number of transmit channels & \(N_{\rm T}\) & \(4\) \\ Number of receive channels & \(N_{\rm R}\) & \(4\) \\ \hline \hline \end{tabular}
\\end{table}
Table 1: Radar waveform parameters.
Figure 4: Vertical metal rod (inside ellipse), RF absorber and side-looking SAR imaging geometry.
Figure 5: Measured range peak profiles of the metal rod, trihedral corner reflector (TCR) and simulated point-target for a Hann window (\\(\\eta_{\\rm r}=1.4381\\)[9]).
with successively greater synthetic aperture lengths \(L_{\rm sa}\). **Figures 6 and 8** show the resulting achieved cross-range resolution for the reference targets, metal rod and trihedral corner reflector. Each diagram depicts data from two different measurement passes performed at robot speeds \(v_{\rm ego}\) of \(0.4\,\mathrm{m/s}\) and \(1.0\,\mathrm{m/s}\). The dashed line indicates the cross-range half-power main lobe width \(\delta_{\rm c,3\,dB}\) as predicted by (9) for both scenarios.
#### 4.2.1 Metal rod
In the case of the metal rod, good agreement between the expected and achieved main lobe width is apparent for synthetic aperture lengths \(L_{\rm sa}\leq 0.34\,\mathrm{m}\). Because the range of closest approach \(r_{0}\) for the depicted passes is \(2\,\mathrm{m}\), synthetic apertures longer than \(0.34\,\mathrm{m}\) lead to an expected cross-range resolution that is smaller than the diameter of the metal rod. As the apertures become larger, a splitting of the single target peak into two peaks of almost equal power can be observed. An example of this phenomenon is shown with the black dotted line in **figure 7**. This is likely caused by the scattering center on the surface of the metal rod shifting location as the radar moves past. It seems unlikely that this is caused by a creeping wave around the surface of the metal rod, as the received power of a creeping wave would be much smaller than that of the primary reflection. Furthermore, the additional propagation delay due to the small rod diameter is not resolvable at the used modulation bandwidth.
When the cross-range resolution approaches the physical target size, the point-target assumption is violated and the half-power main lobe criterion no longer seems suitable for determining the achieved cross-range resolution.
The red triangles in figure 6 show the achieved resolution for a robot speed of \(0.4\,\mathrm{m/s}\). One thing to note is an outlier around \(L_{\rm sa}=0.14\,\mathrm{m}\), while the samples immediately preceding and following agree well with the theory. Figure 7 shows the cross-range profiles for the three circled neighboring samples in figure 6. In the case of the shortest aperture \(L_{\rm sa}=0.102\,\mathrm{m}\), the cross-range spectrum exhibits a well defined peak. The outlier sample on the other hand has a much broader peak and a shoulder that hints at a side lobe, causing the inflated estimate of the achieved cross-range resolution. The third sample with \(L_{\rm sa}=0.178\,\mathrm{m}\) again exhibits a well defined peak, while a side lobe only about \(3.5\,\mathrm{dB}\) below the main peak has become visible. This new side lobe is located around the same cross-range position as the shoulder in the previous profile. Because there are no targets located close to the metal rod in the experimental setup, we can conclude that this is in fact a side lobe and not an actual target reflection. Possible sources for these side lobes with significantly higher level than the \(-13.3\,\mathrm{dB}\) to be expected from a rectangular window [9] will be discussed in section 4.3.
The blue triangles in figure 6 show the achieved resolution for a robot speed of \(1\,\mathrm{m/s}\). Similar to the lower speed case, the results follow the expected resolution up to a synthetic aperture length of \(L_{\rm sa}=0.34\,\mathrm{m}\). For longer synthetic apertures the deviation from the theory becomes quite large, again indicating problems related to a violation of the single point-target assumption.
#### 4.2.2 Trihedral corner reflector
The results for the trihedral corner reflector shown in figure 8 are very similar to those of the metal rod previously discussed. For a robot speed of \(0.4\,\mathrm{m/s}\), depicted by the magenta diamonds, a good agreement with the expected cross-range half-power main lobe width can be observed up to a synthetic aperture length of \(L_{\rm sa}=0.26\,\mathrm{m}\). For larger aperture lengths a strong deviation from the expected main lobe width is visible. **Figure 9** shows the cross-range profile for the highlighted aperture lengths in figure 8. In the case of \(L_{\rm sa}=0.14\,\mathrm{m}\) there is a well-defined main peak, albeit with an already high side lobe level of \(-4.5\,\mathrm{dB}\). As the aperture length increases, the main lobe width decreases while the side lobe level increases. In the case of \(L_{\rm sa}=0.37\,\mathrm{m}\), the main lobe appears to have split apart and the half-power main lobe width is much wider than expected.
For a robot speed of \(1\,\mathrm{m/s}\), a slightly longer aperture of \(L_{\rm sa}=0.34\,\mathrm{m}\) can be synthesized before the achieved cross-range resolution no longer tracks the expected behavior. Looking at the side lobe structure in figure 9, it appears that the side lobes are the major limiting factor in our SAR imaging setup.
Figure 6: Achieved cross-range half-power main lobe width for metal rod.
Figure 7: Measured metal rod cross-range peak profiles for \\(v_{\\rm ego}=0.4\\,\\mathrm{m/s}\\).
### Sensor vibration
The high side lobe level can be plausibly explained by low-frequency vibration of the radar sensor during data recording, which is known to cause side lobes in the Doppler-spectrum. The Doppler-spectrum resulting from a sinusoidal motion of the radar sensor during the coherent processing interval can be written in terms of an infinite series of Bessel-functions [12]. **Figure 10** shows the measured trihedral corner reflector cross-range profile for \\(L_{\\rm sa}=0.14\\,\\mathrm{m}\\) and the results of a point-target simulation with a sinusoidal vibration of the radar sensor along the \\(y\\)-direction (sensor boresight) during data recording. The simulated robot speed was \\(v_{\\rm ego}=0.368\\,\\mathrm{m/s}\\) to achieve the same aperture length \\(L_{\\rm sa}\\) as in the measurement. In order to parametrize the simulation, a single dominant vibration component was assumed to be present. The vibration frequency \\(f_{\\rm vib}\\) was chosen based on the expected Doppler-frequency of a target at side lobe position \\(x_{\\rm vib}\\)
\\[f_{\\rm vib}=\\frac{2v_{\\rm ego}}{\\lambda_{\\rm c}}\\frac{x_{\\rm vib}}{\\sqrt{x_{ \\rm vib}^{2}+r_{0}^{2}}}, \\tag{11}\\]
which for \\(x_{\\rm vib}=0.04\\,\\mathrm{m}\\), \\(r_{0}=2\\,\\mathrm{m}\\) yields \\(f_{\\rm vib}=3.83\\,\\mathrm{Hz}\\). The vibration amplitude \\(A_{\\rm vib}\\) was chosen based on the side lobe-level \\(L_{\\rm vib}\\) such that it fulfills
\\[L_{\\rm vib}=20\\log_{10}\\frac{J_{1}\\left(a\\right)}{J_{0}\\left(a\\right)}, \\tag{12}\\]
where
\\[a=2\\pi\\frac{2A_{\\rm vib}}{\\lambda_{\\rm c}} \\tag{13}\\]
is derived in [12] and \(J_{i}\) is the \(i\)-th order Bessel-function of the first kind. For \(L_{\rm vib}=-4.3\,\mathrm{dB}\) this yields \(A_{\rm vib}=0.319\,\mathrm{mm}\). As is to be expected, the highly simplified simulated vibration does not perfectly match the measurement result; however, it does cause the same characteristic paired and slightly asymmetric side lobes to appear next to the main lobe. Based on this simulation it seems plausible that vibration of the radar sensor in our measurement setup causes the image degradation in terms of side lobe level. The primary focus of this paper is to quantify the performance limitations of our specific mobile robot-based SAR system. Therefore, low-vibration measurement setups, such as a stepper-motor-driven linear rail, were not investigated in this study. Furthermore, it was not investigated whether the vibration-induced side lobe level could be reduced using autofocus algorithms.
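The parametrization of the vibration model in (11)-(13) can be reproduced with a few lines of Python; the root-search bracket for the Bessel-function ratio is our own assumption, valid below the first zero of \(J_{0}\).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1

lam_c = 299_792_458.0 / 78e9   # wavelength at the chirp center frequency

def vibration_frequency(v_ego, x_vib, r0, lam=lam_c):
    """Eq. (11): Doppler frequency of a target at side lobe position x_vib."""
    return (2.0 * v_ego / lam) * x_vib / np.hypot(x_vib, r0)

def vibration_amplitude(L_vib_dB, lam=lam_c):
    """Eqs. (12)-(13): solve the Bessel-function ratio for a, then A_vib."""
    ratio = 10.0 ** (L_vib_dB / 20.0)
    a = brentq(lambda x: j1(x) / j0(x) - ratio, 1e-9, 2.0)
    return a * lam / (4.0 * np.pi)

print(vibration_frequency(0.368, 0.04, 2.0))  # ~3.83 Hz
print(vibration_amplitude(-4.3) * 1e3)        # ~0.319 mm
```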
### Integration gain
**Figure 11** shows the achieved integration gain for all previously presented measurement passes as a function of the number of integrated radar frames \(N_{\rm f}\). Unsurprisingly, the real-world scenarios fall short of the theoretical maximum coherent integration gain predicted by (10), shown as a dashed line. For comparison, the integration gain of a simulated point-target scenario with a sinusoidal vibration as in section 4.3 is shown in black. The simulation results were averaged over 7 individual simulations with different starting phases. No averaging was applied to the measured results. The simulation results further support the hypothesis that even with otherwise perfect knowledge of the trajectory, vibration of the radar sensor negatively affects the integration gain.
Figure 8: Achieved cross-range half-power main lobe width for trihedral corner reflector.
Figure 10: Measured trihedral corner reflector and simulated point-target cross-range peak profiles for a sensor vibration with \\(f_{\\rm vib}=3.83\\,\\mathrm{Hz}\\) and \\(A_{\\rm vib}=0.319\\,\\mathrm{mm}\\).
Figure 9: Measured trihedral corner reflector cross-range peak profiles for \\(v_{\\rm ego}=0.4\\,\\mathrm{m/s}\\).
In the case of the metal rod, depicted by the triangle markers, the integration gain follows the general shape of the simulation with a similar dip at \(N_{\mathrm{f}}=7\). The performance in terms of achieved gain is also worse than in the simulation. This seems plausible, as the real measurement scenario likely includes more complex vibration patterns and, unlike the simulation, also suffers from errors in the trajectory estimates. For the case of the trihedral corner reflector, denoted by the diamonds, the integration gains drop for larger synthetic apertures, further indicating problems related to drifting trajectory estimates. Unlike non-coherent integration, the SAR image formation in (6) always operates on complex valued data. In the worst case this can lead to destructive interference of correlated target returns when trajectory estimates include sufficiently large errors. As the noise is assumed to be uncorrelated, on the other hand, it should not be affected by trajectory drift during integration. The overall result is a decrease in integration gain. At a higher robot speed of \(v_{\mathrm{ego}}=1\,\mathrm{m/s}\), depicted by the dashed lines, the same number of integrated frames corresponds to a longer synthetic aperture. This means that the accumulated trajectory errors are also larger, explaining the worse performance than in the lower speed case.
## 5 Summary
We have shown that SAR imaging using automotive chirp-sequence FMCW MIMO radar sensors mounted on a mobile robot, based on radar-inertial odometry trajectory estimates, is possible. The imaging results agree with expectations from well-known theoretical results for non-degenerate imaging cases. Vibration of the radar sensor during data collection has been identified as a major factor degrading imaging performance in terms of achievable side lobe level and integration gain. Future work will focus on improving image quality to aid in SAR-based mapping of the indoor environment. One obvious improvement to reduce sensor vibration is to increase the mechanical stiffness of the radar mounts and avoid suspending the sensor weight from a long lever arm.
## References
* [1] C. Doer and G. F. Trommer (2020) An EKF Based Approach to Radar Inertial Odometry. In: _2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)_, pp. 152-159.
* [2] F. Hau, F. Baumgartner, and M. Vossiek (2020) The Degradation of Automotive Radar Sensor Signals Caused by Vehicle Vibrations and Other Nonlinear Movements. _Sensors_ 20, 6195.
* [4] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou (2013) A Tutorial on Synthetic Aperture Radar. _IEEE Geoscience and Remote Sensing Magazine_ 1, pp. 6-43.
* [5]A. M.
Abstract: Small, low-cost radar sensors offer a lighting-independent sensing capability for indoor mobile robots that is useful for localization and mapping. Synthetic aperture radar (SAR) offers an attractive way to increase the angular resolution of small radar sensors for use on mobile robots to generate high-resolution maps of the indoor environment. This work quantifies the maximum synthesizable aperture length of our mobile robot measurement setup using radar-inertial odometry localization and offers insights into challenges for robotic millimeter-wave SAR imaging.
K Pokonieczny
1 Faculty of Civil Engineering and Geodesy, Military University of Technology, Warsaw, Poland
M Rybansky
2 University of Defence, Brno, Czech Republic
## 1 Introduction
The significance and range of applications of unmanned vehicles are enormous; they are used in numerous ways in both the civil and the military domain. The development of autonomous unmanned ground vehicles brings two key issues to be solved: determining the precise position of a vehicle in the terrain [1-3] and finding the optimal route for its movement.
Another perspective of interest is the particular factors limiting the speed of the vehicle over terrain elements. For off-road terrain, it is of particular interest which factor causes the vehicle to be unable to traverse a terrain unit, a condition known as "NO-GO". There are several Cross-Country Movement (CCM) models developed for the navigation of military vehicles - see e.g. [4-7], [21]. The outputs of these models can take the form of a map or of digital Global Positioning System (GPS) navigation implemented in an off-road vehicle.
The NATO Reference Mobility Model (NRMM) [4] is the Army model and simulation standard for predicting vehicle mobility. The NRMM is a comprehensive computer model that predicts vehicle speed performance on roads, trails, and cross-country in all weather conditions, including terrain conditions associated with winter. The motion resistance is used in combination with other resisting forces (e.g., vegetation, slope) to determine the maximum possible force-controlled speed. Model outputs are the velocities traveling up- and down-slope, the average velocity, and a numerical code related to the speed-controlling algorithm. The CCM models calculate (accumulate) the proportion of the terrain where speed-made-good is limited by selected factors - see e.g. [8-13] - or by the complete set of factors: slope gradient, soil conditions, vegetation, hydrology, roads, scenario, weather, etc. - see [14-17].
The aim of this paper is to discuss the issues related to the development of passability maps for Unmanned Ground Vehicles (UGV). Because such vehicles are very often used by military forces, the maps were generated using a terrain objects database used and generated by the army. This database contains information about land cover and terrain formation, which, pursuant to military reference standards, directly influence the capability of the vehicle to move in a given terrain. It was the direct intention of the authors that the resulting map be generated in a completely automated and direct way (without any unnecessary processing), using the data contained in the topographic spatial database. The research task was to select the parameters of the generated map so that it is optimally suitable for predicting the influence of the geographic environment on the movement of the UGV. These parameters determine the functional properties of the developed map, which in turn determine its possible scope of application. They are also connected with the accuracy of the map, which manifests itself in the time needed by the computer system to generate it.
## 2 Method
### Study area
Maps of passability were generated for an area in the Olomouc Region, located in the eastern part of the Czech Republic. This area was selected because it contains a variety of terrain features that affect passability (Figure 1). The area occupies approximately 64 km\({}^{2}\) and encompasses large watercourses (the Morava river and the Trusovicky Potok), as well as vast areas of forests (17% of the analyzed area). The area features both large terrain slope gradients - up to 10 degrees - and extensive plains. It also includes the provincial capital (Olomouc). The density of the road network is variable and depends on the degree of land use.
### Used data
In the presented methodology for the development of land passability maps, a topographical database based on a Digital Landscape Model (DLM) was used. Models of this type are the backbone of the national spatial information infrastructures created in various countries. An example of such a national database is BDOT10k (_Baza Danych Obiektow Topograficznych_ - Database of Topographic Objects at the scale of 1:10,000), covering the territory of Poland.
Figure 1: Study area
The database used in this article is DMU 25 (Digitalni model uzemi - Digital Model of Area), which covers the territory of the Czech Republic. Another interesting example is OpenStreetMap, an open-source database that covers the whole world. The model used in these examples assumes the storage of terrain objects as vector data. Depending on the spatial characteristics of the objects, they are collected in the form of closed polygons (so-called area objects), broken lines (line objects), or singular points (point objects).
The maps of passability generated for UGVs are developed mainly for military purposes; therefore, the DMU, a standardized vector spatial database, was used in the experiments. In terms of detail, it is equivalent to a topographic map at the scale of 1:25,000. In its conceptual model, there are 170 thematic categories, grouped into nine thematic layers: boundaries, terrain relief, physiography, transport, development, hydrography, vegetation, aviation content, and industry. This is a standard, general geographic product used by the Czech Republic Armed Forces (as well as other NATO forces), whose data organization is very precisely defined in the Digital Geographic Information Exchange Standard (DIGEST), a product of the Defence Geospatial Information Working Group (DGIWG). It imposes a unified way of organizing data and developing the database on all compilers. The study used this compilation as the primary source of data on land cover elements. The terrain formation data was derived from the DMR4 (_Digitalni model reliefu 4. generace_ - Digital Model of Relief of the 4th generation), on the basis of which a numerical model of land slopes was also generated. The digital terrain model used stores information about the relief in the form of a grid with 5 m by 5 m cells.
### Processing data
The main assumption of the conducted research was to refer passability to a square-shaped primary field. For each field, the Index of Passability (IOP) was calculated for the surface area of the whole square. The IOP is an estimate reflecting the degree of limitation of vehicular speed by land cover elements. In this study, this index is determined on a continuous scale and decreases from 1.0 (easily passable terrain - GO terrain) to 0.0 (impassable terrain - NO GO terrain). In order to do so, grids of squares of different side lengths (25, 50, 100, 200 and 500 m) were generated over the analyzed area. For each of them, the information about elements of land cover was obtained in an automated way, with use of a specially written software application (Table 1):
* the total surface area of each area found in the given primary field;
* the total length of the linear object located within the given primary field;
* the number of objects located within the given primary field.
Apart from that, a digital terrain inclination model was generated for the analyzed area. It was created with use of DMR4. Each primary field was assigned the land denivelation parameter defined as the average slope, calculated from all points of the digital model of land inclinations located in the area of the given primary field.
The developed data model constituted the basis for generating land passability maps. They were created with use of a method based on the Vegetation Roughness Factor (VRF).
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline
**Area (e.g. forests)** & **Line (e.g. roads)** & **Point (e.g. building)** & **Slope** \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Exemplary visualisation of data in the used data model

VRF is a numerical evaluation reflecting the degree of the speed limitation related to the movement of vehicles through the different types of land cover.
Each class of objects found in the data model was assigned a resistance coefficient. Classes of objects that facilitate the passability of troops (e.g. roads, tracks and open areas) take values in the range (0, 1]; classes identified as hindering (limiting) passability (waters, forests, built-up areas, etc.) were assigned an index of VRF (I\({}_{\text{VRF}}\)) in the range [-1, 0); classes of objects that do not affect passability (neutral) were assigned a coefficient of 0 (these are mainly single objects or cartographic elements such as descriptions or contours).
The index of passability (IOP) assigned to the whole primary field was calculated with use of the following algorithm:
* Due to the fact that the information on land cover for each primary field is stored in the database in different units and has different numerical ranges, prior to the start of the analysis it is normalised to the range from 0 to 1, pursuant to formula (1). \[V^{\prime}=\frac{V-V_{min}}{V_{max}-V_{min}}\cdot\left(new\_max-new\_min\right)+new\_min\] (1)
where V is the input value and V' is the normalized value. Consequently, [V\({}_{\text{min}}\), V\({}_{\text{max}}\)] is the interval of the input data and [new\_min, new\_max] is the new data range, [0, 1].
* Indices of passability for all primary fields were calculated with use of the following formula:
\[IOP_{i}=A_{i}^{n1}\cdot I_{VRF}^{n1}+L_{i}^{n2}\cdot I_{VRF}^{n2}+N_{i}^{n3}\cdot I_{VRF}^{n3}+\ldots \tag{2}\]
where:
* IOP\({}_{i}\) is the index of passability of the primary field with index i.
* For area objects, A\({}^{n1}\) is a normalized area (within the i-th primary field) of the n1 feature class.
* For linear objects, L\({}^{n2}\) is a normalized length (within the i-th primary field) of the n2 feature class.
* For singular point objects, N\({}^{n3}\) is a normalized quantity (within the i-th primary field) of the n3 feature class.
* I\({}_{VRF}\) is the Vegetation Roughness Factor of the n1, n2 or n3 feature class.
Formula (2) takes into account all object classes that are contained in the analyzed databases. The obtained indices of passability are then again normalised to a constant range from 0 to 1, pursuant to the formula (1).
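To make the algorithm concrete, the following is a minimal sketch of formulas (1) and (2); the dictionary layout and the example class names and values are illustrative assumptions, not data from the study.

```python
import numpy as np

def normalize(v, new_min=0.0, new_max=1.0):
    """Formula (1): min-max normalization (assumes V_max > V_min)."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

def index_of_passability(fields, vrf):
    """Formula (2): IOP per primary field.

    fields : feature-class name -> vector of raw per-field values
             (areas, lengths or object counts, one entry per field)
    vrf    : feature-class name -> I_VRF coefficient
    """
    iop = np.zeros(len(next(iter(fields.values()))))
    for cls, values in fields.items():
        iop += normalize(values) * vrf[cls]
    return normalize(iop)  # final re-normalization to [0, 1]

# hypothetical values for three primary fields
fields = {"roads": [120.0, 0.0, 40.0], "forest_area": [0.0, 2500.0, 300.0]}
vrf = {"roads": 0.7, "forest_area": -0.8}
print(index_of_passability(fields, vrf))
```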
A detailed description of the method presented above is provided in [6, 7]. In order to apply the methodology described above, it was necessary to assign a terrain resistance coefficient to each class of objects contained in the used database (DMU 25). The coefficient was assigned individually to a specific class of objects - see Table 2.
As a result of the application of the above algorithm, 5 passability maps were created (for primary field sizes of 25, 50, 100, 200 and 500 m). In order to compare the content of these maps, a grid of control points (with 200 by 200 m spacing) was placed in the test area (Table 3A). For each of these points, data were obtained about the passability of the primary field where the given control point was located (Table 3B).
\\begin{table}
\begin{tabular}{l c l c l c} \hline \multicolumn{2}{c}{Facilitating} & \multicolumn{2}{c}{Hindering} & \multicolumn{2}{c}{Neutral} \\ \hline _Object_ & _I\({}_{\text{VRF}}\)_ & _Object_ & _I\({}_{\text{VRF}}\)_ & _Object_ & _I\({}_{\text{VRF}}\)_ \\ Roads & 0.7 & River & -1.0 & Monument & 0 \\ Footpath & 0.4 & Swamp & -0.8 & Tree & 0 \\ Firebreak & 0.3 & Forest (area) & -0.8 & Chimney & 0 \\ Tunnel & 0.2 & Orchard & -0.5 & Forest (point) & 0 \\ Open areas (without area objects) & 0.5 & Slope & -0.4 & All cartographic elements (e.g. labels) & 0 \\ \hline \end{tabular}
\\end{table}
Table 2: Sample VRF (I\({}_{\text{VRF}}\)) values, source: own study

This operation was performed for all generated maps. Thus, each control point was assigned 5 indices of passability. In order to analyse the correlations between the generated maps, Pearson correlation matrices were calculated for the indices of passability obtained for all control points. Calculations were performed separately for each size of the primary field. Basic statistical parameters of the obtained distributions of the indices of passability (arithmetic average and standard deviation) were also calculated.
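The map comparison itself reduces to a Pearson correlation of the IOP vectors sampled at the control points; a sketch is given below, with a random placeholder standing in for the real sampled values.

```python
import numpy as np

# columns: IOPs sampled at the 1,576 control points for the 25, 50, 100,
# 200 and 500 m maps (random placeholder instead of the real samples)
iop_at_controls = np.random.rand(1576, 5)

pcc = np.corrcoef(iop_at_controls, rowvar=False)   # 5 x 5 Pearson matrix
means = iop_at_controls.mean(axis=0)
stds = iop_at_controls.std(axis=0, ddof=1)
```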
## 3 Results
The indices of passability determined for the various primary field sizes formed the basis for generating passability maps at various levels of detail. Fragments of the maps created for the various sizes of square-shaped primary fields are presented in Figure 2.
Figure 2: Passability maps on various levels of raster data detail
[Figure 2 panels: map from OpenStreetMap and passability maps for the 25, 50, 100, 200 and 500 m primary fields]
Table 3: A - distribution of the control points in the 200 by 200 m grid; B - visualisation of data collection for a control point

The Pearson correlation matrices that demonstrate the degree of similarity between the individual maps are presented in Table 4. These matrices, together with the basic statistical parameters, were generated separately for the control points placed in the 200 by 200 m grid (1,576 points in the test area).
The distribution of the generated indices of passability for each tested primary field size is presented in the histograms in Diagrams 1-5 (see Figure 3). They were generated for the control points located in the test area.
The total map generation time consists of the preparation of the data model (grid of squares) and the direct calculation of the index of passability. The results of these operations for the test area (Olomouc province) are presented in Table 5. Calculations were performed on a computer with an Intel Xeon E3-1230 v5 3.4 GHz (4 cores) processor and 8 GB of DDR4 memory. For this hardware configuration, the time of preparation of the data model for 1 square (regardless of its size) is 1.1 s. Calculation of the IOP, in accordance with formula (2), for one square takes approx. 0.03 s.
## 4 Discussion
Visualisations of the generated passability maps (Figure 2) demonstrate unambiguously that maps created on the basis of the smallest primary fields (25 m) make it possible to distinguish a large number of narrow UGV movement corridors. This map shows the course of roads, paths and ducts in forests, which are the key elements for the movement of all vehicles (not only UGVs). For larger primary fields (50 and 100 m), the details of the terrain situation gradually disappear. These maps only show a general outline of terrain barriers.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline & Average & Std. dev. & 25 m & 50 m & 100 m & 200 m & 500 m \\\\ \\hline
**25 m** & 0.61 & 0.17 & 1 & 0.94 & 0.86 & 0.83 & 0.65 \\\\
**50 m** & 0.62 & 0.16 & & 1 & 0.92 & 0.87 & 0.67 \\\\
**100 m** & 0.62 & 0.16 & & & 1 & 0.90 & 0.72 \\\\
**200 m** & 0.62 & 0.15 & & & & 1 & 0.79 \\\\
**500 m** & 0.68 & 0.17 & & & & & 1 \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Demonstration of the degree of similarity between individual maps
Figure 3: The distribution of the various generated indices of passability
In maps generated with use of the largest analysed primary fields (200 m, and particularly 500 m), the generalisation level is so high that only the largest, contiguous impassable areas are distinguishable. These maps are unsuitable for planning the routes of relatively small vehicles such as UGVs due to the high degree of content generalisation. This results from the fact that each individual primary field contains numerous elements that affect passability both in a positive and a negative way, and the passability of all these elements is "averaged" for the whole field.
The correlation matrices (Table 4), calculated based on the IOPs obtained for the control points in the 200 by 200 m grid, demonstrate that the correlation coefficient between maps diminishes with the increase in the difference between the primary field sizes. This is a linear decrease (Figure 4A). However, it should be noted that the correlation coefficients between maps generated for primary fields of similar sizes are very high (e.g. 0.94 between the maps based on 25 m and 50 m primary fields and 0.90 between the maps based on 100 and 200 m primary fields). This proves that maps generated with use of primary fields of similar sizes are highly similar. The PCC between the maps with the highest difference in primary field sizes (25 and 500 m) is low and equals 0.65.
Figure 3 (25-100 m squares), showing the histograms of the IOP distribution, clearly illustrates the increase in the number of samples with an index of passability of approx. 0.7. This peak gradually disappears for larger sizes of primary fields (Figure 3, 200-500 m squares). The main reason for the discussed growth is roads, which are visualised in the passability maps based on the smallest primary fields (25, 50 m). Along with the increase in generalisation that takes place gradually with the growing size of primary fields, the course of roads becomes less and less visible on the map. This is manifested in the diagrams of the distribution of indices of passability for the largest primary fields (200 and 500 m), which do not contain the characteristic increase in the number of primary fields with an IOP = 0.7.
An essential factor that affects the possibility of using the presented methodology for operational purposes is the time of generating passability maps of a specific accuracy. The conducted tests demonstrate that it takes approx. 30 hours for the developed application to create the most accurate map for the smallest primary field (side length 25 m) and an area of 16 km\({}^{2}\) (Table 5). Increasing the size of the primary fields, and thus lowering the number of squares in the tested area, shortens the time of generating the resulting map considerably (to only 8 hours for the 50 m primary field, Figure 4B).
## 5 Conclusions
The presented methodology for the development of passability maps for UGVs is universal and completely automated. The index of passability may be calculated with use of any vector spatial database. The methodology offers a possibility to generate primary fields of various shapes and sizes. Finally, different _Vegetation Roughness Factor_ coefficients may be selected for specific classes of objects included in the used spatial database. This is quite important for creating passability maps for various types of vehicles and terrain conditions. An example might be a large, heavy vehicle, for which the forest may be a major barrier (then, the "forest" class of objects should be assigned a low VRF, e.g. -0.8).
Figure 4: (a) PCC as a function of primary field size (for the 25 m field) and (b) map generation time as a function of primary field size
On the other hand, the forest will not constitute an important barrier for a small vehicle, so a higher VRF may be assigned in this case (e.g. -0.2), as such a vehicle may manoeuvre between trees.
The generated passability maps may also be used for planning the movement of UGVs. The operator of an unmanned vehicle may use them as auxiliary materials while controlling the vehicle. Moreover, planned routes of the unmanned vehicles may be placed on such maps. However, the most interesting way to use the generated maps is to record them in the vehicle memory. This is particularly important for fully automated devices because, with the support of the other sensors installed in the UGV, it may enable the algorithms implemented in the vehicle to determine the optimum route independently, and thus the vehicle may move across the terrain in a completely automated way.
The conducted tests demonstrated that passability maps based on smaller primary fields are more suitable for use by UGVs, due to their higher level of detail. The smallest primary field used in the discussed analyses was a 25 by 25 m square. Hence, it would be reasonable to generate passability maps based on even smaller primary fields (e.g. 10 by 10 m or 5 by 5 m). This will be the direction of further research. Another direction is to increase the number of parameters that are taken into account when determining the index of passability. The currently presented algorithm takes into consideration only terrain cover and relief elements to determine passability. It is planned to expand its scope by considering soil parameters as well as weather conditions [18] and the accuracy of the data (e.g. see [19, 20]).
## 6 Acknowledgments
This work was supported by the Faculty of Civil Engineering and Geodesy, Institute of Geodesy of the Military University of Technology within the frame of statutory research [grant number PBS 933/2017], and by the defence research intentions DZRO K-110 PASVR II, DZRO K-210 NATURENVIR and the Specific research project K-210_2017, all managed by the University of Defence in Brno. This research was performed on the basis of DMU25 and DMR4 data obtained from the Czech Military Directorate.
## References
* [1] Stodola P and Mazal J 2010 Optimal Location and Motion of Autonomous Unmanned Ground Vehicles _WSEAS Transactions on signal processing_ 6 68-77
* [2] Stodola P and Mazal J 2010 Autonomous Motion of Unmanned Ground Vehicles in General Environment _9th WSEAS International Conference Recent Advances in Signal Processing, Robotics and Automation (ISPRA 2010)_ (Cambridge: University of Cambridge) pp 226-231
* [3] Rybansky M and Kratochvil V Location of geographical objects in crisis situations. In: _8th International Symposium of the Digital Earth (ISDE8) 2013_. Kuching, Sarawak, Malaysia: Institute of Physics Publishing (IOP), 2014, p. 1-6. ISSN 1755-1307.
* [4] Ahlvin R B and Haley P W 1992 NRMM II Users Guide, vol 1,2, Ed. 2, Army Corps of Engineers, Waterways Experiment Station.
* [5] Rybansky M 2009 _Cross-Country Movement - The Impact and Evaluation of Geographic Factors_ (Brno: CERM) 113 pp, ISBN 978-80-7204-661-4.
* [6] Pokonieczny K 2017 Automatic military passability map generation system _2017 International Conference on Military Technologies (ICMT)_ (IEEE) pp. 285-292, DOI: 10.1109/MILTECHS.2017.7988771
* [7] Pokonieczny K 2018 Methods of Using Self-organising Maps for Terrain Classification, Using an Example of Developing a Military Passability Map _Dynamics in GIscience_ Lecture Notes in Geoinformation and Cartography, eds I Ivan, J Horak and T Inspektor (Cham: Springer International Publishing) pp. 359-371, DOI: 10.1007/978-3-319-61297-3_26
* [8] Rybansky M 2015 Soil Trafficability Analysis. In: _International Conference on Military Technology Proceeding, ICMT'15_. Brno: University of Defence, p. 295-299. ISBN 978-80-7231-976-3.
* [9] Rybansky M and Vala M 2010 Relief Impact on Transport _Int. Conf. on Military Technologies 2009_ (Brno: University of Defence) p 9
* [10] Rybansky M and Vala M 2009 Analysis of relief impact on transport during crisis situations _Moravian Geographical Reports_ **17** 19-26
* [11] Dohnal F, Hubacek M, Sturcova M, Bures M and Simkova K 2017 Identification of Microrelief Shapes Along the Line Objects Over DEM Data and Assessing Their Impact on the Vehicle Movement _2017 International Conference on Military Technology (ICMT)_ (Brno: University of Defence) pp. 262-267
* [12] Rybansky M 2017 Trafficability Analysis through Vegetation. In: _Conference Proceedings of ICMT'17_. Brno: IEEE, p. 207-210. ISBN 978-1-5386-1988-9.
* [13] Cibulova K 2017 Instruments Used for Terrain Evaluation in the Army of the Czech Republic _Transport Means 2017_ (Kaunas: Kaunas University of Technology) pp 840-844
* [14] Hubacek M, Kovarik V, Talhofer V, Rybansky M, Hofmann A, Brenova M and Ceplova L 2016 Modelling of geographic and meteorological effects on vehicle movement in the open terrain _Central Europe Area in View of Current Geography_ (Brno: Masarykova univerzita) pp. 149-159
* [15] Rybansky M 2007 Effect of the geographic factors on the cross country movement during military operations and the natural disasters _Int. Conf. on Military Technologies_ (Brno: University of Defence) pp 590-596
* the Knowledge-Based Organization: Applied Technical Sciences and Advanced Military Technologies, Conference Proceeding 3, Nicolae Balcescu Land Forces, Academy, pp 262-270.
* [17] Rybansky M and Vala M 2009 Geographic Conditions of Military Transport Using Roads and Terrain _Int. Conf. on Military Technologies 2009_ (Brno: University of Defence) p 9
* June 2010 to September 2011. In: _Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci._. Praha: ISPRS, 2016, p. 281-284. ISSN 1682-1750.
* [19] Hoskova-Mayerova S, Talhofer V, Hofmann A and Kubicek P 2013 Mathematical model used in decision-making process with respect to the reliability of geodatabase, Studies in Computational Intelligence, 448, pp. 143-162.
* 14th Conference on Applied Mathematics, Proceedings, pp. 711-719.
* July 6, 2016, Book2 Vol. 3, 119-126 pp, DOI: 10.5593/SGEM2016/B23/S11.016.

Abstract: The paper presents the methodology for developing passability maps that may be used for planning the movement of unmanned ground vehicles (UGVs). They are based on a standardised spatial database at the scale of 1:25,000, created and used by the military forces, and a high-resolution digital terrain model. Maps were generated for square-shaped primary fields. Five sizes of such fields were used (side lengths of 25, 50, 100, 200 and 500 m). Based on the ground cover and terrain formation, for each primary field the index of passability (IOP), being the terrain resistance coefficient, is calculated. The obtained maps were compared to each other. As the presented methodology is completely automated, an analysis of the passability map generation time depending on the size of the primary field was conducted. The obtained results demonstrated that the maps that are most useful for planning UGV routes are those generated with use of the smallest primary fields (25 m), as they visualise the most terrain obstacles. The generated maps may be recorded in the UGV memory. As far as completely autonomous structures are concerned, this will enable the algorithms implemented in the vehicle, with the support of other sensors, to determine the optimal route of the UGV.
Suhong Yoo
Jisang Lee
Mohammad Gholami Farkoushi
Eunkwan Lee
Hong-Gyoo Sohn
School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
## 1 Introduction
Land use (LU) and land cover (LC) represent vital information for urban planning, environmental monitoring, and geographical change detection (Jozdani et al., 2019; Kolotii et al., 2015; Nemmour and Chibani, 2006). Unlike LC, which only describes a topography's natural characteristics, LU can help researchers analyze the ecological or socio-economic interactions between the Earth's surface and human settlements (Pinto and Duque, 2013). Specifically, residential classifications provide crucial information when combined with population census data in exposure analyses for urban risk (Barredo and Engelen, 2010; Simmons et al., 2017). Several studies have attempted automatic LU classification using remotely sensed images, which can provide helpful spectral information regarding the classes of land objects. However, LU classification is still a challenging research topic when only the spectral information of input images is utilized, unlike lower-level LC semantic classification tasks (Meng et al., 2012; Zhang et al., 2018, 2019).
Several efforts have been made to categorize LUs using traditional classifiers such as the maximum likelihood, ISODATA, random forest, and support vector machine (SVM) by using multi-spectral satellite images and high-resolution aerial images (Meng et al., 2012; Rozenstein and Karnieli, 2011; Srivastava et al., 2012; Ustuner et al., 2015). In addition, some studies have used methods that fuse additional data, such as 3D light detection and ranging (LiDAR) point clouds or GIS-based vector maps to overcome the limitations of optical sensor images and classifiers.
However, recently, based on the concept of end-to-end learning, deep learning-based models with higher generalizability than traditional classifiers have been popularized and used for automatic land use classification (Jozdani et al., 2019; Kussul et al., 2017; Li et al., 2016). Helber et al. (2019) trained widely known neural networks such as ResNet-50 (He et al., 2016) and GoogLeNet (Szegedy et al., 2015) to categorize 10 LU classes using Sentinel-2 satellite imagery and obtained a high accuracy of 98.57 %. Pritt and Chern (2017) also predicted class names by analyzing high-resolution images from DigitalGlobe constellation satellites and their metadata, utilizing a convolutional neural network (CNN) to denote 63 LU classes. As a result of the study, classes such as nuclear power plants, golf courses, tunnel openings, crop fields, and wind farms were found to have a classification accuracy of 97 % or more.
Alongside the main trend of improving classification accuracy by changing the architecture of neural networks, studies are being published that evaluate the performance of neural networks when auxiliary data are added. This is because several previous studies have shown the importance of using multi-modal data together with high-resolution aerial or satellite imagery to maximize classification performance. Zhang et al. (2018) developed an object-based CNN (OCNN) that fuses small- and large-kernel-sized neural networks. Digital surface model (DSM) and 50 cm resolution aerial images were used as training data, and Google maps and GIS-based vector maps were used as auxiliary data. They also applied the moment-bounding box technique to find the target object's optimal size to include only the targets of interest in the training samples. Additionally, the authors developed another approach by applying the joint probability distribution concept, as it allows the application of multiple LC classes (Zhang et al., 2019). The concept of joint learning prescribes repeating LU predictions by inputting LC predictions from a multilayer perceptron back into an OCNN. While applying the new concept, DSM was used to perform image segmentation.
Unique geospatial data such as 3D point cloud data, building-related information, and GIS-based maps have been utilized depending on the goal of different studies. Rozenstein and Karnieli (2011) found that the accuracy improved by more than 5-10 % when using a GIS-based statistics map and survey map data as auxiliary materials. Hu and Wang (2013) experimentally proved that additional information such as parking lot areas and floor areas may aid in classifying LU. Zhang et al. (2017) reported that their algorithm improved classification accuracy, even though only the length of text detected from Google Street View images was incorporated. Overall, using geospatial data can improve model application. Yao et al. (2019) classified LU classes using real aerial orthoimages with spatial resolutions of 5 and 9 cm and introduced a method of using the image coordinates as special auxiliary data, named an image coordinate layer. By applying the super-pixel algorithm concept (Liu et al., 2018), which groups pixels with similar spectral characteristics, a classification accuracy of 89.5 % was achieved for 5 LU classes.
Our study proposes an automatic classification method for five LU classes with clutter based on intensive literature reviews. Aerial orthoimages are used as input data, and for output data, the LU map fuses the LC map of the Ministry of Environment (MoE) and the digital topographic map of the National Geographic Information Institute (NGII). Combining the two types of maps prevents the model from learning incorrect terrain information. Specifically, the classification accuracy improves when the building floor data obtained from the digital topographic map are used as auxiliary data.
In addition, we developed a Conv-Depth Block (CDB) ResU-Net architecture and compared it with the Deeplab v3+, ResUnet, ResASPP-Unet, and context-based ResU-Net neural networks. The user's accuracy, producer's accuracy, F1-score, kappa coefficient, and overall accuracy (OA) metrics (Barsi et al., 2018) were used for evaluating the neural networks, and the proposed neural network showed the best performance of all tested networks.
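All of the reported evaluation metrics can be derived from a single confusion matrix; the helper below is a generic sketch of that computation, not code from the study.

```python
import numpy as np

def classification_metrics(cm):
    """Metrics from a confusion matrix cm, where cm[i, j] counts
    reference-class-i pixels predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)    # producer's accuracy (recall)
    users = np.diag(cm) / cm.sum(axis=0)        # user's accuracy (precision)
    f1 = 2.0 * users * producers / (users + producers)
    oa = np.trace(cm) / total                   # overall accuracy (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1.0 - pe)              # Cohen's kappa coefficient
    return users, producers, f1, oa, kappa
```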
Chapter 2.1 introduces the experimental area, and chapters 2.2 to 2.6 describe the data used in this study and the methods of creating the training data. Next, chapters 3.1 to 3.4 describe the neural networks used in this study, and chapter 3.5 describes the hyperparameters discovered through repeated experiments. Finally, chapter 4 presents the experimental results, and chapter 5 concludes with the findings and the issues to be addressed.
Figure 1: Location of the study area: Daejeon, Daegu, and Gwangju cities in Korea.
## 2 Study area and training sample generation
### Study area
The study areas were Daejeon, Daegu, and Gwangju, which are metropolitan cities in Korea covering areas of 539.8, 883.5, and 431.0 km\\({}^{2}\\), respectively (Fig. 1). All three cities have basin topographies, are surrounded by mountains, and have rivers flowing through their downtown areas. In addition, each city has urban areas with dense low-rise and high-rise buildings, and buildings with various shapes are concentrated in small areas.
However, the topographic feature distribution patterns in the three cities are quite different. Owing to its flat profile, Daejeon has several research complexes and factories. Accordingly, buildings of various sizes are present, and commercial areas are scattered throughout residential areas (Fig. 2). Daegu also has many residences and factories, but its topographical features are closely grouped according to the purpose of their usage, unlike in Daejeon (Fig. 3). Similar to Daejeon, Gwangju has commercial areas scattered throughout residential areas, but agricultural lands are widely distributed inside the metropolitan city boundary (Fig. 4). As previously described, it is challenging to develop a neural network that can comprehensively evaluate all three cities because of their different geographical feature distributions and compositions.
### Aerial orthoimages
In South Korea, NGII performs an aerial survey every two years to build a 25 cm resolution aerial photography database as a national base map project. Consequently, orthoimages for the entire Korean peninsula are produced and released to the public for free. The acquisition dates of the aerial images covering the three city areas are summarized in Table 1. The ground resolution of the publicly released orthoimages is 51 cm. However, for our experiments, the provided orthoimages were resampled to a resolution of 50 cm through bilinear interpolation to match with other input datasets.
### LC map
The MoE produced and released LC maps through the National Environment Information Network System (2021). The LC maps were mainly based on aerial images combined with satellite images and field survey data. The dates of the aerial images were meticulously verified to ensure that consistent ground truth information was provided for the neural network. There are a total of seven major classes in the MoE LC map: urban, forest, grasslands, agricultural, barren, wetlands, and water, and there are 41 subclasses divided by more detailed classification criteria. Since the subclasses include various topographical classes such as residential, commercial, industrial, road, railway, airport, paddy, dry field, coniferous, deciduous, lake, and river, all the classes required in this study can be obtained from this map.
### Ground truth data generation for LU maps
The classes selected for this study were residential buildings, other buildings, roads, forests, and water with clutter. The residential class is essential for risk analyses because it provides vital information for calculating vulnerability or exposure in urban areas (Barredo and Engelen, 2010; Kron, 2005). Road data is necessary for urban risk analyses as roads act as water channels in case of heavy rainfall. Forests and water were selected because they can be regarded as permeable areas (e. g., green spaces) that are also necessary for risk analyses. All non-residential buildings were assigned to the \"other buildings\" class, and other natural features were assigned to clutter. The selected classes are summarized in Table 2, with the descriptions of each class.
Figure 2: Daejeon, where industrial and commercial areas are scattered throughout residential areas (The Ministry of Environment, 2021).
Information on all five classes selected for automation was available on the LC map. Multiple buildings were shown as the same class in the original LC map (Fig. 5a). Therefore, if the LC maps were used as is, roads or grasslands around buildings could be grouped together as buildings, leading to inaccurate neural network training.
We, therefore, incorporated an additional digital topographic map to resolve this problem. Digital topographic maps at a scale of 1:5,000 for urban areas are produced and distributed annually by NGII; in the case of buildings, each building boundary is digitized separately, unlike in LC maps (Fig. 5b).
Figure 4: Gwangju, where residential, industrial, and commercial areas are scattered, and dry fields are close to metropolitan areas (The Ministry of Environment, 2021).
Figure 3: The main downtown area of Daegu, where the residential area is broad and clustered (The Ministry of Environment, 2021).
In addition, the digital topographic maps include the usage of the building as attribute data, which helps ensure proper residential class extraction. As a result, merged LU maps designated for deep learning were generated from the original LC map, except for defining the building class, which was extracted from the digital topographic map. This process helped clearly denote residential areas and improved the accuracy of the results.
Because the LC and digital topographic maps were provided in the ESRI shapefile format, the two maps can be simply merged by adding/deleting objects through the QGIS software. The merged LU map was converted into a raster format for use as a training sample. The code number for each class was assigned to a pixel value in the converted raster image. The spatial resolution of the map from NGII was 1 m, so the output resolution was set to 1 m. Examples of conversions from a vector format to a 1 m raster image in a section of Daejeon are presented in Fig. 6.
### Numerical building floor data generation
The number of building floors was used as auxiliary data in this study to increase the classification accuracy. Because of the nature of remote sensing data, classification is difficult when different objects have the same reflectance characteristics, with buildings often being affected by this problem. The building class data in the digital topographic map attributes included the administrative district code, zip code, building name, usage type, building number, number of floors, and information related to data production. Among the attributes, only the number of floors was extracted based on the expectation that it could substitute for DSM or LiDAR data by providing information on the height of a building. Because the data type of the number of building floors is an integer, an image was generated by rasterizing the digital topographic map to store the number of building floors as pixel values (Fig. 7).
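A rasterization step of this kind can be reproduced with standard open-source tools; the sketch below uses geopandas and rasterio, and the file name, the "floors" attribute name, and the grid extent are placeholders rather than the actual data layout.

```python
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_origin

buildings = gpd.read_file("topo_buildings.shp")      # placeholder file name
west, north, width, height = 0.0, 0.0, 1000, 1000    # placeholder grid extent
transform = from_origin(west, north, 1.0, 1.0)       # 1 m output resolution

# burn an integer attribute (LU class code or number of floors) into pixels
floors = features.rasterize(
    ((geom, int(n)) for geom, n in zip(buildings.geometry, buildings["floors"])),
    out_shape=(height, width), transform=transform, fill=0, dtype="int32")
```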
### Training and testing dataset generation
The map projection of the aerial orthoimage and the rasterized map was set to the Korea Central Belt 2010 (EPSG: 5186) coordinate system, which is one of Korea's official coordinate systems. The rasterized map was cropped to 240 \\(\\times\\) 240 pixels from the top-left corner to generate the training samples. However, because the resolution of the aerial orthoimage is 50 cm, which is half the resolution of the rasterized map, it was cropped to 480 \\(\\times\\) 480 pixels. Fig. 8 lists some samples of the data used for training and verification.
A total of 7,710 samples were generated in Daejeon, with 10,629 in Daegu, and 6,606 in Gwangju. Among these samples, 80 % were randomly selected as training samples, and the remaining 20 % were selected as testing samples for each city. Therefore, 6,168 samples from Daejeon, 8,503 samples from Daegu, and 5,284 samples from Gwangju were used for training, and 1,542 samples from Daejeon, 2,126 samples from Daegu, and 1,322 samples from Gwangju were used for testing. Fig. 9 presents the random selection results of the training and test samples for two example regions in Daejeon. Of the training samples, 20 % were randomly utilized as validation samples during training to avoid overfitting. During learning, samples from all cities were used.
## 3 Methodology
### Deeplab v3+
Deeplab v3+, which has the best performance among the Deeplab series, was selected as a comparative neural network because it produces stable results and is widely used in various classification studies (Chen et al., 2018; Yao et al., 2019; Zhang et al., 2019).
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
Cities & Acquisition dates & Number of images \\\\ \\hline
Daejeon & April 21, April 29, May 5, and May 26, 2018 & 80 \\\\
Daegu & May 20, May 21, August 27, October 14, and October 30, 2017 & 126 \\\\
Gwangju & April 29, May 3, and May 6, 2017 & 69 \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 1: Aerial image acquisition dates for the three studied cities.
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
Code & Land use classes & Descriptions \\\\ \\hline
0 & Clutter & All features except for the below classes \\\\
1 & Other buildings & All buildings except residential \\\\
2 & Residential buildings & All private housing and apartments \\\\
3 & Roads & Asphalt and concrete roads, including bridges, lanes, and cars \\\\
4 & Forests & Deciduous, coniferous, and mixed forest \\\\
5 & Water & Rivers, lakes, oceans \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 2: Land use (LU) classes and descriptions (National Environment Information Network System, 2021).
Figure 5: Comparison of building digitizing methods between LC and digital topographic maps.
Figure 6: Process of generating the rasterized merged map. A color palette corresponding to subclasses of the Ministry of Environment (Korea) is applied (The Ministry of Environment, 2021b).
Figure 7: Rasterized building floor data for an aerial orthoimage of Daejeon.
### ResUnet
ResUnet is a neural network developed by applying the idea of ResNet (He et al., 2016) to Unet (Ronneberger et al., 2015), which improves deep network learning performance by adding residual layers. Zhang et al. (2018) developed ResUnet to extract roads from aerial images. In addition to its good performance in road extraction, ResUnet has also shown satisfactory performance in land use and land cover classification studies (Gomes et al., 2021). Accordingly, ResUnet was also selected as a comparative neural network in this study.
### ResASPP-Unet
ResASPP-Unet is a neural network that incorporates an Atrous spatial pyramid pooling (ASPP) based on the U-Net architecture (Zhang et al., 2018). ASPP has been used in various neural networks to improve classification accuracy, especially when incorporated into the bridge component of the U-Net architecture (Constantin et al., 2018; Demir et al., 2018; Zhang et al., 2018, 2019). This neural network's performance is better than those of SVM and basic U-Net, which have exhibited excellent performance in the past. It is claimed that the hybrid neural network performed best when set to a layer depth of 11 with 64 feature dimensions.
### Conv-Depth Block (CDB) ResU-Net
CDB ResU-Net is a neural network that uses the overall architecture of the context-based ResU-Net with some modifications. The context-based ResU-Net, which we developed in our recent research, is a neural network that simultaneously improves image resolution through super-resolution and predicts pixel values with different characteristics (Yoo et al., 2021). CDB yields a stable convergence, even for validation samples. It also demonstrates good performance when the neural network has large feature dimensions.
CDB ResU-Net adopts most of the core configuration of the context-based ResU-Net. The original configuration of feature dimensions \(f=\{f_{1},f_{2},f_{3},f_{4},f_{5}\}\), the kernel size of \(3\times 3\), and the composition of the CDB were adopted. It has a typical auto-encoder structure, and batch normalization is arranged to reduce dependence on the initial weight selection and reduce the risk of overfitting (Ioffe and Szegedy, 2015). In addition, a depth-wise separable convolution (DepthConv) was inserted to reduce the number of weight parameters while having the same performance as a convolutional layer (Chen et al., 2018). The main changes to improve the performance are summarized as follows: (1) The upsampling method was changed from nearest neighbor to bilinear interpolation. (2) The bridge component that utilized the two CDBs was changed to three Conv blocks. (3) The stride of the CDB for its connection to the bridge component was changed from two to three. The scale of upsampling for the connection to the bridge component was accordingly changed to three. (4) Before the final result was output, the activation function was changed from ReLU to softmax (Fig. 10).
Figure 8: Generated training samples (input: aerial orthoimages and building floor maps, output: merged maps). The color palette is detailed in Figs. 6 and 7.
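The exact internal composition of a CDB is not fully specified here, so the following Keras sketch is only one plausible reading: a 3 \(\times\) 3 convolution and a 3 \(\times\) 3 depth-wise separable convolution (DepthConv), each followed by batch normalization, wrapped in a residual connection, with stride greater than one used for downsampling.

```python
# Hedged sketch of a Conv-Depth Block (CDB); the composition below is an
# assumption, not the authors' published definition.
from tensorflow.keras import layers

def conv_depth_block(x, filters: int, stride: int = 1):
    # 1x1 projection so the shortcut matches shape after striding
    shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(x)
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.SeparableConv2D(filters, 3, padding="same")(y)  # DepthConv
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])  # residual connection
    return layers.ReLU()(y)
```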
### Training parameter settings
The hyperparameters proposed by the corresponding neural network developers were adopted to maximize their performance. An initial learning rate of \(1\times 10^{-6}\) was applied with a rate decay scheduler (Chen et al., 2018) with different epochs (Table 3), and the cross-entropy loss function was used for all neural networks. For the optimizer, as presented in each paper, ResUnet and ResASPP-Unet used stochastic gradient descent (SGD), context-based ResU-Net used Adam, and Nesterov momentum with Adam (NAdam) (Ruder, 2016) was used for the rest. In addition, an early stopping scheduler evaluated on the validation samples was used to prevent overfitting.
Figure 9: Example areas of randomly selected training and testing samples in Daejeon.
Figure 10: CDB ResU-Net architecture (_f_: feature dimension).
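As a concrete illustration, the training configuration described above might be assembled as follows in TensorFlow/Keras; the multiplicative form of the decay schedule is an assumption, and the model and dataset arguments stand for any of the networks and samples described above.

```python
# Sketch of the described training setup: NAdam optimizer, cross-entropy loss,
# initial learning rate 1e-6 with decay, and early stopping with patience 10.
import tensorflow as tf

def train(model: tf.keras.Model, train_ds, val_ds, epochs: int = 100):
    callbacks = [
        # Assumed multiplicative decay per epoch; the paper's exact schedule
        # follows Chen et al. (2018).
        tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.9),
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                         restore_best_weights=True),
    ]
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-6),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=epochs, callbacks=callbacks)
```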
The feature dimensions of \(f=\) {64, 128, 256, 512, 1024}, which yielded the best performance in previous studies, were adopted for the context-based ResU-Net and CDB ResU-Net. The TensorFlow Nightly (2.5.0) GPU version was used for all experiments. Two NVIDIA GeForce RTX-2080 Ti 11 GB GPUs and one RTX-3090 24 GB GPU were utilized in parallel.
## 4 Results and discussion
For all models, we used the same training dataset described above, but training was conducted according to the individual parameters of each neural network. The final results were obtained by triggering the early stopping scheduler if the loss calculated based on the validation samples did not decrease after ten epochs. Table 3 presents the accuracy results in terms of various metrics for each neural network.
In the experiments, the UA, PA, and F1-scores of the CDB ResU-Net were higher than those of the other neural networks. Using the building floor data helped increase the OA and kappa coefficient, with the former increasing by at least 0.9 % and up to a maximum of 3.0 %. Specifically, the accuracy of the other building and residential classes substantially improved, and for CDB ResU-Net, UA was the highest for most classes, which is advantageous for LU map users.
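For reference, all of the reported metrics can be derived from a single confusion matrix; the sketch below assumes rows index the true classes and columns the predicted classes.

```python
# User's accuracy (UA, precision), producer's accuracy (PA, recall), F1,
# overall accuracy (OA), and Cohen's kappa from a confusion matrix C,
# where C[i, j] counts pixels of true class i predicted as class j.
import numpy as np

def accuracy_metrics(C: np.ndarray):
    n = C.sum()
    ua = np.diag(C) / C.sum(axis=0)          # per-class precision
    pa = np.diag(C) / C.sum(axis=1)          # per-class recall
    f1 = 2 * ua * pa / (ua + pa)
    oa = np.trace(C) / n
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return ua, pa, f1, oa, kappa
```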
ResASPP-Unet performed the worst, followed by Deeplab v3+ and ResUnet, depending on whether or not the building floor data were used. This is because, in the case of ResUnet, roads were not classified well, and in the case of ResASPP-Unet and Deeplab v3+, the classification accuracy related to buildings was very low compared to the other classes. For ResASPP-Unet and Deeplab v3+, the general difference from the other neural networks is that they use ASPP. Although it is difficult to generalize, we speculate that ASPP, which uses distant pixel information together, is not suitable for data wherein various buildings are densely populated.
On the other hand, ResUnet showed good classification performance for the residential building and water classes, even compared to CDB ResU-Net, but showed lower overall performance for the other classes. In addition, although the overall accuracy increased when the building floor data were used, the effect was found to be insignificant for building-related class classification.
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c c c c} \\hline \\hline
\\multicolumn{16}{c}{**Solely using aerial orthoimages**} \\\\ \\hline
Classes & \\multicolumn{3}{c}{ResASPP-Unet (epoch: 130)} & \\multicolumn{3}{c}{Deeplab v3+ (epoch: 50)} & \\multicolumn{3}{c}{ResUnet (epoch: 70)} & \\multicolumn{3}{c}{Context-based ResU-Net (epoch: 100)} & \\multicolumn{3}{c}{CDB ResU-Net (epoch: 100)} \\\\
 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 \\\\ \\hline
Residential buildings & 13.3 & 13.0 & 0.131 & 20.5 & 9.9 & 0.134 & **68.6** & 64.5 & **0.665** & 42.2 & 21.3 & 0.283 & 50.8 & 17.8 & 0.264 \\\\
Other buildings & 20.9 & 23.0 & 0.219 & 43.9 & 28.7 & 0.347 & 51.6 & 36.6 & 0.428 & 59.5 & 34.1 & 0.433 & **61.2** & **38.1** & **0.469** \\\\
Roads & 36.4 & 37.5 & 0.369 & 45.1 & 35.9 & 0.400 & 16.1 & 0.8 & 0.016 & **72.7** & 47.8 & 0.577 & 67.4 & **56.3** & **0.614** \\\\
Forests & 82.2 & 89.0 & 0.854 & 77.1 & 91.6 & 0.837 & 51.1 & 46.2 & 0.485 & 90.3 & 93.2 & 0.917 & **91.6** & **94.0** & **0.928** \\\\
Water & 62.8 & 52.2 & 0.570 & 41.0 & 72.0 & 0.522 & 73.3 & **92.6** & **0.818** & 76.3 & 75.9 & 0.761 & **84.2** & 72.2 & 0.778 \\\\
Clutter & 67.9 & 61.7 & 0.646 & 71.2 & 61.2 & 0.658 & 42.8 & 2.7 & 0.051 & 72.5 & 84.2 & 0.779 & **75.0** & **84.3** & **0.794** \\\\ \\hline
OA & \\multicolumn{3}{c}{66.4\\%} & \\multicolumn{3}{c}{68.0\\%} & \\multicolumn{3}{c}{68.3\\%} & \\multicolumn{3}{c}{79.4\\%} & \\multicolumn{3}{c}{80.7\\%} \\\\
Kappa coefficient & \\multicolumn{3}{c}{0.508} & \\multicolumn{3}{c}{0.527} & \\multicolumn{3}{c}{0.516} & \\multicolumn{3}{c}{0.689} & \\multicolumn{3}{c}{0.711} \\\\ \\hline \\hline
\\multicolumn{16}{c}{**Aerial orthoimages combined with building floor data**} \\\\ \\hline
Classes & \\multicolumn{3}{c}{ResASPP-Unet (epoch: 130)} & \\multicolumn{3}{c}{Deeplab v3+ (epoch: 50)} & \\multicolumn{3}{c}{ResUnet (epoch: 80)} & \\multicolumn{3}{c}{Context-based ResU-Net} & \\multicolumn{3}{c}{CDB ResU-Net (Ours)} \\\\
 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 & UA & PA & F1 \\\\ \\hline
Residential buildings & 12.9 & 13.7 & 0.133 & 24.6 & 1.1 & 0.022 & **68.0** & **65.0** & **0.664** & 67.3 & 44.0 & 0.532 & 67.9 & 55.9 & 0.613 \\\\
Other buildings & 24.9 & 28.2 & 0.265 & 47.6 & 35.1 & 0.404 & 50.2 & 37.6 & 0.430 & 70.7 & 74.6 & 0.726 & **75.0** & **77.7** & **0.763** \\\\
Roads & 39.4 & 39.7 & 0.396 & 46.0 & 34.2 & 0.392 & 44.2 & 5.2 & 0.093 & 82.5 & 49.8 & 0.621 & **83.3** & **52.2** & **0.642** \\\\
Forests & 83.4 & 88.7 & 0.860 & 81.3 & 87.2 & 0.841 & 52.7 & 42.8 & 0.472 & 90.5 & 93.1 & 0.918 & **92.2** & **93.8** & **0.930** \\\\
Water & 64.0 & 53.9 & 0.585 & 38.3 & 74.5 & 0.506 & … & … & … & … & … & … & … & … & … \\\\ \\hline
OA & \\multicolumn{3}{c}{…} & \\multicolumn{3}{c}{…} & \\multicolumn{3}{c}{…} & \\multicolumn{3}{c}{…} & \\multicolumn{3}{c}{83.7\\%} \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 3: Classification accuracy of each neural network without (top) and with (bottom) the building floor data (UA: user's accuracy, PA: producer's accuracy, F1: F1-score, OA: overall accuracy).
The merged LU map generated by each neural network is shown in Figs. 11-13. For ResASPP-Unet, despite repeated experiments, a \(5\times 5\) chessboard pattern was observed in the final image. Because the convolution layer size of the bridge component is \(5\times 5\), this result can be attributed to a lack of information caused by using fewer image bands than those in prior studies. As in previous studies, further experimentation is required with additional information, perhaps by using near-infrared imaging. For Deeplab v3+, small groups are scattered on the generated map, probably because the decoder performs quadruple upsampling twice. If we only analyze the results of this experiment, there appears to be a limit to learning for LU maps wherein various geographic features are closely mixed. Unlike the two neural networks mentioned above, the predictions of the proposed neural network were similar to the ground truth.
When the building floor data were used, the boundary and interior of each building were clearly detected (Fig. 12a, 12b, and 12c). Specifically, pixels corresponding to a single building did not present different classes; we speculate that the floor data causes one class to become dominant within each building. In addition, many areas where the predicted roads were more similar to the ground truth can be observed, which seems to have resulted from distinguishing roads from buildings (Fig. 11a, 11c, and 12b).
Neural network performance can be clearly confirmed in small areas where various buildings and roads are mixed. Unlike ResASPP-Unet and Deeplab v3+, which had relatively poor prediction performance, ResUnet, context-based ResU-Net, and CDB ResU-Net predicted buildings and roads relatively accurately (Figs. 11 and 12). Unlike other neural networks that misclassify some buildings or roads as forests or water (Fig. 11a, 11b, and 11d), CDB ResU-Net predicted the outline of a building and its interior accurately and suitably detected roads that were not found by other neural networks (Fig. 11c and Fig. 12).
Figure 11: Generated maps with inputs and outputs: residential area. Black boxes are marked for explanations in the text.
For the permeable layers of forest and water, where features with similar reflectance characteristics are gathered, the classification of all neural networks was more accurate than that for urban areas (Fig. 13), but in the case of ResUnet, an instance of water being misclassified as forest was found (Fig. 13d). ResUnet, ResASPP-Unet, and Deeplab v3+ did not discriminate correctly when there were other geographic features within the group, whereas context-based ResU-Net and CDB ResU-Net accurately distinguished non-forested regions within a forest (Fig. 13a and 13b).
In some cases, identifiable features on LU maps or aerial orthoimages were not adequately detected. For example, the model did not detect unpaved and tree-covered roads, as shown in Fig. 11a, 13a, and 13c. In addition, we observed an area where residential building detection failed (Fig. 12b and 12c); we speculate that this occurred because it is currently difficult to distinguish residential buildings based on only the color of the roof and the number of floors. In addition, many areas with road detection failure were observed (Fig. 11c and 12d).
Figure 12: Generated maps with inputs and outputs: mixed area. Black boxes are marked for explanations in the text.
## 5 Conclusions
In this study, 24,945 sample images (Daejeon, 7,710; Daegu, 10,629; and Gwangju, 6,606) were acquired from three metropolitan areas with different characteristics in Korea to automatically generate LU maps. In addition, aerial orthoimages, digital topographic maps, and LC maps provided by the Korean government were used. The training samples were generated and utilized to train five neural networks. To increase OA, we developed a CDB ResU-Net that automatically generates LU maps based on land use classes using aerial orthoimages. The proposed CDB ResU-Net is an improved version of the context-based ResU-Net, as confirmed by its superior performance over Deeplab v3+, ResUnet, ResASPP-Unet, and context-based ResU-Net for various metrics.
The findings of this study are as follows: (1) when the number of building floors was utilized as auxiliary data, the classification accuracy of residential and other buildings increased, and (2) the proposed CDB ResU-Net had a strong classification performance.
In this study, building floor data were used as a substitute for elevation, but information on elevation would nonetheless be useful in future research to help ensure accurate modeling. In other words, if high-resolution 3D data such as 3D LiDAR point clouds or DSM data can be obtained, it will be possible to further clarify the relationship between the classes defined in this study and elevation data.
Figure 13: Generated maps with inputs and outputs: permeable layer area. Black boxes are marked for explanations in the text.
## Funding
This research was supported by a grant (2018-MOIS33-001) from the Lower-level Core Disaster-Safety Technology Development Program, which is funded by the Ministry of Interior and Safety (MOIS, Korea).
## Data availability statement
Only Korean citizens can access the data due to legal restrictions.
## CRediT authorship contribution statement
**Suhong Yoo:** Conceptualization, Methodology, Software, Writing - original draft. **Jisang Lee:** Methodology, Software. **Mohammad Gholami Farkoushi:** Data curation, Software. **Eunkwan Lee:** Data curation, Visualization. **Hong-Gyoo Sohn:** Writing - review & editing, Investigation.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Abstract: Unlike land classification maps, it is difficult to automate the generation of land use (LU) maps. The deep learning approach is a state-of-the-art methodology that can expedite the creation of LU maps. However, as the deep learning output depends on the training input, it is critical to decide upon the input that should be selected. In this study, a method for securing accurate LU information is established and used for ground truthing, using data on the number of building floors extracted from a digital topographic map and 51 cm resolution aerial orthoimages as inputs. To this end, we developed a Conv-Depth Block (CDB) ResU-Net architecture. To verify the versatility of the proposed network, our neural network was applied to three complex metropolitan areas with different LU characteristics in Korea. The accuracy of LU maps for these cities was improved by combining convolution layers and depth-wise separable convolution as well as by including numerical building floor data. The proposed CDB ResU-Net achieved an overall accuracy of 83.7 % for the test samples. Our network exhibited an improved performance compared to Deeplab v3+, ResUnet, ResASPP-Unet, and context-based ResU-Net in classifying residential classes, which is crucial for estimating the degree of exposure in urban risk analyses.
1990-present
3: Mathematical analogues and simulation models
A.L. Sullivan
## 1 Introduction
### History
The ultimate aim of any prediction system is to enable an end user to carry out useful predictions. A useful prediction is one that helps the user achieve a particular aim. Inthe field of wildland fire behaviour, that aim is primarily to stop the spread of the fire or to at least reduce its impact on life and property. The earliest efforts at wildland fire behaviour prediction concentrated on predicting the likely danger posed by a particular fire or set of conditions prior to the outbreak of a fire. These fire danger systems were used to set the level of preparedness of suppression resources or to aid in the identification of the onset of bad fire weather for the purpose of calling total bans on intentionally lit fires.
In addition to a subjective index of fire danger, many of early fire danger systems also provided a prediction of the likely spread of a fire, as a prediction of the rate of forward spread of the fire, the rate of perimeter increase or rate of area increase. In many cases, these predictions were used by users to plot the likely spread of the fire on a map, thereby putting the prediction in context with geographic features or resource locations, and constituted the first form of fire spread simulation.
Because much of the development of the early wildland fire behaviour models was carried out by those organisations intended to use the systems, the level of sophistication of the model tended to match the level of sophistication of the technology used to implement it. Thus, the early fire spread models provided only a single dimension prediction (generally the forward rate of spread of the headfire) which could be easily plotted on a map and extrapolated over time. While modern wildland fire spread modelling has expanded to include physical approaches (Sullivan, 2007a), all modern operational fire spread models have continued this empirical approach in the form of one-dimensional spread models (Sullivan, 2007b). Much of the development of technology for implementing the models in a simulation environment has concentrated on methods for converting the one-dimensional linear model of fire spread to that of two-dimensional planar models of fire spread.
In parallel with approaches to implement existing empirical models of fire spread have been efforts to approach the simulation of fire spread across the landscape from a holistic perspective. This has resulted in the use of methods other than those directly related to the observation, measurement and modelling of fire behaviour. These methods are mathematical in nature and provide an analogue of fire behaviour. Many of these approaches have also paralleled the development of the computer as a computational device to undertake the calculations required to implement the mathematical concepts.
An increase in the capabilities of remote sensing, geographical information systems and computing power during the 1990s resulted in a revival of interest in fire behaviour modelling as applied to the prediction of spread across the landscape.
### 1.2 Background
This series of review papers endeavours to comprehensively and critically review the extensive range of modelling work that has been conducted in recent years. The range of methods that have been undertaken over the years represents a continuous spectrum of possible modelling (Karplus, 1977), ranging from the purely physical (those that are based on fundamental understanding of the physics and chemistry involved in the behaviour of a wildland fire) through to the purely empirical (those that have been based on phenomenological description or statistical regression of fire behaviour). In between is a continuous meld of approaches from one end of the spectrum or the other. Weber (1991) in his comprehensive review of physical wildland fire modelling proposed a system by which models were described as physical, empirical or statistical, depending on whether they account for different modes of heat transfer, make no distinction between different heat transfer modes, or involve no physics at all. Pastor et al. (2003) proposed descriptions of theoretical, empirical and semi-empirical, again depending on whether the model was based on purely physical understanding, of a statistical nature with no physical understanding, or a combination of both. Grishin (1997) divided models into two classes, deterministic or stochastic-statistical. However, these schemes are rather limited given the combination of possible approaches and, given that describing a model as semi-empirical or semi-physical is a 'glass half-full or half-empty' subjective issue, a more comprehensive and complete convention was required.
Thus, this review series is divided into three broad categories: Physical and quasi-physical models; Empirical and quasi-empirical models; and Simulation and mathematical analogue models. In this context, a physical model is one that attempts to represent both the physics and chemistry of fire spread; a quasi-physical model attempts to represent only the physics. An empirical model is one that contains no physical basis at all (generally only statistical in nature), a quasi-empirical model is one that uses some form of physical framework upon which to base the statistical modelling chosen. Empirical models are further subdivided into field-based and laboratory-based. Simulation models are those that implement the preceding types of models in a simulation rather than modelling context. Mathematical analogue models are those that utilise a mathematical precept rather than a physical one for the modelling of the spread of wildland fire.
Since 1990, there has been rapid development in the field of spatial data analysis, e.g. geographic information systems and remote sensing. As a result, I have limited this review to works published since 1990. However, as much of the work that will be discussed derives or continues from work carried out prior to 1990, such work will be included much less comprehensively in order to provide context.
### 1.3 Previous reviews
Many of the reviews that have been published in recent years have been for audiences other than wildland fire researchers and conducted by people without an established background in the field. Indeed, many of the reviews read like purchase notes by people shopping around for the best fire spread model to implement in their part of the world for their particular purpose. Recent reviews (e.g. Perry (1998); Pastor et al. (2003); etc), while endeavouring to be comprehensive, have offered only superficial and cursory inspections of the models presented. Morvan et al. (2004) takes a different line by analysing a much broader spectrum of models in some detail and concludes that no single approach is going to be suitable for all uses.
While the recent reviews provide an overview of the models and approaches that have been undertaken around the world, mention must be made of significant reviews published much earlier that discussed the processes in wildland fire propagation themselves. Foremost is the work of Williams (1982), which comprehensively covers the phenomenology of both wildland and urban fire, the physics and chemistry of combustion, and is recommended reading for the beginner. The earlier work of Emmons (1963, 1966) and Lee (1972) provides a sound background on the advances made during the post-war boom era. Grishin (1997) provides an extensive review of the work conducted in Russia in the 1970s, 80s and 90s.
The first paper in this series discussed those models based upon the fundamental principles of the physics and chemistry of wildland fire behaviour. The second paper in the series discussed those models based directly upon only statistical analysis of fire behaviour observations or models that utilise some form of physical framework upon which the statistical analysis of observations have been based. Particular distinction was made between observations of the behaviour of fires in the strictly controlled and artificial conditions of the laboratory and those observed in the field under more naturally occurring conditions.
This paper, the final in the series, focuses upon models concerned only with the simulation of fire spread over the landscape and models that utilise mathematical conceits analogous to fire spread but which have no real-world connection to fire. The former generally utilise a pre-existing fire spread model (which can be physical, quasi-physical, quasi-empirical or empirical) and implements it in such a way as to simulate the spread of fire across a landscape. As such, it is generally based upon a geographic information system (GIS) of some description to represent the landscape and uses a propagation algorithm to spread the fire perimeter across it. The latter models are for the most part based upon accepted mathematical functions or concepts that have been applied to wildland fire spread but are not derived from any understanding of wildland fire behaviour. Rather, these models utilise apparent similarities between wildland fire behaviour and the behaviour of these concepts within certain limited contexts. Because of this, these mathematical concepts could equally be applied to other fields of endeavour and, for the most part have been, to greater or lesser success.
Unlike the preceding entries in this series, this paper is segmented by the approaches taken by the various authors, not by the authors or their organisations, given the broad range of authors that in some instances have taken similar approaches.
## 2 Fire Spread Simulations
The ultimate aim of any fire spread simulation development is to produce a product that is practical, easy to implement and provides timely information on the progress of fire spread for wildland fire authorities. With the advent of cheap personal computing and the increased use of geographic information systems, the late 1980s and early 1990s saw a flourishing of methods to predict the spread of fires across the landscape (Beer, 1990a). As the generally accepted methods of predicting the behaviour of wildland fires at that time were (and still are) one-dimensional models derived from empirical studies (Sullivan, 2007b), it was necessary to develop a method of converting the single-dimension forward spread model into one that could spread the entire perimeter in two dimensions across a landscape. This involves two distinct processes: firstly, representing the fire in a manner suitable for simulation, and secondly, propagating that perimeter in a manner suitable for the perimeter's representation.
Two approaches for the representation of the fire have been implemented in a number of software packages. The first treats the fire as a group of mainly contiguous independent cells that grows in number, described in the literature as a raster implementation. The second treats the fire perimeter as a closed curve of linked points, described in the literature as a vector implementation.
The propagation of the fire is then carried out using some form of expansion algorithm. There are two main methods used. The first expands the perimeter based on a direct-contact or near-neighbour proximity spread basis. The second is based upon Huygens' wavelet principle, in which each point on the perimeter becomes the source of the next interval of fire spread. While the method of propagation and method of fire representation are often tied (for example, Huygens' wavelet principle is most commonly used in conjunction with a vector representation of the fire perimeter), there is no reason why this should be so, and methods of representation and propagation can be mixed.
### 2.1 Huygens' wavelet principle
Huygens' wavelet principle, originally proposed for the propagation of light waves, was first proposed in the context of fire perimeter propagation by Anderson et al. (1982). In this case, each point on a fire perimeter is considered a theoretical source of a new fire, the characteristics of which are based upon the given fire spread model and the prevailing conditions at the location of the origin of the new fire. The new fires around the perimeter are assumed to ignite simultaneously, to not interact and to spread for a given time, \(\Delta t\). During this period, each new fire attains a certain size and shape, and the outer surface of all the individual fires becomes the new fire perimeter for that time.
Anderson et al. (1982) used an ellipse to define the shape of the new fires with the long axis aligned in the direction of the wind. Ellipse shapes have been used to describe fire spread in a number of fuels (Peet, 1965; McArthur, 1966; Van Wagner, 1969) and, although many alternative and more complex shapes have been proposed (e.g. double ellipse, lemniscate and tear drops (Richards, 1995)), the ellipse shape has been found to adequately describe the propagation of wildland fires allowed to burn unhindered for considerable time (Anderson et al., 1982; Alexander, 1985). The geometry of the ellipse template is determined by the rate of forward spread as predicted by the chosen fire spread model and a suitable length-to-breadth ratio (L:B) to give the dimensions of the ellipse (Fig. 1). McArthur (1966) proposed ratios for fires burning in grass fuels in winds up to \(\simeq\) 50 km h\({}^{-1}\). Alexander (1985) did the same for fires in conifer forests also up to a wind speed of \(\simeq\) 50 km h\({}^{-1}\).
Figure 2 illustrates the application of Huygens' principle to the propagation of a fire perimeter utilising the ellipse template. A section of perimeter is defined by a series of linked nodes that act as the sources of a series of new fires. The geometry of the ellipse used for each new fire is determined by the prevailing conditions, the chosen fire spread model and length-to-breadth ratio model, and the given period of propagation \(\Delta t\). In the simple case of homogeneous conditions, all ellipses are the same and the propagation is uniform in the direction of the wind. The boundary of the new ellipses forms the new perimeter at \(t+\Delta t\).
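The template-ellipse step can be sketched compactly for the homogeneous case. The implementation below makes two assumptions not stated above: the source point of each wavelet sits at the rear focus of its ellipse, and the outer envelope is approximated by a convex hull, which is adequate only while the perimeter remains convex in uniform fuel and wind.

```python
# Minimal sketch of one Huygens' wavelet step under homogeneous conditions.
import numpy as np
from scipy.spatial import ConvexHull

def huygens_step(perimeter, ros, lb, wind_dir, dt, n=32):
    """perimeter: (N,2) array of nodes; ros: head rate of spread (m/s);
    lb: length-to-breadth ratio; wind_dir: wind direction (radians)."""
    # Semi-axes chosen so head spread a + c equals ros*dt (focus assumption).
    a = ros * dt / (1.0 + np.sqrt(1.0 - 1.0 / lb**2))
    b = a / lb
    c = np.sqrt(a**2 - b**2)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Ellipse in local coords, shifted so the source point is at the rear focus.
    ex, ey = a * np.cos(t) + c, b * np.sin(t)
    cos_w, sin_w = np.cos(wind_dir), np.sin(wind_dir)
    pts = []
    for (px, py) in perimeter:
        pts.append(np.column_stack([px + ex * cos_w - ey * sin_w,
                                    py + ex * sin_w + ey * cos_w]))
    pts = np.vstack(pts)
    return pts[ConvexHull(pts).vertices]  # outer envelope = new perimeter
```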
Richards (1990, 1995) produced analytical solutions for this modelling approach, for a variety of template shapes, in the form of a pair of differential equations. A computer algorithm (Richards and Bryce, 1996) that utilises these equations was developed and subsequently incorporated into fire simulation packages, including FARSITE (USA) (Finney, 1994, 1998) and Prometheus (Canada) (CWFGM Steering Committee, 2004). An alternative method that utilises the elliptical geometry only is that of Knight and Coleman (1993) which is used in SiroFire (Australia) (Beer, 1990a; Coleman and Sullivan, 1996). This method provides solutions to the two main problems with the closed curve expansion approach using Huygens' wavelet principle, namely rotations in the perimeter, in which a section of perimeter turns itself inside-out, and enclosures, in which unburnt fuel is enclosed by two sections of the same perimeter (termed a 'bear hug'). A similar method was proposed by Wallace (1993).
FARSITE is widely used in the US by federal and state land management agencies for predicting fire spread across the landscape. It is based upon the BEHAVE (Andrews, 1986) fire behaviour prediction system, which itself is based upon the spread model of Rothermel (1972). It includes models for fuel moisture content (Nelson, 2000), spotting (Albini, 1979), post-front fuel consumption (Albini and Reinhardt, 1995; Albini et al., 1995), crown-fire initiation (Van Wagner, 1977) and crown-fire spread (Rothermel, 1991). It is PC-based in MS-Windows and utilises the ARCView GIS system for describing the spatial fuel data and topography.
SiroFire was developed for operational use in Australia and utilises McArthur's fire spread models for grass (McArthur, 1966) and forest (McArthur, 1967), as well as the recommended replacement grassland model (Cheney et al., 1998) and versions of Rothermel's model configured for Australian grass and forest litter fuel. While it was never used operationally, it did find use as a training tool for volunteer bushfire firefighters. It uses a proprietary geographic format intended to reduce computation time, with data derived from a number of GIS platforms. It was PC-based using DOS protected mode, although it would run under MS-Windows. It has now been subsumed into a risk management model, Phoenix, being developed by the University of Melbourne.
The Canadian Wildland Fire Growth Model, Prometheus, is based on the Canadian Fire Behaviour Prediction (FBP) System (Forestry Canada Fire Danger Group, 1992) and utilises the wavelet propagation algorithms of Richards (1995); Richards and Bryce (1996) to simulate the spread of wildland fire across landscapes. It was initially developed by the Alberta Sustainable Resource Development, Forest Protection Division for the Canadian Forest Service, but is now a national interagency endeavour across Canada endorsed by the Canadian Interagency Forest Fire Centre. It is Windows-based, utilises maps and geographic data exported from the Esri GIS platform ARC and is intended for use as a realtime operational management tool. As with FARSITE and SiroFire, Prometheus allows the user to enter and edit fuel and meteorological data and carry out simulations of fire spread.
The symmetric nature of the template ellipse in conjunction with the application of Huygens' wavelet principle neatly provides the flank and rear spread of a fire. By relating the flank and rear spread through the ellipse geometry, the single forward rate of spread of the fire is all that is needed to simulate the spread of the entire perimeter. French et al. (1990) found that in homogeneous fuels and weather conditions, Huygens' wavelet principle with the template ellipse shape suitably modelled fire spread, with only small distortion of the fire shape. However, such a method cannot adequately handle heterogeneous conditions and fuels: errors are introduced through changes in the conditions during the period \(\Delta t\), as well as distortions in the fire perimeter due to artifacts of the Huygens' wavelet method.
Changes in conditions during the propagation period \\(\\Delta t\\) cause the predicted perimeter to over- or under-predict the spread of the perimeter because those changes are not reflected in the predicted perimeter. Reducing \\(\\Delta t\\) can reduce the impact of such changes and a flexible approach to the setting of \\(\\Delta t\\) has been used with great success (Finney, 1998).
Eklund (2001) implemented the method of Knight and Coleman (1993) as a fire propagation engine coupled with a geographic database on a distributed network such as the World Wide Web (WWW).
### 2.2 Raster-based simulation
In a raster-based simulation, the fire is represented by a raster grid of cells whose state is unburnt, burning or burnt. This method is computationally less intensive than that of the closed curve (vector) approach, and is much more suited to heterogeneous fuel and weather conditions. However, because fuel information needs to be stored for each and every cell in the landscape, there is a trade-off between the resolution at which the data is stored and the amount of data that needs to be stored (and thus memory requirements and access times, etc.)\({}^{3}\).
Footnote 3: In vector data, fuel is stored as polygons represented by a series of data points representing the vertices of the outline of the fuel and the fuel attributes for the whole polygon. Very large areas can be stored in this fashion but with overhead in processing to determine if a point is inside the polygon.
The method of expanding the fire in this fashion is similar to that of cellular automata, in which the fire propagation is considered to be a simple set of rules defining the interaction of neighbouring cells in a lattice. I will differentiate fire propagation simulations that utilise a pre-existing fire spread model to determine the rate of fire expansion from those that are true cellular automata. The former are described here as raster- or grid-based simulations and the latter are dealt with in the following section on mathematical analogues.
Kourtz and O'Regan (1971) were the first to apply computer techniques to the modelling of fire spread across a landscape. Initially simulating the smouldering spread of small fires (\(<0.02\) ha) using a grid of 50 \(\times\) 50 square cells each of 1 ft\({}^{2}\) in no wind and no slope, this model was extended using a combination of Canadian and US (Rothermel, 1972) fire behaviour models (Kourtz et al., 1977), and the output was in the form of a text-based graphical representation of predicted spread. King (1971) developed a model of rate-of-area increase of aerial prescribed burns (intended for use on a hand-held calculator) based on an idealised model of the growth of a single spot fire. Frandsen and Andrews (1979) utilised a hexagonal lattice to represent heterogeneous fuel beds and a least-time-to-ignition heat balance model to simulate fire spread across it. Green (1983); Green et al. (1983) generalised the approach of Kourtz and O'Regan and investigated the effect of discontinuous, heterogeneous fuel in square lattices on fire shape, utilising both heat balance and flame contact spread models, and found that while fire shapes are less regular than in continuous fuels, the fires tended to become more elliptical in shape as the fire progressed, regardless of the template shape used.
Green et al. (1990) produced a landscape modelling system called IGNITE that utilised the fire spread mechanics of Green (1983). This system is a raster-based fire spread model that uses fire spread models of McArthur (1966, 1967) as retro-engineered by Noble et al. (1980) and an elliptical ignition template to predict the rate of forward spread in the form of \"time to ignition\" for each cell around a burning cell. IGNITE very easily deals with heterogenous fuels and allows the simulation of fire suppression actions through changes in the combustion characteristics of the fuel layers.
Kalabokidis et al. (1991) and Vasconcelos and Guertin (1992) introduced similar methods to spatially resolve Rothermel's spread model in BEHAVE by linking it to raster-based GIS platforms. Kalabokidis et al. (1991) developed a simulation technique that derived a 'friction' layer within the GIS for six base spread rates for which the friction value increased as spread rate decreased. This was combined with six wind speed classes to produce a map of potential fire extent contours and fireline intensity strata across a range of slope and aspect classes. Vasconcelos and Guertin (1992) developed a similar simulation package called FIREMAP that continued the earlier work of Vasconcelos et al. (1990). FIREMAP stored topographic, fuel and weather information as rasterised layers within the GIS. It is assumed that the resolution of the rasters are such that all attributes within each cell are uniform. Fire characteristics such as rate of spread, intensity, direction of maximum spread, flame length, are calculated for each cell and each weather condition to produce a database of output maps of fire behaviour. Simulation is then undertaken by calculating each cell's 'friction' or time taken for a fire front to consume a cell. Ball and Guertin (1992) extended the work of Vasconcelos by improving the method used to implement the cell to cell spread by adjusting the ROS for flank and rear spread based upon BEHAVE's cosine relation with head fire ROS. The authors found the resulting predicted fire shapes to be unnaturally angular and attribute this to the poor relation for flank spread given by BEHAVE, the regular lattice shape of the raster, and the fact that spread angles are limited, concluding that the raster structure cannot properly represent 'the continuous nature of the actual fire'.
Karafyllidis and Thanailakis (1997) developed a raster-based simulation also based on Rothermel (1972) for hypothetical landscapes. The state of each raster cell is the ratio of the area burned of the cell to the total area of the cell. The passage of the fire front is determined by the sum of the states of each cell's neighbours at each time step until the cell is completely burnt. This approach requires, as input parameter for each cell, the rate of spread of a fire in that cell based on the fuel alone. Berjak and Hearne (2002) improved the model by incorporating the effects of slope and wind on the scalar field of cell rate of spread using the slope effect model of Cheney (1981) and an empirical flame angle/wind speed function. This model was then applied to spatially heterogeneous Savanna fuels of South Africa and found to be in good agreement with observed fire spread.
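The 'least time-to-ignition' and 'friction' formulations above share a common kernel: fire arrival at a cell is the minimum travel time from the ignition point over a scalar field of per-cell spread rates. The following generic sketch (not a reimplementation of any of the cited systems) computes such arrival times with a Dijkstra-style search over an 8-connected raster; averaging the spread rates of adjacent cells is an arbitrary choice.

```python
# Generic sketch of raster propagation by least time-to-ignition.
import heapq
import numpy as np

def arrival_times(ros: np.ndarray, ignition, cell_size: float = 1.0):
    """ros: (H,W) spread rate per cell (m/s, 0 = unburnable); ignition: (row,col)."""
    H, W = ros.shape
    t = np.full((H, W), np.inf)
    t[ignition] = 0.0
    heap = [(0.0, ignition)]
    while heap:
        t0, (r, c) = heapq.heappop(heap)
        if t0 > t[r, c]:
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < H and 0 <= cc < W and ros[rr, cc] > 0:
                    dist = cell_size * np.hypot(dr, dc)
                    # travel time using the mean of the two cells' spread rates
                    t1 = t0 + dist / (0.5 * (ros[r, c] + ros[rr, cc]))
                    if t1 < t[rr, cc]:
                        t[rr, cc] = t1
                        heapq.heappush(heap, (t1, (rr, cc)))
    return t
```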
FireStation (Lopes et al., 1998, 2002) and PYROCART (Perry et al., 1999) both implement Rothermel's fire spread model in a raster-based GIS platform. FireStation utilises both single and double-ellipse fire shape templates, depending on wind speed, to dictate the spread across cells. The 3-dimensional wind field across the landscape is based on local point observations extrapolated using either a linear conservation of mass approach, or a full 3-dimensional solution of the Navier-Stokes equations. Slope is treated as an 'equivalent' wind. PYROCART utilises the fire shape model of Green et al. (1990) which is a function of wind speed. It was validated against a small wildfire and its predictive accuracy (a measure of performance based on the percentage of cells predicted to be burnt compared to those that were unburnt or not predicted to burn) estimated to be 80%.
Guariso and Baracani (2002) extend the standard 2-dimensional approach to modelling the spread of a surface fire by implementing two levels of raster-based models, one to represent surface fuel and its combustion and another to represent, once a critical threshold value has been reached, the forest canopy and its combustion. They utilise Rothermel's fire spread model with fuel models modified and validated for Mediterranean fuel complexes. To improve its capabilities as an operational tool, fire fighting resources are tracked on screen using the Global Positioning System (GPS). Trunfio (2004) implemented Rothermel's model using a hexagonal cell shape and found that the model did not produce the spurious symmetries found with square-shaped lattices.
### 2.3 Other propagation methods
There are alternatives to the raster cell or vector ellipse template propagation methods described above, although these are less widespread in their use. Coupled fire-atmosphere models that incorporate a pre-existing fire spread model (such as given by Rothermel or McArthur) are, at their most basic, a form of propagation algorithm. The coupled fire-atmosphere model of Clark et al. (1996a,b, 2004) represents a considerable effort to link a sophisticated 3-dimensional high-resolution, non-hydrostatic mesoscale meteorological model to a fire spread model. In this particular case, the mesoscale meteorological model was originally developed for modelling atmospheric flows over complex terrain, solving the Navier-Stokes and continuity equations and includes terrain following coordinates, variable grid size, two-way interactive nesting, cloud (rain and ice) physics, and solar heating (Coen, 2005). It was originally linked to the empirical model of forest fire spread of McArthur (1967) (Clark et al., 1998) but was later revised to incorporate the spread model of Rothermel instead (Coen and Clark, 2000).
The atmosphere model is coupled to the fire spread model through the sensible (convection and conduction effects) and latent heat fluxes approximated from the fireline intensity (obtained via the ROS) predicted by the model for a given fuel specified in the model. Fuel is modelled on a raster grid of size 30 m (Coen, 2005). Fuel moisture is allowed to vary diurnally following a very simple sinusoid function based around an average daily value with a fixed lag time behind clock time (Coen, 2005). Assumptions are made about the amount of moisture evaporated prior to combustion. Fuel consumption is modelled using the BURNUP algorithm of Albini and Reinhardt (1995). Effects of radiation, convection and turbulent mixing occurring on unresolved scales (i.e. \\(<30\\) m) are 'treated crudely' without any further discussion.
The coarse nature of the rasterised fuel layer meant that a simple cell fire spread propagation technique was too reliant on the cell resolution. A fire perimeter propagation technique that is a unique mix of the raster- and the vector-based techniques was developed. Each cell in the fuel layer is allowed to burn at an independent rate, dependent upon the predicted wind speed at a height of 5 m, the predicted rate of spread and the fuel consumption rate. Four tracers aligned with the coordinate system, each with the appropriate ROS in the appropriate directions (headfire ROS is defined as that parallel to the wind direction) are used to track the spread of fire across a fuel cell. The coordinates of the tracers define a quadrilateral that occupies a fuel cell which is allowed to spread across the fuel cell. The tracers move across a fuel cell until they reach a cell boundary. If the adjacent cell is unburnt, it is ignited and a fresh set of tracers commenced for the boundaries of that cell. Meanwhile, once the tracers reach a cell boundary, they can then only move in the orthogonal direction. In this way, the quadrilateral can progress across the cell. The boundaries of all the quadrilaterals then make up the fire perimeter. The size of the quadrilateral then allows an estimate of the amount of fuel that has been consumed since the cell ignited. The fireline propagation method allows for internal fire perimeters, although it only allows one fireline per fuel cell.
The interaction between the heat output of the burning fuel and the 3-dimensional wind field results in complex wind patterns, which can include horizontal and vertical vortices. Clark et al. (1996a,b) and Jenkins et al. (2001) explored these interactions in producing fireline phenomena such as parabolic headfire shape, and convective and dynamic fingering. However, as wind speed increases (\(>\) 5 m s\({}^{-1}\) at 15 m above ground), Clark et al. (1996b) found that the coupling weakens and the wind flows through the fire.
The real utility of the coupled fire-atmosphere model, however, is the prediction of wind direction around the fire perimeter, used to drive the spread of the fire. This, in effect, replaces Huygens' wavelet approach with a much more physically direct method. However, the use of Rothermel's fire spread model for spread in directions other than in the direction of the prevailing wind is questionable and results in odd deformations in the fire perimeter when terrain or fuel are not uniform (Clark et al., 2004).
Several other workers have taken the same approach as Clark in linking a mesoscale meteorological model to a fire model. Gurer and Georgopoulos (1998) coupled an off-the-shelf mesoscale meteorological model, the Regional Atmospheric Modeling System (RAMS) (Pielke et al., 1992), with Rothermel's model of fire spread to predict gas and particulate fall-out from forest fires for the purposes of safety and health. The Rothermel model is used to obtain burning area and heat for input into RAMS. Submodels are used for prediction of the emission components (CO\({}_{2}\), CH\({}_{4}\), polycyclic aromatic hydrocarbons, etc.). Simulation of the fire perimeter propagation is not undertaken. Speer et al. (2001) used a numerical weather prediction model to predict the speed and direction of the wind for input into a simple empirical model of fire spread through heathland fuel to predict the rate of forward spread (not to simulate the spread) of two wildfires in Sydney in 1994.
Plourde et al. (1997) extend the application of Huygens' wavelet propagation principle as utilised by Knight and Coleman (1993). However, rather than relying on the template ellipse as the format for the next interval of propagation, the authors utilise an innovative closed-contour method based on a complex Fourier series function. Rather than considering the perimeter as a series of linked points that are individually propagated, the perimeter is considered as a closed continuous curve that is propagated in its entirety. A parametric description of the perimeter is derived in which the x and y coordinates of each point are encoded as a real and imaginary pair. However, as with Knight and Coleman (1993), a sufficiently fine time step is critical to precision and anomalies such as rotations and overlaps must be identified and removed. Plourde et al.'s propagation model appears to handle heterogeneous fuel but the timestep is given as 0.05 s, resulting in very fine scale spread but with the trade-off of heavy computational requirements.
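The contour representation at the heart of this approach can be illustrated with a truncated Fourier series: perimeter vertices are encoded as complex numbers \(z = x + iy\) and the series yields a smooth, continuous closed curve. The sketch below shows only the representation, not Plourde et al.'s propagation step; the mode count and sampling density are arbitrary choices.

```python
# Closed fire perimeter as a truncated complex Fourier series.
import numpy as np

def fourier_contour(x, y, n_modes=16, n_out=400):
    z = np.asarray(x) + 1j * np.asarray(y)      # perimeter as complex samples
    coeffs = np.fft.fft(z) / z.size             # complex Fourier coefficients
    k = np.fft.fftfreq(z.size, d=1.0 / z.size)  # integer mode numbers
    keep = np.abs(k) <= n_modes                 # truncate the series
    s = np.linspace(0.0, 1.0, n_out, endpoint=False)
    zs = sum(coeffs[i] * np.exp(2j * np.pi * k[i] * s)
             for i in np.where(keep)[0])
    return zs.real, zs.imag                     # smooth closed curve
```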
Viegas (2002) proposed an unorthodox propagation mechanism in which the fire perimeter is assumed to be a continuous entity that will endeavour to rotate to align itself with a slope, to an angle of approximately 60 degrees across the slope. Based on observations of laboratory and field experiments of line fires lit at angles to the slope, Viegas constructs a fire perimeter propagation algorithm in which he redefines the flank spread of a fire burning in a cross-slope wind as a rotation of the front.
In perhaps a sign of the times, Seron et al. (2005) take the physical model of Asensio and Ferragut (2002) and simulate it using the techniques and approaches developed for computer-generated imagery (CGI). A strictly non-realtime method is used to solve the fire spread model, utilising satellite imagery and the terrain (using flow analysis techniques), interpolated using kriging, to determine fuel and non-burnables such as water bodies and rivers. All attributes are non-dimensionalised. Wind is calculated as a vector for each cell, derived from the convection form of the physical fire model (Asensio et al., 2005). This vector is then added to the terrain gradient vector. 256 \(\times\) 256 cells are simulated, resulting in 131,589 equations that need to be solved for each timestep, which is 0.0001 s.
## 3 Mathematical Analogues
In the broader non-wildland-fire-specific literature there is a considerable number of works published involving wildland fire spread that are not based on wildland fire behaviour. For the most part, these works implement mathematical functions that appear analogous to the spread of fires and thus are described as wildland fire spread models, while in some cases wildland fire spread is used simply as a metaphor for some behaviour. These mathematical functions include cellular automata and self-organised criticality, reaction-diffusion equations, percolation, neural networks and others. This section briefly discusses some of the fire spread-related applications of these functions.
### 3.1 Cellular automata and self-organised criticality
Cellular automata (CA) are a formal mathematical idealisation of physical systems in which space and time are discretised, and physical quantities take on a finite set of values (Wolfram, 1983). CA were first introduced by Ulam and von Neumann in the late 1940s and have been known by a range of names, including cellular spaces, finite state machines, tessellation automata, homogeneous structures, and cellular structures. CA can be described as discrete space/time logical universes, each obeying their own local physics (Langton, 1990). Each cell of space is in one of a finite number of states at any one time. Generally CA are implemented as a lattice (i.e. 2D) but can be of any dimension. A CA is specified in terms of the rules that define how the state changes from one time step to the next, and the rules are generally specified as functions of the states of neighbours of each cell in the previous time step. Neighbours can be defined as those cells that share boundaries, vertices, or cells even further removed\({}^{4}\). The key attribute of a CA is that the rules that govern the state of any one cell are simple and based primarily upon the states of its neighbours, which can result in surprisingly complex behaviour, even with a limited number of possible states (Gardner, 1970), and can be capable of Universal Computation (Wolfram, 1986).
Footnote 4: In a 2D lattice, the cells sharing boundaries form the von Neumann neighbourhood (4 neighbours), cells sharing boundaries and vertices form the Moore neighbourhood (8 neighbours) (Albinet et al., 1986)
Due to their inherently spatial nature and the interrelations between neighbouring cells, CA have been used to model a number of natural phenomena, e.g. lattice gases and crystal growth (with Ising models) (Enting, 1977) and ecological modelling (Hogeweg, 1988), and have also been applied to the field of wildland fire behaviour. Albinet et al. (1986) first introduced the concept of fire propagation in the context of CA. It is a simple idealised isotropic model of fire spread, based on epidemic spread, in which cells (or 'trees') receive heat from burning neighbours until ignition occurs and then proceed to contribute heat to their unburnt neighbours. They showed that the successful spread of the fire front was dependent upon a critical density of distribution of burnable cells (i.e. 'trees') and unburnable (or empty) cells, and that this critical density reduced with the increasing number of neighbours allowed to 'heat' a cell. They also found that the fire front structure was fractal with a dimension \(\simeq 1.8\). The isotropic condition, in which spread is purely a result of symmetrical neighbour interactions (i.e. wind or slope are not considered), is classified as percolation (discussed below). von Niessen and Blumen (1988) extended the model of Albinet et al. to include anisotropic conditions such as wind and slope, in which ignition of crown and surface fuel layers was stochastic.
The idealised 'forest fire' model CA (more of a metaphor, really), along with the sandpile (avalanche) and earthquake models, was used as a primary example of self-organised criticality (Bak et al., 1987; Bak, 1996), in which it is proposed that dynamical systems with spatial degrees of freedom evolve into a critical self-organised point that is scale invariant and robust to perturbation. In the case of the forest fire model, the isotropic model of Albinet et al. (1986) was modified and investigated by numerous workers to explore this phenomenon, e.g. Bak et al. (1990); Chen et al. (1990); Drossel and Schwabl (1992, 1993); Clar et al. (1994); Drossel (1996), such that, in its simplest form, trees grow at a small fixed rate, \(p\), on empty sites. At a rate \(f\) (\(f<<p\)), sites are hit by lightning strikes (or matches are dropped) that start a fire that burns if the site is occupied. The fire spreads to every occupied site connected to that burning site and so on. Burnt sites are then considered empty and can be re-colonised by new growing trees. Consumption of an occupied site is immediate, thus the only relevant parameter is the ratio \(\theta=p/f\), which sets the scale for the average fire size (i.e. the number of trees burnt after one lightning strike) (Grassberger, 2002). Self-organised criticality occurs as a result of the rate of tree growth and the size of fire that results when ignition coincides with an occupied site, which in turn is a function of the rate of lightning strike. At large \(\theta\), less frequent but larger fires occur. As \(\theta\) decreases, the fires occur more often but are smaller. This result describes the principle underlying the philosophy of hazard reduction burning. The frequency distribution of fire size against number of fires follows a power law (Malamud et al., 1998; Malamud and Turcotte, 1999) similar to the frequency distributions found for sandpiles and earthquakes.
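A compact sketch of this simplest form of the model follows (assuming SciPy's ndimage.label for cluster identification; the synchronous sweep below is a convenient simplification of the published update schemes):

```python
import numpy as np
from scipy.ndimage import label

def ds_step(grid, p, f, rng):
    """One sweep of the Drossel-Schwabl forest-fire model: empty sites
    (0) grow a tree (1) with probability p; trees are struck with
    probability f (f << p) and the struck tree's entire 4-connected
    cluster burns down instantly, leaving empty sites."""
    grid[(grid == 0) & (rng.random(grid.shape) < p)] = 1
    strikes = (grid == 1) & (rng.random(grid.shape) < f)
    if strikes.any():
        clusters, _ = label(grid == 1)
        grid[np.isin(clusters, np.unique(clusters[strikes]))] = 0
    return grid
```

Running many sweeps at a given \(\theta=p/f\) and recording the burnt cluster sizes yields the power-law frequency distribution of fire sizes referred to above.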
More recent work (Schenk et al., 2000; Grassberger, 2002; Pruessner and Jensen, 2002; Schenk et al., 2002) has found that the original forest fire model does not truly represent critical behaviour because it is not scale invariant; in larger lattices (\(\simeq 65000\)), scaling laws are required to correct the behaviour (Pastor-Satorras and Vespignani, 2000). Reed and McKelvey (2002) compared the size-distribution of actual burned areas in six regions of North America and found that a simple power-law distribution was 'too simple' to describe the distribution over the full range. Rhodes and Anderson (1998) suggested using the forest fire model as a model for the dynamics of disease epidemics.
Self-organised criticality, however, is generally only applicable to the effect of many fires over large landscapes over long periods of time, and provides no information about the behaviour of individual bushfires. There are CA that have used actual site state and neighbourhood rules for modelling fire spread, but these have been based on an overly simple understanding of bushfire behaviour and their performance is questionable. Li and Magill (2000, 2003) attempted to model the spread of individual bushfires across a landscape modelled as a 2D CA lattice in which fuel is discrete and discontinuous. While they supposedly implemented the Rothermel wind speed/ROS function, their model shares more in common with the Drossel-Schwabl model than any raster-based fire spread model. Li and Magill determine critical 'tree' densities for fire spread across hypothetical landscapes with both slope and wind effects in order to study the effect of varying environmental conditions on fire spread. However, ignition of a cell or 'tree' is probabilistic, based on 'heat conditions' (or accumulated heat load from burning neighbours), and the 'tree' flammability is an arbitrary figure used to differentiate between dead dry trees and green 'fire-resistant' trees. Essentially this is the same as the tree immunity proposed by Drossel and Schwabl (1993). A critical density of around 41% was found for lattices up to 512 \(\times\) 512 cells. The model is not compared to actual fire behaviour.
Duarte (1997) developed a CA of fire spread that utilised a probabilistic cell ignition model built around a moisture-content-driven extinction function derived from Rothermel (1972), using an idealised parameter based on fuel characteristics, fuel moisture and heat load. Fuel was considered continuous (but for differences in moisture). Duarte investigated the behaviour of the CA and found the isotropic (windless) variant associated with undirected percolation. In the presence of wind, the CA belonged to the same universality class (i.e. the broad descriptive category) as directed percolation. Duarte notes that no CA at that time could explain the parabolic headfire shape observed in experimental fires by workers such as Cheney et al. (1993).
Rather than use hard and fast rules to define the states of a CA, Mraz et al. (1999) used the concept of fuzzy logic to incorporate the descriptive and uncertain knowledge of a system's behaviour obtained from the field in a 2D CA. Fuzzy logic is a control system methodology based on an expert system in which rates of change of output variables are given instead of absolute values; it was developed for systems in which input data is necessarily imprecise. Mraz et al. developed cell state rules (simple 'if-then-else' rules) in which input data (such as wind) is 'fuzzified' and output states are stochastic.
Hargrove et al. (2000) developed a probabilistic model of fire spread across a heterogeneous landscape to simulate the ecological effects of large fires in Yellowstone National Park, USA. Utilising a square lattice (each cell 50 m \(\times\) 50 m), they constructed a stochastic model of fire spread in which ignition of the Moore neighbourhood around a burning cell is governed by an ignition probability that is isotropic in no wind and biased in wind (using three classes of wind speed). The authors determined a critical ignition probability (isotropic) of around 0.25 in order for a fire to have a 50% chance of spreading across the lattice. Spotting is modelled based on the maximum spotting distance as determined within the SPOT module of the BEHAVE fire behaviour package (Andrews, 1986) using the three wind classes, a 3\({}^{\circ}\) random angle from that of the prevailing wind direction, and the moisture content of the fuel in the target cell to determine spot fire ignition probability. Inclusion of spotting dramatically increased the ROS of the fire and the total area burned. Validation of the model, despite considerable historical weather and fire data, has not been undertaken due to difficulties in parameterising the model and the poor resolution of the historical data.
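An illustrative form of such a wind-biased ignition probability is sketched below (in the spirit of Hargrove et al.; the cosine weighting and the gain k are placeholders, not their calibrated scheme):

```python
import numpy as np

def ignition_prob(p0, wind, offset, k=0.8):
    """Bias the base ignition probability p0 for a neighbour at the
    given (row, col) offset from a burning cell: raised downwind,
    lowered upwind, unchanged in the no-wind (isotropic) case."""
    w = np.asarray(wind, float)
    d = np.asarray(offset, float)
    nw, nd = np.linalg.norm(w), np.linalg.norm(d)
    if nw == 0.0:
        return p0
    cos = float(w @ d) / (nw * nd)   # alignment of offset with wind
    return float(np.clip(p0 * (1.0 + k * cos), 0.0, 1.0))
```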
Muzy et al. (2002, 2003, 2005a,b), Dunn and Milne (2004) and Ntaimo et al. (2004) explore the application of existing computational formalisms in the construction of automata for the modelling of wildland fire spread: Muzy et al. (2002, 2003) and Ntaimo et al. (2004) use Discrete Event System Specification (DEVS or cell-DEVS), while Dunn and Milne (2004) use CIRCAL. DEVS attempts to capture the processes involved in spatial phenomena (such as fire spread) using an event-based methodology in which a discrete event (such as ignition) at a cell triggers a corresponding discrete process to occur in that cell which may or may not interact with other cells. CIRCAL is derived from a process algebra developed for electronic circuit integration and in this case provides a rigorous formalism to describe the interactions between concurrently communicating, interacting automata in a discretised landscape to encode the spatial dynamics of fire spread.
Sullivan and Knight (2004) combined a simple 2-dimensional 3-state CA for fire spread with a simplified semi-physical model of convection. This model explored the possible interactions between a convection column, the wind field and the fire to replicate the parabolic headfire shape observed in experimental grassland fires (Cheney et al., 1993). It used local cell-based spread rules that incorporated semi-stochastic rules (allowing discontinuous, non-near neighbour spread) with spread direction based on the vector summation of the mean wind field vector and a vector from the cell to the centre of convection (as determined by overall heat output of the fire as recorded in a six-stage convection column above the fire). Fire shapes closely resembled those of fires in open grassland but ROS was not investigated.
### Reaction-Diffusion
Reaction-diffusion is a term used in chemistry to describe a process in which two or more chemicals diffuse over a surface and react with one another at the interface between them. The reaction interface in many cases forms a front which moves across the surface of the reactants and can be described using travelling wave equations. Reaction-diffusion equations are considered one of the most general mathematical models and may be applied to a wide variety of phenomena. A reaction-diffusion equation has two main components: a reaction term that generates energy and a diffusion term in which the energy is dissipated. The general solution of a reaction-diffusion equation is a travelling wave.
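In its simplest scalar form, with \(u\) a temperature-like state variable, \(D\) a diffusion coefficient and \(f(u)\) the reaction source term, the equation reads

\[\frac{\partial u}{\partial t}=D\nabla^{2}u+f(u),\]

whose one-dimensional solutions of interest are travelling waves \(u(x,t)=U(x-ct)\), with a front speed \(c\) set by \(D\) and the form of \(f\). (The works cited below use more elaborate variants of this basic form.)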
Watt et al. (1995) discussed the reduction of a two-dimensional reaction-diffusion equation describing a simple idealised fire spread model down to a single spatial dimension. An analytical solution for the temperature, and thence the speed of the wave solution, which depends on the reaction term and the upper surface cooling rate, is obtained from the reduced reaction-diffusion equation through algebraic manipulation and linearisation.
Mendez and Liebot (1997) present a mathematical analogue of the cellular automata 'forest fire' model of Drossel and Schwabl in which they start with a hyperbolic reaction-diffusion equation and then apply particular boundary conditions in order to determine the speed of propagation of a front between unburnt or 'green' trees on one side and 'burnt' trees on the other. It is assumed that in state 0 (all green) and state 1 (all burnt) the system is in equilibrium. Particular abstract constraints on the speed of the front are determined and the non-equilibrium thermodynamics between the two states is explored.
Margerit and Sero-Guillaume (1998) transformed the elliptical growth model of Richards (1990) in order to find an intrinsic expression for fire front propagation. The authors re-write the model in an optic-geometric 'variational form' in which the forest is represented as three-state 'dots' and the passage from unburnt or 'at rest' dots to burning or 'excited' dots is represented by a wave front. This form is a Hamilton-Jacobi equation that, when solved, gives the same result as Richards' model. The authors then attempt to bring physicality to Richards' model by proposing that the wave front is a temperature isotherm of ignition. They put forward two forms: a hyperbolic equation (double derivative Laplacian) and a parabolic equation (single derivative Laplacian), the latter being the standard reaction-diffusion equation (i.e. wave solutions due both to the production of energy by a reaction and to the transport of this energy by thermal diffusion and convection). After some particularly complicated algebra the authors bring the reaction-diffusion equation back to an elliptical wave solution, showing that Richards' model is actually a special case of the reaction-diffusion equation with geometry and no physics.
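The optic-geometric reading can be stated compactly as an eikonal (stationary Hamilton-Jacobi) equation. Writing \(T(\mathbf{x})\) for the arrival time of the front at \(\mathbf{x}\) and \(F(\mathbf{x},\mathbf{n})\) for the outward normal spread rate (for Richards' model, the elliptical rate in the direction of the local normal \(\mathbf{n}\)):

\[F(\mathbf{x},\mathbf{n})\,\lvert\nabla T(\mathbf{x})\rvert=1,\qquad\mathbf{n}=\frac{\nabla T}{\lvert\nabla T\rvert}.\]

This is a standard way of stating the connection, not the authors' exact notation.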
### Diffusion Limited Aggregation
Clarke et al. (1994) proposed a cellular automaton model in which the key propagation mechanism is that of diffusion limited aggregation (DLA). DLA is an aggregation process used to explain the formation of crystals and snowflakes and is a combination of the diffusion process with a restriction upon the direction of growth. DLA is related to Brownian trees, self-similarity and the generation of fractal shapes (\(1<D<2\)). In this case, fire ignitions ('firelets') are sent out from a fire source and survive by igniting new unburnt fuel. The direction of spread of the firelet is determined by the combination of environmental factors (wind direction, slope, fuel). If there is no fuel at a location, the firelet 'dies' and a new firelet is released from the source. This continues until no burnable fuel remains in direct connection with the original fire source. Clarke's model is modified somewhat such that a firelet can travel over fuel cells previously burned, so that the fire aggregates to the outer edge from the interior, and wind, fuel and terrain are used to weight or bias the direction of travel of the firelet.
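A bare-bones sketch of the firelet mechanism follows (unweighted random walks on a fuel grid; Clarke et al. additionally bias the walk with wind, slope and fuel):

```python
import numpy as np

def release_firelets(fuel, source, n_firelets, seed=0, max_steps=10_000):
    """DLA-style spread: random-walking 'firelets' leave the source
    cell and ignite the first unburnt fuel cell they land on; a
    firelet dies if it walks onto a cell with no fuel."""
    rng = np.random.default_rng(seed)
    burnt = np.zeros(fuel.shape, bool)
    burnt[source] = True
    moves = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for _ in range(n_firelets):
        i, j = source
        for _ in range(max_steps):
            di, dj = moves[rng.integers(4)]
            i = min(max(i + di, 0), fuel.shape[0] - 1)
            j = min(max(j + dj, 0), fuel.shape[1] - 1)
            if not fuel[i, j]:
                break                  # firelet dies off fuel
            if not burnt[i, j]:
                burnt[i, j] = True     # ignition: this firelet is done
                break
    return burnt
```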
The model was calibrated against an experimental fire conducted in 1986 that reached 425 ha and was mapped using infrared remote sensing apparatus every 10 minutes. Environmental conditions from the experimental fire were held constant in the model and the behaviour criteria of the cellular automata adjusted, based on the spatial pattern of the experimental burn, its temperature structure and temporal patten. Comparison of 100 fire simulations over the duration of the experimental fire found that pixels that did actually burn were predicted to burn about 80% of the time. Clarke et al. naively state that fires burning across the landscape are fractal due to the self-similarity of the fire perimeter but this ignores the fact that on a landscape scale, the fire follows the fuel and it is the distribution of the fuel that may well be fractal, not the fire as such.
A similar approach was taken by Achtemeier (2003) in which a \"rabbit\" acts analogously to a fire, following a set of simple rules dictating behaviour, e.g. rabbits eat food, rabbits jump, and rabbits reproduce. A hierarchy of rules deal with rabbit death, terrain, weather, hazards. An attempt to incorporate rules regarding atmospheric feedbacks is also included. A similar dendritic pattern of burning to that of Clarke et al. (1994) results when conditions for spread are tenuous. Strong winds results in a parabolic headfire shape.
### Percolation and fractals
Percolation is a chemical and materials science process of transport of fluids through porous materials. In mathematics it is a theory that is concerned with transport through a randomly distributed media. If the medium is a regular lattice, then percolation theory can be considered the general case of isotropic CA. Unlike CA, percolation can occur on the boundaries of the lattice cells (bond percolation), as well as the cells themselves (site percolation). Percolation theory has been used for the study of numerous physical phenomena, from CO\\({}_{2}\\) gas bubbles in ice to electrical conductivity. In addition to the CA models that have been applied to wildland fire behaviour described above, several workers have investigated the application of percolation itself to wildland fire behaviour.
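Reusing burns_across() from the CA sketch above gives a direct numerical illustration of site percolation: the probability that fire spans the lattice rises sharply near the square-lattice site-percolation threshold (approximately 0.593 for von Neumann neighbours on an infinite lattice; finite lattices smear the transition):

```python
import numpy as np

densities = np.linspace(0.45, 0.75, 13)
spanning = [np.mean([burns_across(d, size=100, seed=s)
                     for s in range(20)])
            for d in densities]
print(dict(zip(np.round(densities, 2), spanning)))
```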
Beer (1990b); Beer and Enting (1990) investigated the application of isotropic percolation theory to fire spread (without wind) by comparing predictions against a series of laboratory experiments utilising matches (with ignitable heads) placed randomly in a two-dimensional lattice. The intent was to simulate the original percolation work of Albinet et al. (1986) using von Neumann neighbours. They found that while the theory yielded qualitative information about fire spread (e.g. that as the cell occupation density increased toward the critical density, the time of active spread also increased) it was unable to reproduce quantitatively the laboratory results. Effects such as radiant heating from burning clusters of matches and convective narrowing of plumes have no analogue in site/bond percolation where only near-neighbour interactions are involved. They concluded that such models are unlikely to accurately model actual wildfires and that models based on a two-dimensional grid with nearest neighbour ignition rules are too naive.
Nahmias et al. (2000) conducted similar experimental work but on a much larger scale, investigating the critical threshold of fire spread (both with and without wind) across two-dimensional lattices. Utilising both scaled laboratory experiments and field experiments in which the fuel had been manipulated to replicate the combustible/noncombustible distribution of lattice cells in percolation, Nahmias et al. found critical behaviour in the spread of fire across the lattice. In the absence of wind, they found that the value of the critical threshold to be the same as that of percolation theory when first and second neighbours are considered. In the presence of wind, however, they observed interactions to occur far beyond second nearest neighbours which were impossible to predict or control, particularly where clusters of burning cells were involved. They conclude that a simple directed percolation model is not adequate to describe propagation under these conditions.
Ricotta and Retzlaff (2000); Caldarelli et al. (2001) investigated percolation in wildfires by analysing satellite imagery of burned area (i.e. fire scars). Both found the final burned area of wildfires to be fractal5 (fractal dimension, \(D_{f}\simeq 1.9\)). Caldarelli et al. (2001) found the accessible perimeter to have a fractal dimension of \(\simeq 1.3\) and the scars to be denser at their centres. They then show that such fractal shapes can be generated using self-organised 'dynamical' percolation in which a time-dependent probability is used to determine ignition of neighbours. Earlier, McAlpine and Wotton (1993) determined the fractal dimension of 14 fires (a mix of data from unpublished fire reports and the literature) to be \(\simeq 1.15\). They then developed a method to convert perimeter predictions based on elliptical length-to-breadth ratios to a more accurate prediction using the fractal dimension and a given measurement length scale.
Footnote 5: A fractal is a geometric shape which is recursively self-similar (i.e. on all scales), defining an associated length scale such that its dimension is not an integer, i.e. fractional.

Favier (2004) developed an isotropic bond percolation model of fire spread in which two time-related parameters, medium ignitability (the time needed for the medium to ignite) and site combustibility (the time taken for the medium to burn once ignited), are the key controlling factors of fire spread. Favier determined the critical values of these parameters and found fractal patterns similar to those of Caldarelli et al. (2001).
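A standard way to estimate such fractal dimensions from a mapped fire scar or perimeter is box counting; a sketch (the binary perimeter mask and the box sizes are up to the user):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary mask: count the
    boxes of side s that contain any occupied pixel, N(s), and fit the
    slope of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope
```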
### Other methods
Other mathematical methods that have been used to model the spread of wildland fires include artificial neural networks (McCormick et al., 1999), in which a large number of processing nodes (neurons) store information (weightings) in the connections between the nodes. The weighting for each connection between the nodes is learnt by repeated comparison of input data to output data. Weightings can be non-linear, but the approach necessarily needs a large dataset on which to learn and assumes that the dataset is complete and comprehensive. A related field of endeavour is genetic algorithms (Karafyllidis, 1999, 2004), in which a pool of potential algorithms is mutated (either at random or directed) and tested against particular criteria (fitness) to determine the subset of algorithms that will form the next generation of mutating algorithms. The process continues until an algorithm that can carry out the specific task evolves. This may lead to local optimisations of the algorithm that fail to perform globally. Again, the process depends on a complete and comprehensive dataset on which to breed.
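For reference, the skeleton of such a genetic algorithm is only a few lines; this generic keep-the-fitter-half scheme is illustrative, not Karafyllidis's particular design:

```python
import numpy as np

def evolve(pop, fitness, n_gen=100, mut=0.05, seed=0):
    """Bare-bones genetic algorithm over a population of parameter
    vectors: keep the fitter half each generation and refill the
    population with mutated copies of the survivors."""
    rng = np.random.default_rng(seed)
    for _ in range(n_gen):
        order = np.argsort([-fitness(p) for p in pop])
        keep = [pop[i] for i in order[:len(pop) // 2]]
        pop = keep + [k + rng.normal(0.0, mut, k.shape) for k in keep]
    return max(pop, key=fitness)
```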
Catchpole et al. (1989) modelled the spread of fire through a heterogeneous fuel bed as a spatial Markov chain. A Markov chain is a discrete stochastic process in which the probability of the next state depends only on the current state and not on the sequence of states that preceded it. In this context, Catchpole et al. treated, in one dimension, a heterogeneous fuel as a series of discrete, internally homogeneous, fuel cells in which the variation of fuel type is a Markov chain with a given transition matrix. The rate of spread in each homogeneous fuel cell is constant, related only to the fuel type of that cell and to the spread rate of the cell previous. The time for a fire to travel through the _n_th cell of the chain is then a conditional probability based on the transition matrix. Taplin (1993) noted that spatial dependence of fuel types can greatly influence the variance of the rate of spread thus predicted and expanded the original model to include the effect of uncertainty in the spread rate of different fuel types.
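A sketch of the one-dimensional mechanism (simplified: here the spread rate depends only on the current cell's fuel type, dropping the dependence on the previous cell that Catchpole et al. retain):

```python
import numpy as np

def traversal_time(P, ros, cell_len, n_cells, seed=0, start=0):
    """Sample a chain of fuel types from transition matrix P (rows sum
    to 1) and accumulate the time for the fire to cross each
    internally homogeneous cell; ros[k] is the spread rate in fuel
    type k."""
    rng = np.random.default_rng(seed)
    state, t = start, 0.0
    for _ in range(n_cells):
        t += cell_len / ros[state]
        state = rng.choice(len(P), p=P[state])
    return t
```

Repeated sampling of traversal_time gives the distribution, and hence the variance, of the predicted rate of spread through the heterogeneous bed.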
The 'forest-fire' model of self-organised criticality (discussed above) has led to a variety of methods to investigate the behaviour of such models. These include renormalisation group (Loreto et al., 1995), bifurcation analysis (Dercole and Maggi, 2005), and small-world networks (Graham and Matthai, 2003).
## 4 Discussion
The field of computer simulation of fire spread is almost as old as the field of fire spread modelling itself and certainly as old as the field of computer simulation. While the technology of computing has advanced considerably in those years, the methods and approaches of simulating the spread of wildland fire across the landscape have not changed significantly since the early days of Kourtz and O'Regan (1971) on a CDC6400 computer. What has changed significantly has been the access to geographic data and the level of complexity that can be undertaken computationally. The two areas of research covered in this paper, computer simulation of fire spread and the application of mathematical analogues to fire spread modelling, are very closely related; so much so that key methods can be found in both approaches (e.g. raster modelling and cellular automata (percolation)).
The discussion of simulation techniques concentrated on the various methods of converting existing point (or one dimensional) models of rate of forward spread of a fire to two dimensions across a landscape. The most widely used method is that of Huygens' wavelet principle, which has been used in both vector (line) and raster (cell) simulation models. The critical aspect of this method is the choice of a template shape for the spawned wavelet (or firelet). The most common is that of the simple ellipse but Richards (1995) showed that other shapes are also applicable.
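A sketch of one propagation step with the simple template ellipse, assuming the ignition node sits at the rear focus so that the head-fire advance over \(\Delta t\) equals \(a+c\) (cf. Figures 1 and 2); extraction of the outer envelope of the resulting point cloud is left out:

```python
import numpy as np

def huygens_step(perimeter, ros, lb, wind_dir, dt, n_pts=36):
    """Grow a template ellipse (semi-axes a, b; focal offset c) from
    every perimeter node. a + c follows from the predicted head-fire
    rate of spread, b from a and the length:breadth ratio lb (> 1)."""
    fwd = ros * dt                                  # head advance, a + c
    a = fwd / (1.0 + np.sqrt(1.0 - 1.0 / lb**2))    # since c = a*sqrt(1 - 1/lb^2)
    b = a / lb
    c = fwd - a
    th = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    ex, ey = c + a * np.cos(th), b * np.sin(th)     # ellipse about the focus
    cd, sd = np.cos(wind_dir), np.sin(wind_dir)
    pts = [np.c_[x + ex * cd - ey * sd, y + ex * sd + ey * cd]
           for x, y in perimeter]
    return np.vstack(pts)   # new perimeter = outer hull of this cloud
```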
French (1992) found that vector-based simulations produce a much more realistic fire perimeter, particularly in off-axis spread, than raster-based simulations. However, raster-based simulations are more proficient at dealing with heterogeneous fuels than vector-based models. Historically, the requirements of raster-based models to have raster geographic data at a high enough resolution to obtain meaningful simulations has meant that vector-based models have been favoured, but the reducing cost of computer storage has seen a swing in favour of raster-based models. Increasingly, the choice of type of fire spread model is driven by the geographic information system (GIS) platform in which the geographic data resides, making the decision between the two moot.
Alternative fire propagation methods such as Clark et al. (2004) are restricted due to the specificity of the model in which the method is embedded. The coupled fire-atmosphere model is unique in the field of fire spread simulation in that it links a fully formed 3D mesoscale meteorological model of the atmosphere with a 1D fire spread model. In order to make the simulation work, the authors had to devise methods of propagating the fire perimeter at a resolution much smaller than the smallest resolvable area of the coupled model. The result is a method specific to that model and one which simply replaces the broad generalised approach of the template ellipse with a much more fundamental (but not necessarily more correct) variable spread direction around the perimeter.
The broad research area of mathematics has yielded many mathematical functions that could be seen as possible analogues of the spread of fire across the landscape. The most prevalent of these is the cellular automaton model (or the much more general percolation theory). These models are well suited to spatial propagation of an entity across a medium and thus have found application in modelling epidemic disease spread, crowd motion, traffic and fire spread across the landscape. The approaches taken in such modelling range from attempting to model the thermodynamic balance of energy in combusting cells to treating the fire as a contagion with no inherent fire behaviour whatsoever apart from a moving perimeter. The latter models have found great use in the exploration of critical behaviour and self-organisation.
Other approaches, such as reaction-diffusion models, have had a more physical basis to the modelling of fire spread, but their application to fire assumes that fire behaviour is essentially a spatially continuous process that does not include any discontinuous processes such as spotting. Models based on this mathematical conceit generally have to have specific components fitted to the original model to incorporate fundamental combustion processes such as spotting, non-local radiation, convection, etc.
While the mathematical analogue models discussed here may appear to have little in common with real fire behaviour, they do offer a different perspective for investigating wildland fire behaviour, and in many cases are more computationally feasible in better than real time than many physical or quasi-physical models. Divorced as they are from the real world, approaches such as percolation theory and cellular automata provide a reasonable platform for representing the heterogeneity of fuels across the landscape. This is confirmed by the fact that many GIS platforms take such a raster view of the world. However, simulating the spread of a fire across this (possibly 3D) surface requires the development of rules of propagation that incorporate both local (traditional CA) rules and larger-scale global rules in order to replicate the physical local and non-local processes involved in fire spread. The foremost of these is the importance of the convection column above a fire. The perimeter of a fire, although it may seem to be a loosely linked collection of independently burning segments of fuel, actually behaves as a continuous process in which the behaviour of neighbouring fuel elements does affect the behaviour of any single fuel element. Other non-local interactions include spotting and convective and radiative heating of unburnt fuels.
Regardless of the method of fire propagation simulation that is used, the underlying fire spread model that determines the behaviour of the simulation is the critical element. The preceding two papers in this series discussed the various methods, from purely physical to quasi-physical to quasi-empirical and purely empirical, that have been developed since 1990. It is the veracity, verification and validation of these models that will dictate the quality of any fire spread simulation technique that is used to implement them. However, the performance of even the most accurate fire spread model will be limited, when applied in a simulation, by the quality of the input data required to run it. The greater the precision of a fire spread model, the greater the precision of the geographical, topographical, meteorological and fuel data needed to achieve that precision: one does not want a model's prediction to fail simply because there was a tree beside a road that was not present in the input data! The need for greater and greater precision in fire spread models should be mitigated to a certain extent by the requirements of the end purpose and quantification of the range of errors to be expected in any prediction.
As in the original attempts at 'simulation' using a single estimate of forward spread rate and a wall map to plot likely head fire locations, in many instances highly precise predictions of fire spread are just not warranted, considering the cost of the prediction and access to suitable high quality data. For the most part, 'ball-park' predictions of fire spread and behaviour (not unlike those found in current operational prediction systems) are more than enough for most purposes. An understanding of the range of errors, not only in the input data but also in the prediction itself, will perhaps be a more efficient and effective use of resources.
In the end, a point will be reached at which the computational and data cost of further increases in prediction accuracy and precision will outweigh any resulting gains in cost-effective suppression.
## Acknowledgements
I would like to acknowledge Ensis Bushfire Research and the CSIRO Centre for Complex Systems Science for supporting this project; Jim Gould and Rowena Ball for comments on the draft manuscript; and the members of Ensis Bushfire Research who ably assisted in the refereeing process, namely Miguel Cruz and Ian Knight.
## References
* Achtemeier (2003) Achtemeier, G. L. (2003). \"Rabbit Rules\"-An application of Stephen Wolfram's \"New Kind of Science\" to fire spread modelling. In _Fifth Symposium on Fire and Forest Meteorology, 16-20 November 2003, Orlando, Florida_. American Meteorological Society.
* Albinet et al. (1986) Albinet, G., Searby, G., and Stauffer, D. (1986). Fire propagation in a 2-d random medium. _Le Journal de Physique_, 47:1-7.
* Albini (1979) Albini, F. (1979). Spot fire distance from burning trees-a predictive model. General Technical Report INT-56, USDA Forest Service, Intermountain Forest and Range Experimental Station, Odgen UT.
* Albini et al. (1995) Albini, F., Brown, J., Reinhardt, E., and Ottmar, R. (1995). Calibration of a large fuel burnout model. _International Journal of Wildland Fire_, 5:173-192.
* Albini and Reinhardt (1995) Albini, F. and Reinhardt, E. (1995). Modeling ignition and burning rate of large woody natural fuels. _International Journal of Wildland Fire_, 5:81-91.
* Alexander (1985) Alexander, M. (1985). Estimating the length to breadth ratio of elliptical forest fire patterns. In _Proceedings of the Eighth Conference on Forest and Fire Meteorology_, pages 287-304. Society of American Forecsters.
* Anderson et al. (1982) Anderson, D., Catchpole, E., de Mestre, N., and Parkes, T. (1982). Modelling the spread of grass fires. _Journal of Australian Mathematics Society, Series B_, 23:451-466.
* Andrews (1986) Andrews, P. (1986). BEHAVE: Fire behavior prediction and fuel modeling system - burn subsystem, part 1. Technical Report General Technical Report INT-194, 130 pp., USDA Forest Service, Intermountain Forest and Range Experiment Station, Ogden, UT.
* Asensio and Ferragut (2002) Asensio, M. and Ferragut, L. (2002). On a wildland fire model with radiation. _International Journal for Numerical Methods in Engineering_, 54(1):137-157.
* Asensio et al. (2005) Asensio, M., Ferragut, L., and Simon, J. (2005). A convection model for fire spread simulation. _Applied Mathematics Letters_, 18:673-677.
* Bak (1996) Bak, P. (1996). _How Nature Works: The Science of Self-organised Criticality_. Springer-Verlag Telos, New York, USA.
* Bak et al. (1990) Bak, P., Chen, K., and Tang, C. (1990). A forest-fire model and some thoughts on turbulence. _Physics Letters A_, 147(5-6):297-300.
* Bak et al. (1987) Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organised criticality: An explanation of 1/f noise. _Physical Review Letters_, 59(4):381-384.
* Ball and Guertin (1992) Ball, G. and Guertin, D. (1992). Improved fire growth modeling. _International Journal of Wildland Fire_, 2(2):47-54.
* Beer (1990a) Beer, T. (1990a). The australian national bushfire model project. _Mathematical and Computer Modelling_, 13(12):49-56.
* Beer (1990b) Beer, T. (1990b). Percolation theory and fire spread. _Combustion Science and Technology_, 72:297-304.
* Beer and Enting (1990) Beer, T. and Enting, I. (1990). Fire spread and percolation modelling. _Mathematical and Computer Modelling_, 13(11):77-96.
* Berjak and Hearne (2002) Berjak, S. and Hearne, J. (2002). An improved cellular automaton model for simulating fire in a spatially heterogeneous savanna system. _Ecological Modelling_, 148(2):133-151.
* Caldarelli et al. (2001) Caldarelli, G., Frondoni, R., Gabrielli, A., Montuori, M., Retzlaff, R., and Ricotta, C. (2001). Percolation in real wildfires. _Europhysics Letters_, 56(4):510-516.
* Catchpole et al. (1989) Catchpole, E., Hatton, T., and Catchpole, W. (1989). Fire spread through nonhomogeneous fuel modelled as a Markov process. _Ecological Modelling_, 48:101-112.
* Chen et al. (1990) Chen, K., Bak, P., and Jensen, M. (1990). A deterministic critical forest fire model. _Physics Letters A_, 149(4):4.
* Cheney (1981) Cheney, N. (1981). Fire behaviour. In Gill, A., Groves, R., and Noble, I., editors, _Fire and the Australian Biota_, chapter 5, pages 151-175. Australian Academy of Science, Canberra.
* Cheney et al. (1993) Cheney, N., Gould, J., and Catchpole, W. (1993). The influence of fuel, weather and fire shape variables on fire-spread in grasslands. _International Journal of Wildland Fire_, 3(1):31-44.
* Cheney et al. (1998) Cheney, N., Gould, J., and Catchpole, W. (1998). Prediction of fire spread in grasslands. _International Journal of Wildland Fire_, 8(1):1-13.
* Clar et al. (1994) Clar, S., Drossel, B., and Schwabl, F. (1994). Scaling laws and simulation results for the self-organized critical forest-fire model. _Physical Review E_, 50:1009-1018.
* Clark et al. (2004) Clark, T., Coen, J., and Latham, D. (2004). Description of a coupled atmosphere-fire model. _International Journal of Wildland Fire_, 13(1):49-63.
* Clark et al. (1998) Clark, T., Coen, J., Radke, L., Reeder, M., and Packham, D. (1998). Coupled atmosphere-fire dynamics. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 67-82.
* Clark et al. (1996a) Clark, T. L., Jenkins, M. A., Coen, J., and Packham, D. (1996a). A coupled atmosphere-fire model: Convective feedback on fire-line dynamics. _Journal of Applied Meteorology_, 35(6):875-901.
* Clark et al. (1996b) Clark, T. L., Jenkins, M. A., Coen, J. L., and Packham, D. R. (1996b). A coupled atmosphere-fire model: Role of the convective Froude number and dynamic fingering at the fireline. _International Journal of Wildland Fire_, 6(4):177-190.
* Clarke et al. (1994) Clarke, K. C., Brass, J. A., and Riggan, P. J. (1994). A cellular automaton model of wildfire propagation and extinction. _Photogrammetric Engineering and Remote Sensing_, 60(11):1355-1367.
* Coen (2005) Coen, J. L. (2005). Simulation of the Big Elk Fire using coupled atmosphere/fire modeling. _International Journal of Wildland Fire_, 14(1):49-59.
* Coen and Clark (2000) Coen, J. L. and Clark, T. L. (2000). Coupled atmosphere-fire model dynamics of a fireline crossing a hill. In _Third Symposium on Fire and Forest Meteorology, 9-14 January 2000, Long Beach, California._, pages 7-10.
* Coleman and Sullivan (1996) Coleman, J. and Sullivan, A. (1996). A real-time computer application for the prediction of fire spread across the australian landscape. _Simulation_, 67(4):230-240.
* CWFGM Steering Committee (2004) CWFGM Steering Committee (2004). _Prometheus User Manual v.3.0.1_. Canadian Forest Service.
* Dercole and Maggi (2005) Dercole, F. and Maggi, S. (2005). Detection and continuation of a border collision bifurcation in a forest fire model. _Applied Mathematics and Computation_, 168:623-635.
* Drossel (1996) Drossel, B. (1996). Self-organized criticality and synchronisation in a forest-fire model. _Physical Review Letters_, 76(6):936-939.
* Drossel and Schwabl (1992) Drossel, B. and Schwabl, F. (1992). Self-organized critical forest-fire model. _Physical Review Letters_, 69:1629-1632.
* Drossel and Schwabl (1993) Drossel, B. and Schwabl, F. (1993). Forest-fire model with immune trees. _Physica A_, 199:183-197.
* Duarte (1997) Duarte, J. (1997). Bushfire automata and their phase transitions. _International Journal of Model Physics C_, 8(2):171-189.
* Dunn and Milne (2004) Dunn, A. and Milne, G. (2004). Modelling wildfire dynamics via interacting automata. In Sloot et al. (2004), pages 395-404.
* Eklund (2001) Eklund, P. (2001). A distributed spatial architecture for bush fire simulation. _International Journal of Geographical Information Science_, 15(4):363-378.
* Emmons (1963) Emmons, H. (1963). Fire in the forest. _Fire Research Abstracts and Reviews_, 5(3):163-178.
* Emmons (1966) Emmons, H. (1966). Fundamental problems of the free burning fire. _Fire Research Abstracts and Reviews_, 8(1):1-17.
* Enting (1977) Enting, I. (1977). Crystal growth models and ising models: disorder points. _Journal of Physics C: Solid State Physics_, 10:1379-1388.
* Favier (2004) Favier, C. (2004). Percolation model of fire dynamic. _Physics Letters A_, 330(5):396-401.
* Finney (1994) Finney, M. (1994). Modeling the spread and behaviour of prescribed natural fires. In _Proceedings of the 12th Conference on Fire and Forest Meteorology, October 26-28 1993, Jekyll Island, Georgia_, pages 138-143.
* Finney (1998) Finney, M. (1998). FARSITE: Fire area simulator-model development and evaluation. Technical Report Research Paper RMRS-RP-4, USDA Forest Service.
* Forestry Canada Fire Danger Group (1992) Forestry Canada Fire Danger Group (1992). Development and structure of the Canadian Forest Fire Behavior Prediction System. Information Report ST-X-3, Forestry Canada Science and Sustainable Development Directorate, Ottawa, ON.
* Fransden and Andrews (1979) Fransden, W. and Andrews, P. (1979). Fire behaviour in non-uniform fuels. Research paper int-232, USDA Forest Service, Intermountain Forest and Range Experiment Station.
* French (1992) French, I. (1992). Visualisation techniques for the computer simulation of bushfires in two dimensions. Master's thesis, Department of Computer Science, University College, University of New South Wales, Australian Defence Force Academy.
* French et al. (1990) French, I., Anderson, D., and Catchpole, E. (1990). Graphical simulation of bushfire spread. _Mathematical and Computer Modelling_, 13(12):67-71.
* Gardner (1970) Gardner, M. (1970). The fantastic combinations of John Conway's new solitary game of \"Life\". _Scientific American_, 222:120-123.
* Graham and Matthai (2003) Graham, I. and Matthai, C. (2003). Investigation of the forest-fire model on a small-world network. _Physical Review E_, 68(3):036109.
* Grassberger (2002) Grassberger, P. (2002). Critical behaviour of the drossel-schwabl forest fire model. _New Journal of Physics_, 4:17.1-17.15.
* Green (1983) Green, D. (1983). Shapes of simulated fires in discrete fuels. _Ecological Modelling_, 20(1):21-32.
* Green et al. (1983) Green, D., Gill, A., and Noble, I. (1983). Fire shapes and the adequacy of fire-spread models. _Ecological Modelling_, 20(1):33-45.
* Green et al. (1990) Green, D., Tridgell, A., and Gill, A. (1990). Interactive simulation of bushfires in heterogeneous fuels. _Mathematical and Computer Modelling_, 13(12):57-66.
* Grishin (1997) Grishin, A. (1997). _Mathematical modeling of forest fires and new methods of fighting them_. Publishing House of Tomsk State University, Tomsk, Russia, english translation edition. Translated from Russian by Marek Czuma, L Chikina and L Smokotina.
* Guariso and Baracani (2002) Guariso, G. and Baracani (2002). A simulation software of forest fires based on two-level cellular automata. In Viegas, D., editor, _Proceedings of the IV International Conference on Forest Fire Research 2002 Wildland Fire Safety Summit, Luso, Portugal, 18-23 November 2002_.
* Gurer and Georgopoulos (1998) Gurer, K. and Georgopoulos, P. G. (1998). Numerical modeling of forest fires within a 3-d meteorological/dispersion model. In _Second Symposium on Fire and Forest Meteorology, 11-16 January 1998, Pheonix, Arizona_, pages 144-148.
* Hargrove et al. (2000) Hargrove, W., Gardner, R., Turner, M., Romme, W., and Despain, D. (2000). Simulating fire patterns in heterogeneous landscapes. _Ecological Modelling_, 135(2-3):243-263.
* Hogeweg (1988) Hogeweg, P. (1988). Cellular automata as a paradigm for ecological modeling. _Applied Mathematics and Computation_, 27:81-100.
* Jenkins et al. (2001) Jenkins, M. A., Clark, T., and Coen, J. (2001). Coupling atmospheric and fire models. In Johnson, E. and Miyanishi, K., editors, _Forest Fires: Behaviour and Ecological Effects_, chapter 5, pages 257-302. Academic Press, San Diego, CA, 1st edition.
* Kalabokidis et al. (1991) Kalabokidis, K., Hay, C., and Hussin, Y. (1991). Spatially resolved fire growth simulation. In _Proceedings of the 11th Conference on Fire and Forest Meteorology, April 16-19 1991, Missoula, MT_, pages 188-195.
* Karafyllidis (1999) Karafyllidis, I. (1999). Acceleration of cellular automata algorithms using genetic algorithms. _Advances in Engineering Software_, 30(6):419-437.
* Karafyllidis (2004) Karafyllidis, I. (2004). Design of a dedicated parallel processor for the prediction of forest fire spreading using cellular automata and genetic algorithms. _Engineering Applications of Artificial Intelligence_, 17(1):19-36.
* Karafyllidis and Thanailakis (1997) Karafyllidis, I. and Thanailakis, A. (1997). A model for predicting forest fire spreading using cellular automata. _Ecological Modelling_, 99(1):87-97.
* Karplus (1977) Karplus, W. J. (1977). The spectrum of mathematical modeling and systems simulation. _Mathematics and Computers in Simulation_, 19(1):3-10.
* King (1971) King, N. (1971). Simulation of the rate of spread of an aerial prescribed burn. _Australian Forest Research_, 6(2):1-10.
* Knight and Coleman (1993) Knight, I. and Coleman, J. (1993). A fire perimeter expansion algorithm based on huygens' wavelet propagation. _International Journal of Widland Fire_, 3(2):73-84.
* Kourtz et al. (1977) Kourtz, P., Nozaki, S., and O'Regan, W. (1977). Forest fires in a computer : A model to predict the perimeter location of a forest fire. Technical Report Information Report FF-X-65, Fisheries and Environment Canada.
* Kourtz and O'Regan (1971) Kourtz, P. and O'Regan, W. (1971). A model for a small forest fire to simulate burned and burning areas for use in a detection model. _Forest Science_, 17(1):163-169.
* Langton (1990) Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. _Physica D: Nonlinear Phenomena_, 42(1-3):12-37.
* Lee (1972) Lee, S. (1972). Fire research. _Applied Mechanical Reviews_, 25(3):503-509.
* Li and Magill (2000) Li, X. and Magill, W. (2000). Modelling fire spread under environmental influence using a cellular automaton approach. _Complexity International_, 8:(14 pages).
* Li and Magill (2003) Li, X. and Magill, W. (2003). Critical density in a fire spread model with varied environmental conditions. _International Journal of Computational Intelligence and Applications_, 3(2):145-155.
* Lopes et al. (1998) Lopes, A., Cruz, M., and Viegas, D. (1998). Firestation-an integrated system for the simulation of wind flow and fire spread over complex topography. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 741-754.
* Lopes et al. (2002) Lopes, A., Cruz, M., and Viegas, D. (2002). Firestation-an integrated software system for the numerical simulation of fire spread on complex topography. _Environmental Modelling & Software_, 17(3):269-285.
* Loreto et al. (1995) Loreto, V., Pietronero, L., Vespignani, A., and Zapperi, S. (1995). Renormalisation group approach to the critical behaviour of the forest-fire model. _Physical Review Letters_, 75(3):465-468.
* Malamud et al. (1998) Malamud, B., Morein, G., and Turcotte, D. (1998). Forest fires: An example of self-organised critical behaviour. _Science_, 281:1840-1842.
* Malamud and Turcotte (1999) Malamud, B. and Turcotte, D. (1999). Self-organised criticality applied to natural hazards. _Natural Hazards_, 20:93-116.
* Margerit and Sero-Guillaume (1998) Margerit, J. and Sero-Guillaume, O. (1998). Richards' model, Hamilton-Jacobi equations and temperature field equations of forest fires. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 281-294.
* McAlpine and Wotton (1993) McAlpine, R. and Wotton, B. (1993). The use of fractal dimension to improve wildland fire perimeter predictions. _Canadian Journal of Forest Research_, 23:1073-1077.
* McArthur (1966) McArthur, A. (1966). Weather and grassland fire behaviour. Technical Report Leaflet 100, Commonwealth Forestry and Timber Bureau, Canberra.
* McArthur (1967) McArthur, A. (1967). Fire behaviour in eucalypt forests. Technical Report Leaflet 107, Commonwealth Forestry and Timber Bureau, Canberra.
* McCormick et al. (1999) McCormick, R. J., Brandner, T. A., and Allen, T. F. H. (1999). Toward a theory of meso-scale wildfire modeling - a complex systems approach using artificial neural networks. In _Proceedings of the Joint Fire Science Conference and Workshop, June 15-17, 1999, Boise, ID_. http://jfsp.nifc.gov/conferenceproc/.
* Mendez and Liebot (1997) Mendez, V. and Liebot, J. E. (1997). Hyperbolic reaction-diffusion equations for a forest fire model. _Physical Review E_, 56(6):6557-6563.
* Morvan et al. (2004) Morvan, D., Larini, M., Dupuy, J., Fernandes, P., Miranda, A., Andre, J., Sero-Guillaume, O., Calogine, D., and Cuinas, P. (2004). Euifrelab: Behaviour modelling of wildland fires: a state of the art. Deliverable D-03-01, EUFIRELAB. 33 p.
* Mraz et al. (1999) Mraz, M., Zimic, N., and Virant, J. (1999). Intelligent bush fire spread prediction using fuzzy cellular automata. _Journal of Intelligent and Fuzzy Systems_, 7(2):203-207.
* Muzy et al. (2005a) Muzy, A., Innocenti, E., Aiello, A., Santucci, J., Santoni, P., and Hill, D. (2005a). Modelling and simulation of ecological propagation processes: application to fire spread. _Environmental Modelling and Software_, 20(7):827-842.
* Muzy et al. (2002) Muzy, A., Innocenti, E., Aiello, A., Santucci, J.-F., and Wainer, G. (2002). Cell-devs quantization techniques in a fire spreading application. In Ycesan, E., Chen, C.-H., Snowdon, J., and Charnes, J., editors, _Proceedings of the 2002 Winter Simulation Conference_.
* Muzy et al. (2005b) Muzy, A., Innocenti, E., Aiello, A., Santucci, J.-F., and Wainer, G. (2005b). Specification of discrete event models for fire spreading. _Simulation_, 81(2):103-117.
* Muzy et al. (2003) Muzy, A., Innocenti, E., Santucci, J. F., and Hill, D. R. C. (2003). Optimization of cell spaces simulation for the modeling of fire spreading. In _Annual Simulation Symposium_, pages 289-296.
* Nahmias et al. (2000) Nahmias, J., Tephany, H., Duarte, J., and Letaconnoux, S. (2000). Fire spreading experiments on heterogeneous fuel beds: applications of percolation theory. _Canadian Journal of Forest Research_, 30(8):1318-1328.
* Nelson (2000) Nelson, R. M., Jr. (2000). Prediction of diurnal change in 10-h fuel stick moisture content. _Canadian Journal of Forest Research_, 30(7):1071-1087.
* Noble et al. (1980) Noble, I., Bary, G., and Gill, A. (1980). Mcarthur's fire-danger meters expressed as equations. _Australian Journal of Ecology_, 5:201-203.
* Ntaimo et al. (2004) Ntaimo, L., Zeigler, B. P., Vasconcelos, M. J., and Khargharia, B. (2004). Forest fire spread and suppression in devs. _Simulation_, 80(10):479-500.
* Pastor et al. (2003) Pastor, E., Zarate, L., Planas, E., and Arnaldos, J. (2003). Mathematical models and calculation systems for the study of wildland fire behaviour. _Progress in Energy and Combustion Science_, 29(2):139-153.
* Pastor-Satorras and Vespignani (2000) Pastor-Satorras, R. and Vespignani, A. (2000). Corrections to the scaling in the forest-fire model. _Physical Review E_, 61:4854-4859.
* Peet (1965) Peet, G. (1965). A fire danger rating and controlled burning guide for the northern jarrah (euc. marginata sm.) forest of western australia. Technical Report Bulletin No 74, Forests Department, Perth, Western Australia.
* Perry (1998) Perry, G. (1998). Current approaches to modelling the spread of wildland fire: a review. _Progress in Physical Geography_, 22(2):222-245.
* Perry et al. (1999) Perry, G. L., Sparrow, A. D., and Owens, I. F. (1999). A gis-supported model for the simulation of the spatial structure of wildland fire, cass basin, new zealand. _Journal of Applied Ecology_, 36(4):502-502.
* Pielke et al. (1992) Pielke, R. A., Cotton, W. R., Walko, R. L., Tremback, C. J., Lyons, W. A., Grasso, L. D., Nicholls, M. E., Moran, M. D., Wesley, D. A., Lee, T. J., and Copeland, J. H. (1992). A comprehensive meteorological modeling system - RAMS. _Meteorology and Atmospheric Physics_, 49:69-91.
* Plourde et al. (1997) Plourde, F., Doan-Kim, S., Dumas, J., and Malet, J. (1997). A new model of wildland fire simulation. _Fire Safety Journal_, 29(4):283-299.
* Pruessner and Jensen (2002) Pruessner, G. and Jensen, H. J. (2002). Broken scaling in the forest-fire model. _Physical Review E_, 65(5):056707 (8 pages).
* Reed and McKelvey (2002) Reed, W. and McKelvey, K. (2002). Power-law behaviour and parametric models for the size-distribution of forest fires. _Ecological Modelling_, 150(3):239-254.
* Rhodes and Anderson (1998) Rhodes, C. and Anderson, R. (1998). Forest-fire as a model for the dynamics of disease epidemics. _Journal of The Franklin Institute_, 335B(2):199-211.
* Richards (1990) Richards, G. (1990). An elliptical growth model of forest fire fronts and its numerical solution. _International Journal for Numerical Methods in Engineering_, 30:1163-1179.
* Richards (1995) Richards, G. (1995). A general mathematical framework for modeling two-dimensional wildland fire spread. _International Journal of Wildland Fire_, 5:63-72.
* Richards and Bryce (1996) Richards, G. and Bryce, R. (1996). A computer algorithm for simulating the spread of wildland fire perimeters for heterogeneous fuel and meteorological conditions. _International Journal of Wildland Fire_, 5(2):73-79.
* Ricotta and Retzlaff (2000) Ricotta, C. and Retzlaff, R. (2000). Self-similar spatial clustering of wildland fires: the example of a large wildfire in Spain. _International Journal of Remote Sensing_, 21(10):2113-2118.
* Rothermel (1972) Rothermel, R. (1972). A mathematical model for predicting fire spread in wildland fuels. Research Paper INT-115, USDA Forest Service.
* Rothermel (1991) Rothermel, R. (1991). Predicting behavior and size of crown fires in the northern rocky mountains. Research Paper INT-438, USDA Forest Service.
* Schenk et al. (2000) Schenk, K., Drossel, B., Clar, S., and Schwabl, F. (2000). Finite-size effects in the self-organised critical forest-fire model. _European Physical Journal B_, 15:177-185.
* Schenk et al. (2002) Schenk, K., Drossel, B., and Schwabl, F. (2002). Self-organized critical forest-fire model on large scales. _Physical Review E_, 65(2):026135 (8 pages).
* Seron et al. (2005) Seron, F., Gutierrez, D., Magallon, J., Ferragut, L., and Asensio, M. (2005). The evolution of a wildland forest fire front. _The Visual Computer_, 21:152-169.
* Sloot et al. (2004) Sloot, P. M. A., Chopard, B., and Hoekstra, A. G., editors (2004). _Cellular Automata, 6th International Conference on Cellular Automata for Research and Industry, ACRI 2004, Amsterdam, The Netherlands, October 25-28, 2004, Proceedings_, volume 3305 of _Lecture Notes in Computer Science_. Springer.
* Speer et al. (2001) Speer, M., Leslie, L., Morison, R., Catchpole, W., Bradstock, R., and Bunker, R. (2001). Modelling fire weather and fire spread rates for two bushfires near sydney. _Australian Meteorological Magazine_, 50(3):241-246.
* Sullivan (2007a) Sullivan, A. (2007a). A review of wildland fire spread modelling, 1990-present, 1: Physical and quasi-physical models. arXiv:0706.3074v1[physics.geo-ph], 46 pp.
* Sullivan (2007b) Sullivan, A. (2007b). A review of wildland fire spread modelling, 1990-present, 2: Empirical and quasi-empirical models. arXiv:0706.4128v1[physics.geo-ph], 32 pp.
* Sullivan and Knight (2004) Sullivan, A. and Knight, I. (2004). A hybrid cellular automata/semi-physical model of fire growth. In _Proceedings of the 7th Asia-Pacific Conference on Complex Systems, 6-10 December 2004, Cairns_, pages 64-73.
* Taplin (1993) Taplin, R. (1993). Sources of variation for fire spread rate in non-homogeneous fuel. _Ecological Modelling_, 68:205-211.
* Trunfio (2004) Trunfio, G. A. (2004). Predicting wildfire spreading through a hexagonal cellular automata model. In Sloot et al. (2004), pages 385-394.
* Van Wagner (1969) Van Wagner, C. (1969). A simple fire-growth model. _The Forestry Chronicle_, 45(1):103-104.
* Van Wagner (1977) Van Wagner, C. (1977). Conditions for the start and spread of crown fire. _Canadian Journal of Forest Research_, 7(1):23-24.
* Vasconcelos and Guertin (1992) Vasconcelos, M. and Guertin, D. (1992). FIREMAP - simulation of fire growth with a geographic information system. _International Journal of Wildland Fire_, 2(2):87-96.
* Vasconcelos et al. (1990) Vasconcelos, M., Guertin, P., and Zwolinski, M. (1990). FIREMAP: Simulations of fire behaviour, a GIS-supported system. In _Krammes, J. S. (ed.), Effects of Fire Management of Southwestern Natural Resources, Proceedings of the Symposium, Nov. 15-17, 1988, Tucson, AZ. USDA Forest Service General Technical Report RM-GTR-191_, pages 217-221.
* Viegas (2002) Viegas, D. (2002). Fire line rotation as a mechanism for fire spread on a uniform slope. _International Journal of Wildland Fire_, 11(1):11-23.
* von Niessen and Blumen (1988) von Niessen, W. and Blumen, A. (1988). Dynamic simulation of forest fires. _Canadian Journal of Forest Research_, 18:805-812.
* Wallace (1993) Wallace, G. (1993). A numerical fire simulation-model. _International Journal of Wildland Fire_, 3(2):111-116.
* Watt et al. (1995) Watt, S., Roberts, A., and Weber, R. (1995). Dimensional reduction of a bushfire model. _Mathematical and Computer Modelling_, 21(9):79-83.
* Weber (1991) Weber, R. (1991). Modelling fire spread through fuel beds. _Progress in Energy Combustion Science_, 17(1):67-82.
* Williams (1982) Williams, F. (1982). Urban and wildland fire phenomenology. _Progress in Energy Combustion Science_, 8:317-354.
* Wolfram (1983) Wolfram, S. (1983). Statistical mechanics of cellular automata. _Reviews of Modern Physics_, 55:601-644.
* Wolfram (1986) Wolfram, S. (1986). _Theory and Application of Cellular Automata_. Advanced Series on Complex Systems-Volume 1. World Scientific, Singapore.
Figure 2: A schematic of the application of Huygens wavelet principle to fire perimeter propagation. In the simple case of homogeneous fuel, a uniform template ellipse, whose geometry is defined by the chosen fire spread model, length:breadth ratio model and the given period of propagation \\(\\Delta t\\), is applied to each node representing the current perimeter. The new perimeter at \\(t+\\Delta t\\) is defined by the external points of all new ellipses.
Figure 1: Ellipse geometry determined from the fire spread and length:breadth model. a+c is determined from the predicted rate of spread; b is determined from a and the length:breadth model.
arxiv-format/2312_14592v1.md | # Ultra-low power 10-bit 50-90 MSps SAR ADCs in 65 nm CMOS for multi-channel ASICs
Miroslaw Firlej
Tomasz Fiutowski
Marek Idzik
Jakub Moron
Krzysztof Swientek
## 1 Introduction
In modern and newly designed particle physics detection systems, there is a growing demand for detectors with ever-increasing speed, high granularity, and high channel density. The key part of such a detector is a dedicated multi-channel readout Application-Specific Integrated Circuit (ASIC), which has gained increasing functionality in recent years, slowly becoming a System on Chip (SoC). In particular, the speed of signal processing is increasing; each channel is required to measure the amplitude or time (or both) and convert the result to a digital form. As the number of bits of information increases, so does the demand for faster data transmission. With the above requirements and increasing channel density, ultra-low power consumption per channel is a must.
A fast, ultra-low power, area-efficient Analog-to-Digital Converter (ADC) is one of the indispensable components of a SoC-type readout ASIC. An ultra-low power ADC with a sampling rate of 40 MSps or more, medium-high resolution, and small pitch is required for multi-channel readout ASICs in modern and future LHC or other experiments. Recent developments of such complex readout ASICs are a 128-channel SALT ASIC for the LHCb Upstream Tracker, which contains an analogue front-end and a 6-bit 40 MSps ADC in each channel [1], or a 72-channel HGCROC ASIC for the CMS High Granularity Calorimeter, which contains an analogue front-end, a 10-bit 40 MSps ADC and a precision TDC in each channel [2]. In fact, a fast 10-bit ADC is one of the most requested and used blocks in the readout of various detector systems [2; 3; 4; 5; 6]. These and other readout ASICs for LHC and other experiments have been developed in the 130 nm CMOS process, which has been studied in the past and selected for use in High Energy Physics (HEP) experiments many years ago due to its very good performance and good radiation tolerance [7].
For medium- and long-term future experiments, newer CMOS processes will be used, not only because of the higher speed, density, and lower power, but also because of the limited availability in time of current technologies. One such technology that has already been verified, also in terms of radiation hardness, is the 65 nm CMOS process [8]. Several developments of complex readout ASICs in CMOS 65 nm have already started [9, 10, 11] and this process will be dominant for the next 5-10 years until a newer one, probably CMOS 28 nm, takes its place. For the highest density ASICs, e.g. pixel detectors, the transition to 28 nm CMOS will be much faster.
The aim of this work is to develop a fast, ultra-low power ADC in CMOS 65 nm, ready for integration into multi-channel readout ASICs for future experiments. The main goals for the ADC are: a sampling rate of at least 40 MSps (but possibly significantly higher), 10-bit resolution, ultra-low power consumption of around 500 \\(\\upmu\\)W at 40 MSps, a small pitch per channel below 100 \\(\\upmu\\)m, and easy implementation in a multi-channel readout ASIC.
## 2 ADC design
The demand for ultra-low power ADC naturally leads to a Successive Approximation Register (SAR) architecture with a capacitive Digital-to-Analog Converter (DAC), shown in the block diagram in figure 1. A fully differential ADC architecture was chosen, comprising a pair of bootstrapped switches, a differential capacitive DAC, a dynamic comparator, and asynchronous control logic [12]. Moreover, the control logic was implemented as dynamic to increase the speed of the ADC and, at the same time, to reduce power consumption. Due to technological limitations in minimum capacitance available in the Process Design Kit (PDK), a split DAC architecture with split capacitor \\(C_{s}\\) was used to reduce the DAC input capacitance. As a result, the effective unit capacitance is much lower than the minimum physical one used in the DAC design. For additional power savings, all blocks were designed to dissipate power only during conversion, eliminating all static power. Asynchronous logic was used to increase speed and eliminate the fast bit-cycling clock distribution, greatly simplifying the design of a multi-channel ASIC and significantly improving the power budget.
Figure 1: Block diagram of a fully differential 10-bit SAR ADC with split DAC architecture.

To explore and optimise ADC performance, several versions of SAR ADC have been developed. The ADCs differ in the DAC switching scheme, the implementation of the DAC capacitors, and the power dissipated by the SAR logic. All ADC versions use the same bootstrapped sampling switch [13; 14] and dynamic comparator [15]. The designed comparator, shown in figure 2, consists of two gain stages and an output latch. To symmetrise the circuit and minimise the effect of parasitics, a decision stage (generating the _Valid_ signal) was added to the comparator core.
### Switching scheme and DAC
Numerous DAC switching schemes have been proposed for SAR ADC to achieve the highest power efficiency [16]. In this work, two very efficient schemes have been used. The first one, the Merged Capacitor Switching (MCS) scheme [17; 18] shown in figure 3 (left), uses three reference voltages Vref+, Vref-, Vcm, but the accuracy of the DAC does not depend on the accuracy of the reference common voltage Vcm. Another great advantage of this scheme is that the DAC output common voltage is constant and equal to Vcm during conversion, which makes comparator operation easier. In the sampling phase, all DAC switches are connected to the Vcm voltage (as shown in figure 3 (left)). After the first decision of the comparator, the upper/lower MSB switch (connected to 32 C) of DAC changes to Vref+/Vref- or Vref-/Vref+ depending on the result of the comparison. Such changes occur after each bit has been converted, so that at the end of the conversion there is no switch connected to Vcm.
Figure 2: Schematic diagram of the dynamic comparator.

The second switching scheme shown in figure 3 (right), called HL (HighLow) for brevity, is a modification of the scheme proposed by Sanyal and Sun [19]. The difference from this scheme is the use of Vref+, Vref- voltage references for all bits down to the least significant one, and omitting an additional common-mode reference Vcm. The consequence of not using Vcm is an additional capacitive branch of DAC, and thus doubling of the DAC capacitance. A disadvantage of this scheme is that the DAC output common mode voltage is not constant during conversion. In the sampling phase, all DAC switches, except the MSB bit, are connected to the Vref+ voltage, while the MSB switch is connected to Vref- (as shown in figure 3 (right)). After the first decision of the comparator, the upper or lower MSB switch (connected to 32 C) of DAC changes to Vref+ depending on the result of the comparison. After the conversion of each subsequent bit, the upper or lower DAC switch of the corresponding bit changes to Vref-, depending on the result of the comparison.
There are two types of capacitors available in the 65 nm CMOS process, MIM and MOM, so each version of DAC has been implemented using both of them. The MIM capacitor has a higher minimum value than the MOM, but its advantage is a lower parasitic capacitance of the top plate. The MIM DAC was designed with a 6-bit sub-DAC for the most significant bits and a 3-bit sub-DAC for the least significant bits, as shown in figure 4 (left). The MOM DAC, for which a smaller minimum capacitance was available, was designed with a different split, using a 7-bit sub-DAC for the most significant bits (using 64 C in the MSB branch) and a 2-bit sub-DAC for the least significant bits (using 2 C in the MSB branch). The comparison of both DAC splits is shown in figure 4; for the sake of clarity, only half of the DAC is presented.
### Asynchronous SAR logic
Figure 3: Block diagram of differential 9-bit DACs with the MCS switching scheme (left) and the HL switching scheme (right).

Figure 4: Comparison of DAC split used for MIM (left) and MOM (right) capacitors; for the sake of clarity only half of the DAC in MCS is presented (in HL the Vcm is omitted).

The asynchronous design allows individual control of the timing of subsequent conversion steps, providing the ability to find the best trade-off between effective ADC resolution and speed. In the first phase of the ADC operation, the sampling of the analogue input signal is performed in the capacitive DAC. Then the main phase, the analogue-to-digital conversion, is started. The sequence, which is repeated for each bit conversion, consists of the following steps:
1. initialisation of the comparator operation and waiting until its decision is made,
2. memorising the decision of the processed bit,
3. toggling the switches setting the capacitive DAC voltage,
4. waiting until the DAC voltage is settled precisely enough before the next bit conversion can start.
During the last bit conversion, after step 2, the DAC is reset to the initial configuration and ADC is ready for the next conversion. Although steps 1 to 3 should be as fast as possible, the duration of step 4 can be optimised to find a compromise between a long enough settling time sufficient for the required ADC precision and the shortest possible time for the fastest ADC conversion. Since the required precision (and thus the settling time) is the highest for the MSB bit and the lowest for the LSB bit, the duration of step 4 can be optimised separately for different bits.
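Abstracting away the analogue implementation, steps 1-3 above realise a binary search over the output code. A minimal idealised sketch (single-ended, ignoring settling, noise and the differential capacitive DAC; purely illustrative, not the authors' design):

```python
def sar_convert(vin, vref=1.0, nbits=10):
    """Idealised SAR bit-cycling: one comparator decision per bit."""
    code = 0
    for bit in reversed(range(nbits)):
        trial = code | (1 << bit)          # tentatively set the current bit
        vdac = trial * vref / 2 ** nbits   # ideal DAC voltage for the trial
        if vin >= vdac:                    # comparator decision (step 1)
            code = trial                   # memorise the bit (step 2)
        # steps 3-4: toggle DAC switches and wait for settling (not modelled)
    return code
```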
To achieve the best compromise between the highest effective resolution and the highest conversion rate, a variable delay has been introduced that adjusts the settling time of DAC. It may be optimised separately for different groups of bits, similar to what was done in [20]. The concept of this solution is shown in figure 5. There are four delays for four groups of bits: one for the most significant bit 9, denoted _Del9_, the second for bits 8-7, denoted _Del87_, the third for bits 6-5, denoted _Del65_, and the last for the remaining bits 4-0, denoted _Del40_. During conversion, the 2-bit _Sel_Group_ signal selects the appropriate delay for the subsequent bit. Each of these delays can be set to eight different values (typically longer for more significant bits) controlled by a 3-bit register. The delays should be set experimentally to achieve the best performance of the given ADC prototype, and can be reused for all ADCs from the same production batch.
The SAR logic of the MCS and HL ADCs is very similar but adapted to correctly control the switches to Vref+, Vref-, Vcm in the MCS version and the switches to Vref+, Vref- in the HL version. As the SAR logic is the largest contributor to the ADC power consumption, two versions of asynchronous control logic have been designed, focusing on the lowest power or the highest sampling rate. For the lowest power version, an additional condition was set to achieve a minimum sampling rate of 40 MSps. The logic implemented in both versions is functionally the same; the only difference being the transistor sizing.
### Design summary and layout
In total, eight different SAR ADC versions were designed that differed in: DAC switching scheme (MCS, HL), DAC capacitor type (MIM, MOM), and standard or low-power (lp) consumption.
Figure 5: Simplified block diagram of the variable delay block.
The prototype ADCs were fabricated in 65 nm CMOS technology, which has been proven to be a radiation-hard technology. As an additional basic precaution, transistors of minimum size were not used in the design, to improve ADC immunity to radiation damage.
One of the key design tasks was to draw a layout of capacitive DAC that would guarantee good ADC linearity. To achieve this, the guidelines listed below were followed.
1. First, it is crucial to minimise parasitic capacitance seen from the LSB subDAC top plate (see figure 1) to all constant potentials, as this parasitic directly degrades the DAC linearity.
2. Next, it must be ensured that the values of all parasitic capacitances, parallel to subsequent capacitors in the DAC, scale proportionally to them to maintain binary weighting.
3. Finally, the parasitic capacitance seen from the top plate of the MSB subDAC to any constant potential should be reduced, but only in a way that does not affect the previous optimisations, since this parasitic only reduces the ADC input range, but has no effect on the DAC linearity.
The ADC layout was drawn in 60 \\(\\upmu\\)m pitch to facilitate the implementation of multi-channel readout ASICs. The size of the ADCs with MIM DAC is 330 \\(\\upmu\\)m \\(\\times\\) 60 \\(\\upmu\\)m and with MOM DAC 235 \\(\\upmu\\)m \\(\\times\\) 60 \\(\\upmu\\)m, as shown in figure 6. The blocks from left to right are: bootstrap switches, capacitive DACs, comparator, switches to reference voltages for subsequent DAC bits, and SAR control logic.
## 3 ADC measurements
A dedicated FPGA-based setup was built as shown in figure 7 to characterise the performance of ADC prototypes. The Agilent B1500 semiconductor parameter analyser delivers the power supply, reference voltages, and measures the corresponding currents. It also generates input signals for static measurements. The Agilent 81160A generates the sampling clock and input signals for dynamic measurements; the ADC output data are acquired in the Xilinx Virtex-5 FPGA incorporated into the Genesys evaluation board.

Figure 6: Layout of the ADC with MIM (top) and MOM (bottom) DAC.
Using this setup, standard static measurements, that is, Integral Non-Linearity (INL) and Differential Non-Linearity (DNL), as well as basic dynamic metrics, that is, Signal-to-Non-Harmonic Ratio (SNHR), Total Harmonic Distortion (THD), Spurious-Free Dynamic Range (SFDR), Signal-to-Noise-and-Distortion ratio (SINAD), and Effective Number of Bits (ENOB), were obtained. Due to the limitations of the setup, the dynamic metrics were measured at the 0.1 Nyquist input frequency, so the effective number of bits at lower input frequencies, called ENOB\\({}_{\\mathrm{LF}}\\), was calculated.
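For reference, metrics of this kind are commonly extracted from a captured sine-wave record; the following is a minimal sketch using a least-squares sine fit at the known input frequency (our own illustration, not the authors' exact tooling):

```python
import numpy as np

def sinad_and_enob(codes, f_in, f_s):
    """Estimate SINAD [dB] and ENOB [bits] from a sine-wave capture.

    codes: recorded ADC output codes; f_in, f_s: input and sampling
    frequency [Hz]. The fit residual is treated as noise plus distortion.
    """
    codes = np.asarray(codes, dtype=float)
    t = np.arange(len(codes)) / f_s
    # 3-parameter least-squares sine fit (amplitude, phase, offset).
    basis = np.stack([np.sin(2 * np.pi * f_in * t),
                      np.cos(2 * np.pi * f_in * t),
                      np.ones_like(t)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, codes, rcond=None)
    residual = codes - basis @ coeffs
    signal_power = 0.5 * (coeffs[0] ** 2 + coeffs[1] ** 2)
    sinad_db = 10.0 * np.log10(signal_power / np.mean(residual ** 2))
    enob = (sinad_db - 1.76) / 6.02   # standard SINAD-to-ENOB relation
    return sinad_db, enob
```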
### Internal delay optimisation and static measurements
In the first series of measurements, the static DNL and INL errors were measured at a sampling frequency of 10 MHz for different settings of ADC internal delays. The delays for each group of bits were tuned to achieve the best ADC performance, with the goal of eliminating missing codes (not always possible) and obtaining the best compromise between acceptable DNL, INL errors (it was not always possible to get it below 1 LSB) and the shortest possible delays.
After optimisation, performed separately for each ADC version, the internal delays were set as shown in table 1. These settings were used for all the following measurements. It should be noted that the unit delays of the standard and low-power versions are different, so the delay settings cannot be compared directly.
The results of DNL and INL errors obtained for all ADC versions are shown in figures 8 and 9 respectively.
Figure 7: Setup for static and dynamic ADC measurements.

Analysing the results, one can observe several features.

* Analysis of table 1 allows some tentative conclusions to be formulated. The MCS-MIM configuration is expected to be the fastest as its total delay is the smallest. Furthermore, there is potential room for future improvements in ADC performance by shortening delays, because some less significant bits (mainly _Del65_ and _Del40_) have delays set to zero. On the other hand, _Del9_ for the HL-MIM-lp version is 7, suggesting that the maximum delay is too small and results in many missing codes (DNL\\(<-0.9\\)) as seen in figure 8.
* ADCs with the HL switching scheme show worse linearity (spikes in INL and DNL) than the MCS versions, for which both INL and DNL errors always remain below 1 LSB. For ADCs with the HL switching scheme, except for HL-MOM, there are always one or two missing codes and one or two codes with absolute INL or DNL errors greater than one. The HL-MIM-lp version typically shows six missing codes, and for this reason this version was not used in further studies. Since the same DACs were used in both the MCS and HL versions, the worst non-linearity errors are not caused by DAC layout imperfections or parasitics. A possible reason may be the poorer performance of the comparator, which in the HL switching scheme does not operate at a constant common-mode voltage, particularly during the MSB conversion.
* Comparing the INL results, it is seen that both the MIM and MOM DAC versions show similar behaviour (except HL-MOM-lp) and have a large change of INL at the centre code (512). This is probably due to layout imperfections of the differential DAC.
* The standard and low-power ADC versions have similar INL and DNL behaviour (except for the HL-MOM and HL-MOM-lp versions) and also do not differ much quantitatively. This could be expected since they differ only in the dimensioning of the transistors in the SAR logic.

\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline & Del9 & Del87 & Del65 & Del40 \\\\ \\hline \\hline MCS-MIM & 2 & 0 & 0 & 0 \\\\ \\hline MCS-MOM & 3 & 2 & 1 & 1 \\\\ \\hline HL-MIM & 2 & 1 & 1 & 0 \\\\ \\hline HL-MOM & 1 & 1 & 1 & 1 \\\\ \\hline \\hline MCS-MIM-lp & 3 & 2 & 1 & 1 \\\\ \\hline MCS-MOM-lp & 3 & 2 & 1 & 1 \\\\ \\hline HL-MIM-lp & 7 & 1 & 1 & 0 \\\\ \\hline HL-MOM-lp & 4 & 2 & 0 & 0 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Optimised internal delay settings for all ADC versions.

Figure 8: DNL error for all ADC versions measured at 10 MHz sampling frequency.
### Dynamic parameter measurements
Dynamic ADC metrics were measured for all ADC versions as a function of the sampling frequency at 0.1 Nyquist input signal frequency. The main dynamic parameters SINAD, THD, SNHR, SFDR, and ENOB\\({}_{\\rm LF}\\) are presented in figure 10 for the MCS-MIM ADC version. It is seen that ENOB\\({}_{\\rm LF}\\) is saturated at about 9.3 for lower sampling frequencies, starts to decrease above 60 MHz reaching \\(\\sim\\)8.9 at 90 MHz, and decreases sharply for higher sampling frequencies.
Figure 9: INL error for all ADC versions measured at 10 MHz sampling frequency.
The comparison of SINAD and ENOB for all ADC prototypes (except HL-MIM-lp and HL-MOM-lp) is shown in figure 11. The HL-MOM-lp ADC version is not shown and will not be used in further studies because its effective resolution is 1-2 bits worse than for the other versions shown. The HL-MIM version of the ADC, although it has worse non-linearity errors, shows ENOB\\({}_{\\text{LF}}\\) like the other versions. The two codes with worse INL and DNL errors do not significantly affect ENOB\\({}_{\\text{LF}}\\) for this version.
Figure 10: Measurement of dynamic ADC metrics as a function of sampling frequency at 0.1 Nyquist input signal frequency, for the MCS-MIM ADC version.

Figure 11: Comparison of SINAD and ENOB as a function of sampling frequency for different ADC versions.

As expected, the standard versions of ADC are much faster than the low-power versions. In fact, MCS-MIM has an ENOB\\({}_{\\rm LF}\\) of 8.9 bits up to 90 MHz, while the HL-MIM and HL-MOM versions have an ENOB\\({}_{\\rm LF}\\) of more than 9 bits up to 80 MHz. The best versions of the low-power ADCs, MCS-MOM-lp and MCS-MIM-lp, keep ENOB\\({}_{\\rm LF}\\) above 9 bits up to 50 MHz.
### Power consumption
Power consumption was measured for different ADC versions as a function of the sampling frequency, up to the frequencies at which the ADC performed well (lower for low-power versions of the ADC). Contributions to the total power of key ADC blocks are presented in figure 12 for the MCS-MIM and MCS-MIM-lp ADC versions.
As expected, the power consumption is proportional to the sampling rate, and the different power contributions of the MCS-MIM and MCS-MIM-lp versions overlap, except for the digital part (and so the total power). Total power is extremely low, reaching 1 mW at about 80 MHz sampling frequency for MCS-MIM ADC. About two-thirds of the total power comes from the digital ADC part, while less than a third comes from the analogue part (comparator and bootstrapped switches), and the smallest contribution comes from the reference voltages.
The comparison of the total power for different ADCs is shown in figure 13. Two groups of straight curves are clearly visible: the curves for the standard and for the low-power ADCs. The power consumption of the standard ADC versions is slightly above 500 \\(\\upmu\\)W at 40 MHz sampling frequency and slightly above 1 mW at 80 MHz. The low-power ADCs are slower and work well up to about 55 MHz sampling frequency, but their power consumption at 40 MHz is only about 400 \\(\\upmu\\)W, more than 20% less than the standard ADCs.
### The ADC figure of merit
Using the effective ADC resolution and the power consumption, the well-known Walden ADC Figure of Merit (FOM) [22] can be calculated:
\\[\\text{FOM}=\\frac{\\text{Power}}{2^{\\text{ENOB}}\\cdot f_{\\text{sample}}}, \\tag{1}\\]
where ENOB is typically measured at the Nyquist frequency of the input signal. The low-frequency Figure of Merit (\\(\\rm{FOM}_{LF}\\)) is also often used, which is calculated using the effective resolution \\(\\rm{ENOB}_{LF}\\) obtained at a lower input signal frequency.

Figure 12: Contributions to power consumption as a function of sampling frequency for the standard MCS-MIM and the low-power MCS-MIM-lp ADC versions.
The comparison of \\(\\rm{FOM}_{LF}\\) for the six best ADC versions (except HL-MIM-lp and HL-MOM-lp) is presented in figure 14.
The standard ADCs are characterised by \\(\\rm{FOM}_{LF}\\) between 20-26 \\(\\rm{fJ}/conv.\\)_-step_ over most of the usable sampling frequency range, while the low power ADCs have, as expected, an even better \\(\\rm{FOM}_{LF}\\), in the range 15-20 \\(\\rm{fJ}/conv.\\)_-step_.
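As a quick cross-check, Eq. (1) can be evaluated directly on the reported numbers; the sketch below uses the Table 2 values for the MCS-MOM-lp version at 40 MSps (function name is ours):

```python
def walden_fom(power_w, enob, f_sample_hz):
    """Walden figure of merit of Eq. (1), in J per conversion-step."""
    return power_w / (2 ** enob * f_sample_hz)

# MCS-MOM-lp: 402 uW, ENOB_LF = 9.32 bits, 40 MSps (values from Table 2)
fom = walden_fom(402e-6, 9.32, 40e6)
print(f"{fom * 1e15:.1f} fJ/conv.-step")   # ~15.7, matching the reported 15.69
```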
## 4 Comparison to the state-of-the-art
In table 2 the key parameters of the standard (HL-MOM) and low-power (MCS-MOM-lp) ADC versions are compared with the state-of-the-art ADCs with the same resolution and similar sampling rates (but at least 40 MSps), designed in CMOS technologies of similar feature size.
Figure 13: Comparison of total power consumption as a function of sampling frequency for different ADC versions.

Figure 14: Comparison of \\(\\rm{FOM}_{LF}\\) as a function of sampling frequency for different ADC versions.
The FOM\\({}_{\\rm LF}\\) of the state-of-the-art ADCs is between 15-30 \\(\\rm{fJ}/conv.\\)-_step_, and the designs presented in this work fall well within this range. This is because the most important parameters, such as power per frequency or effective resolution, obtained in this work are similar to those of the state-of-the-art ADCs. Furthermore, the ADC size (and small pitch) of this work compares very well with other designs, which is particularly important considering applications in multi-channel ASICs. Since all the above designs strive to achieve maximum speed with minimum power and minimum area, this comes at the cost of the resulting ENOB, which is noticeably lower (by at least 0.7 LSB) than the nominal 10 bits.
## 5 Conclusion
The design and measurements of fast, ultra-low power 10-bit SAR ADCs in a 65 nm CMOS process have been presented. The measurements performed confirm very good ADC functionality, reflected in an ENOB\\({}_{\\rm LF}\\) of about 8.9-9.1 bits up to maximum sampling frequencies of 80-90 MHz, ultra-low power of about 1 mW at 80 MHz, and an excellent FOM\\({}_{\\rm LF}\\) of 24-26 \\(\\rm{fJ}/conv.\\)-_step_ at 80 MHz. The low-power ADC versions work well up to about 50 MHz sampling frequency, achieving an ENOB\\({}_{\\rm LF}\\) of about 9.3 bits at 40 MHz with a power consumption of about 400 \\(\\upmu\\)W, corresponding to a very low FOM\\({}_{\\rm LF}\\) below 16 \\(\\rm{fJ}/conv.\\)-_step_. Measurements have shown that the prototype ADCs are fully functional with both the MCS and HL switching schemes, although the MCS version is more robust to non-linearity errors. The designed ADCs are ready for implementation in multi-channel readout ASICs.
\\begin{table}
\\begin{tabular}{|l|c|c|c|c|c|c|c|} \\hline
 & [24] & [25] & [23] & [27] & [26] & MCS-MOM-lp & HL-MOM \\\\ \\hline \\hline
Architecture & SAR & SAR & SAR & SAR & SAR & SAR & SAR \\\\ \\hline
CMOS [nm] & 40 & 40 & 90 & 65 & 65 & 65 & 65 \\\\ \\hline
Resolution [bits] & 10 & 10 & 10 & 10 & 10 & 10 & 10 \\\\ \\hline
Supply [V] & 1.2 & 1.1/1.3 & 1.2 & 1 & 1 & 1.2 & 1.2 \\\\ \\hline
Area [mm\\({}^{2}\\)] & 0.023 & 0.027 & 0.024 & 0.039 & 0.135 & 0.014 & 0.014 \\\\ \\hline
\\(C_{in}\\) [pF] (each input) & 1 & 1 & 1.78 & 0.51 & 1.9 & 0.6 & 0.6 \\\\ \\hline
\\(f_{\\rm sample}\\) [MHz] & 120 & 100 & 50 & 50 & 90 & 40 & 80 \\\\ \\hline
Power [\\(\\upmu\\)W] & 1120 & 1090 & 664 & 820 & 1760 & 402 & 1040 \\\\ \\hline
Max INL [LSB] & 0.6 & 0.82 & 0.45 & \\textless{}0.82 & - & 0.89 & 0.97 \\\\ \\hline
Max DNL [LSB] & 0.73 & 0.73 & 0.36 & \\textless{}0.72 & - & 0.52 & 0.88 \\\\ \\hline
ENOB\\({}_{\\rm LF}\\) [bits] & 9.26 & 9.3 & 9.26 & 9.16 & 8.8 & 9.32 & 9.1 \\\\ \\hline
FOM\\({}_{\\rm LF}\\) [\\(\\rm{fJ}/conv.\\)-_step_] & 15.2 & 17.3 & 21.68 & 28.7 & 44 & 15.69 & 23.61 \\\\ \\hline
ENOB [bits] & 8.83 & 9.06 & - & 9.1 & 8.7 & - & - \\\\ \\hline
FOM [\\(\\rm{fJ}/conv.\\)-_step_] & 20.5 & 20.4 & - & 29.7 & 47 & - & - \\\\ \\hline
\\end{tabular}
\\end{table}
Table 2: Comparison with the state-of-the-art ADCs.
One of the ADCs (MCS-MIM version) has already been implemented in the monitoring subsystem of the Low Power Giga Bit Transceiver (lpGBT) ASIC [28], the common serialiser/deserialiser device for the Large Hadron Collider (LHC) detectors. Since this application does not require high sampling rates, and because the design was added directly to the production version of the lpGBT before the ADC prototype was available and verified, the asynchronous SAR logic was, for safety, replaced in the actual implementation by automatically synthesised synchronous control logic. The lpGBT tests confirmed the very good performance of the ADC and showed that it works correctly during irradiation up to a dose of at least 3.8 MGy [28].
This work has received funding from the Polish Ministry of Science and Higher Education under contract No 5179/H2020/2021/2, and from the European Union's Horizon 2020 Research and Innovation programme under grant agreement No 101004761 (AIDAinnova).
## References
* [1] _SALT — readout ASIC for silicon strip sensors of upstream tracker in the upgraded LHCb experiment_, _Sensors_, vol. 22, pp. 1-21, December 2022.
* [2] F. Bouyjou, G. Bombardi, F. Dulucq, A. El Berni, S. Extier, M. Firlej, T. Fiutowski, F. Guilloux, M. Idzik, C. De La Taille, A. Marchioro, A. Molenda, J. Moron, K. Swientek, D. Thienpont, T. Vergine, _The front-end readout ASIC for the CMS High Granularity Calorimeter_, _Journal of Instrumentation_, JINST C03015 2022.
* [3] H. Hernandez, B. Sanches, D. Carvalho, M. Bregant, A. Pabon, R. Wilton, R. Hernandez, T. Oliveira Weber, A. Couto, A. Campos, H. Alarcon, T. Martins, M. Munhoz, W. Noije, _A Monolithic 32-channel Front-End and DSP ASIC for Gaseous Detectors_, IEEE Transactions on Instrumentation and Measurement 2020, 69, 2686-2697. doi:10.1109/TIM.2019.2931016.
* [4] T. Niknejad, E. Albuquerque, A. Benaglia, A. Boletti, R. Bugalho, F. De Guio, M. Firlej, T. Fiutowski, R. Francisco, M. Gallinaro, A. Ghezzi, M. Idzik, M. Lucchini, M. Malberti, J. Moron, L. Oliveira, M. Pisano, N. Redaelli, J.C. Silva, R. Silva, M. Silveira, K. Swientek, T. Tabarelli de Fatis, J. Varela, _Results with the TOFHIR2X Revision of the Front-end ASIC of the CMS MTD Barrel Timing Layer_, 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC).
* [5] G. de Geronimo, G. Iakovidis, S. Martoiu, V. Polychronakos, _The VMM3a ASIC_, IEEE Transactions on Nuclear Science, vol. 69, no. 4, April 2022.
* [6] S. Conforti Di Lorenzo, A. Afiri, S. Bolognesi, G. Bombardi, F. Bouyjou, S. Callier, C. De La Taille, P. Dinaucourt, O. Drapier, F. Dulucq, A. El Berni, S. Extier, M. Firlej, T. Fiutowski, F. Gastaldi, F. Guilloux, M. Idzik, A. Marchioro, A. Mghazli, A. Molenda, J. Moron, J. Nanni, B. Quilain, L. Raux, K. Swientek, D. Thienpont and T. Vergine, _HKROC: an integrated front-end ASIC to readout photomultiplier tubes for the Hyper-Kamiokande experiment_, Journal of Instrumentation, Volume 18, January 2023.
* [7] L. Gonella, F. Faccio, M. Silvestri, S. Gerardin, D. Pantano, V. Re, M. Manghisoni, L. Ratti, A. Ranieri, _Total ionizing dose effects in 130-nm commercial CMOS technologies for HEP experiments_, Nucl. Instrum. Methods Phys. Res., A 582 (2007) 750-754.
* [8] G. Borghello, E. Lerario, F. Faccio, H. Koch, G. Termo, S. Michelis, F. Marquez, F. Palomo, F. Munoz, _Ionizing radiation damage in 65 nm CMOS technology: Influence of geometry, bias and temperature at ultra-high doses_, Microelectron. Reliab. 116 (2021) 114016
* [9] M. Standke et al, _RD53B Wafer Testing for the ATLAS ITk Pixel Detector_, _J. Phys.: Conf. Ser._, 2374 012087, 2022.
* [10] X. Llopart, J. Alozy, R. Ballabriga, M. Campbell, R. Casanova, V. Gromov et al., _Timepix4, a large area pixel detector readout chip which can be tiled on 4 sides providing sub-200 ps timestamp binning_, _J. Instrum._ 17 (2022) C01044.
* [11] S. Biereigel, S. Kulis, R. Francisco, P. Leitao, P. Leroux, P. Moreira, J. Prinzie, _The lpGBT PLL and CDR architecture, performance and SEE robustness_, PoS TWEPP2019 (2020) 034. DOI: [https://doi.org/10.22323/1.370.0034](https://doi.org/10.22323/1.370.0034)
* [12] S. W. M. Chen and R. W. Brodersen, _A 6-bit 600-MS/s 5.3-mW asynchronous ADC in 0.13-\\(\\upmu\\)m CMOS_, _IEEE J. Solid-State Circuits_, vol. 41, no. 12, pp. 2669-2680, Dec. 2006.
* [13] L. Sumanen, M. Waltari, K.A.I. Halonen, _A 10-bit 200-MS/s CMOS Parallel Pipeline A/D Converter_, _IEEE J. Solid-State Circuits_, vol. 36, pp. 1048-1055, July 2001.
* [14] M. Dessouky, A. Kaiser, _Input switch configuration for rail-to-rail operation of switched opamp circuits_, _Electronics Letters_, vol. 35, no. 1 pp. 8-10, January 1999.
* [15] H.J. Jeon, Y-B. Kim, _Offset voltage analysis of dynamic latched comparator_, _IEEE International SOC Conference_, 2010.
* [16] X. Tang, J. Liu, Y. Shen, S. Li, L. Shen, A. Sanyal, K. Ragab, and N. Sun, _Low-Power SAR ADC Design: Overview and Survey of State-of-the-Art Techniques_, IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 69, no. 6, pp. 2249-2262, June 2022.
* [17] Y. Zhu, Ch.-H. Chan, U.-F. Chio, S.-W. Sin, S.-P. U, R.-P. Martins, F. Maloberti, _A 10-bit 100-MS/s Reference-Free SAR ADC in 90 nm CMOS_, _IEEE Journal of Solid State Circuits_, **vol. 45**, no. 6, June 2010.
* [18] V. Hariprasath, J. Guerber, S-H. Lee, U-K. Moon, _Merged capacitor switching based SAR ADC with highest switching energy-efficiency_, _Electronics Letters_, vol. 46, no. 9, April 2010.
* [19] A. Sanyal and N. Sun, _SAR ADC architecture with 98% reduction in switching energy over conventional scheme_, _Electronics Letters_, vol. 49, no. 4, February 2013.
* [20] M. Firlej, T. Fiutowski, M. Idzik, S. Kulis, J. Moron, K. Swientek, _An Ultra-Low Power 10-bit, 50 MSps SAR ADC for Multi-Channel Readout ASICs_, _Journal of Instrumentation_, JINST 18 (2023) P11013.
* [21] J. Holub and J. Vedral, _Stochastic testing of ADC-Step-Gauss method_, _Comput. Stand. Interfaces_, vol. 26, pp.251-257, 2004.
* [22] R.H. Walden, _Analog-to-digital converter survey and analysis_, _IEEE J. Selected Areas in Communications_, vol. 17, no. 4, pp. 539-550, April 1999.
* [23] Chi-Chang Lu, Ding-ke Huang, _A 10-Bits 50-MS/s SAR ADC Based on Area-Efficient and Low-Energy Switching Scheme_, _IEEE Access_, vol. 8, February 2020.
* [24] Y. Shen et al., _A 10-bit 120-MS/s SAR ADC with reference ripple cancellation technique_, IEEE J. Solid-State Circuits, vol. 55, no. 3, pp. 680-692, Oct. 2019.
* [25] Y. Shen, X. Tang, X. Xin, S. Liu, Z. Zhu, and N. Sun, _A 10-bit 100-MS/s SAR ADC With Always-On Reference Ripple Cancellation_, IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 69, no. 10, pp. 3965-3975, October 2022.
* [26] J. Digel, M. Grozing, M. Berroth, _A 10 bit 90 MS/s SAR ADC in a 65 nm CMOS technology_, 2016 IEEE 16th Topical Meeting on Silicon Monolithic Integrated Circuits in RF Systems (SiRF).
* [27] M. Yoshioka, K. Ishikawa, T. Takayama, S. Tsukamoto, _A 10-b 50-MS/s 820-\\(\\upmu\\)W SAR ADC With On-Chip Digital Calibration_, IEEE Trans. on Biomedical Circuits and Systems, vol. 4, no. 6, 2010.
* [28] M. Firlej, T. Fiutowski, J. Fonseca, M. Idzik, S. Kulis, P. Moreira, J. Moron, K. Swientek, _An lpGBT Subsystem for Environmental Monitoring of Experiments_, _Journal of Instrumentation_, JINST 18 (2023) P06008.
# LISO: Lidar-only Self-Supervised 3D Object Detection
Stefan Andreas Baur\\({}^{1,2}\\), Frank Moosmann\\({}^{1}\\), Andreas Geiger\\({}^{2,3}\\)

\\({}^{1}\\)Mercedes-Benz, Germany \\({}^{2}\\)University of Tübingen \\({}^{3}\\)Tübingen AI Center
## 1 Introduction
Human developmental research reveals that infants less than one year of age are able to categorize animate and inanimate objects based on observed motion cues and can generalize this categorization to previously unseen objects [24]. Yet, lidar object detectors with SOTA performance are trained using manually selected and categorized annotations. These annotations are very expensive to obtain and become outdated quickly: new lidar sensors are coming to market regularly, trained SOTA object detectors are sensitive to sensor characteristics and change in mounting position, and existing annotations are difficult to transfer between sensors or different sensor mounting positions [16], resulting in high re-labeling efforts with each change.
In this paper, we aim at bridging this gap by distilling motion cues observed in self-supervised lidar scene flow into SOTA single-frame lidar object detectors. We introduce a novel self-supervised method which is charmingly simple and easy to use: the only input to our method is lidar point cloud sequences. No human-annotated bounding boxes, cameras, costly high-precision GPS, or tedious sensor rig calibration is required. Our method trains and runs a self-supervised lidar flow estimator [3] under the hood in order to create motion cues. We show in our experiments that these motion cues are a key factor in the success of our method. Based on the estimated lidar flow we bootstrap initial pseudo ground truth using simple clustering and track optimization. With these mined bounding boxes of _moving_ objects we initialize a self-supervised, trajectory-regularized variant of self-training [1] (which is semi-supervised in its original form): We train a first version of a SOTA object detector, then iteratively re-generate and trajectory-regularize the pseudo ground truth and re-train the detector. Since the single-frame object detector has no concept of motion, it generalizes to detect any _movable_ object along the way. Exemplary output of the trained detector, 3D boxes in single-frame point clouds, is depicted in Fig. 1.
Our contribution is a novel self-supervised trajectory-regularized self-training framework for single-frame 3D object detection with the following properties:
* It is based entirely on lidar, i.e. without the limitations of prior works: no cameras, no calibration, no high-precision GPS, no manual annotations.
* It is agnostic w.r.t. the sensor model, mounting pose, and detector architecture and works with the same set of hyperparameters: We demonstrate this across four different datasets and different SOTA detector networks.
* It is able to generalize from _moving_ objects (motion cues) to _movable_ objects (final detection results) and significantly outperforms SOTA methods while being simpler.
We show that using motion cues together with trajectory-regularized self-training is key to this success.
The code of this approach will be published for easy use and comparison by other researchers.
The paper is organized as follows: Sec. 2 discusses related work before the proposed method is described in detail in Sec. 3. Extensive experiments in Sec. 4 show the performance of our method before we conclude in Sec. 5.
Figure 1: Objects predicted by our method using no manual annotations. Red boxes are ground truth boxes, yellow boxes are predicted by our network.
## 2 Related Work
### Single Frame Lidar 3D Object Detection
Object detectors operating on 3D point clouds are an active research field. The currently best-performing ones are using deep neural networks trained via supervised learning and can be categorized by their internal representation: Some networks operate on points directly like PointRCNN [19], 3DSSD [29], and IA-SSD [33]. Others project the points either to a virtual range image [11, 12, 23] or into a voxel representation like VoxelNet [34], PointPillars, CenterPoint [30], and Transfusion-L [2]. However, all aforementioned methods require large human-annotated datasets in order to perform well and obtaining such annotations is very expensive. We address this problem in this paper, enabling training of SOTA object detectors using pseudo ground truth.
### Object Distilling from Motion Cues
Multiple approaches have been suggested which leverage motion cues from lidar frame pairs in order to detect _moving_ objects. Using the assumption of local geometric constancy, they decompose dynamic scenes into separate moving entities by applying as-rigid-as-possible optimization. Examples of these are the works by Dewan et al. [6], RSF [5] and OGC [21]. The two former methods are optimization-based whereas OGC uses these constraints as loss-function to self-supervise a segmentation network.
Although these methods avoid the need for expensive labels, in contrast to our method they tend not to work well in low-resolution areas, they are typically slow, and can only detect _moving_ but not _movable_ (_i.e_. static but potentially moving) objects.
### Pseudo Ground Truth for Object Detection
Different approaches have been proposed to mine pseudo ground truth for training object detectors: Najibi et al. [13] and very recently Seidenschwarz et al. [17] use motion cues similar to section 2.2 in order to distill _moving_ objects as pseudo ground truth and use it to train an object detector. [13] runs optimization for each frame pair to obtain lidar scene flow, clusters points, fits boxes, tracks them using a Kalman Filter and finally refines them on ICP-registered point clouds. [17] optimizes a clustering algorithm on motion cues through message passing which they then apply to segment point clouds into a set of instances and subsequently fit boxes. As shown in [17], and in contrast to our approach, both methods suffer from a large performance gap between _moving_ and _movable_ objects. We demonstrate that our method does not suffer from this gap.
[10], [20] were the first to _iteratively_ apply a detection, tracking, retraining paradigm to autonomous driving (AD) data. They build upon consistency constraints between object detectors which they train in the lidar and camera domain, making use of an optical flow network which is trained using supervision. The approach requires a calibrated sensor rig with possibly large camera coverage, IMU, and precision GPS. Similarly, [26] uses video sequences together with lidar scene flow to jointly train a camera and a lidar object detector. MODEST [28, 31] does not require coverage by calibrated cameras, but instead adds the additional requirement to have multiple lidar recordings of the same location in order to identify objects that vanished over time. With such demanding requirements, these approaches are not easy to use and are partially unsuited for popular AD datasets.
Oyster [32] is the approach most similar to ours. It uses DBSCAN [7] on lidar point clouds to initialize pseudo ground truth. Clusters are then tracked using forward-and-reverse tracking in sensor coordinates (i.e. without consideration for ego motion), using a complex policy for confidence-based track retention. After training an object detector on close range data, they employ zero-shot generalization to the far range data, track again and iteratively retrain the detector. In their experiments they use different hyperparameters for different datasets. Our method, in contrast, explicitly considers sensor motion, produces much cleaner initial proposals by leveraging a self-supervised lidar scene flow network, does not require zero-shot-generalization from near-to-far-field, and works with current SOTA object detectors. We show on multiple datasets that our method outperforms [32], that it works robustly with the same set of hyper-parameters and also generalizes well to detect _movable_ objects. Additionally, we will release our code to facilitate further research on this topic.
## 3 Method
A general overview of our method is sketched in Fig. 2 and some steps illustrated in Fig. 3. As input, we take raw (unlabeled) point cloud sequences and undergo three stages, all of which are detailed in the following: Preprocessing of point clouds and lidar scene flow computation (Sec. 3.1), initial pseudo ground truth generation (Sec. 3.2) and repeated training with pseudo ground truth refinement (Sec. 3.3). The final output of the method is a trained object detector which can detect _movable_ objects in raw single-frame point clouds.
Figure 2: **Overview of the proposed method.** Point cloud sequences are preprocessed (blue, Sec 3.1), initial pseudo ground truth is created (orange, Sec. 3.2) and the object detector is iteratively trained and pseudo ground truth regenerated (red, Sec. 3.2).
### Preprocessing
We preprocess the raw input point clouds as follows:
**Ground Removal:** First, we remove distracting ground points from each single point cloud using JCP [18], which is a simple, robust, yet effective algorithm to remove ground points using changes in observed height above ground.
**Ego Motion Estimation:** Second, we compute ego-motion between neighboring frame pairs using KISS-ICP [25], which is based on a robust version of ICP. The output is a cm-level accurate transformation \\(\\mathbf{T}_{\\text{ego}}^{t\\to t+1}\\in\\mathbb{R}^{4\\times 4}\\), describing the ego vehicle position at time \\(t+1\\) represented in the ego frame from time \\(t\\).
**Lidar Scene Flow Estimation:** Third, we compute lidar scene flow between neighboring frame pairs resulting in a flow vector \\(\\mathbf{f}_{i}=(dx,dy,dz)\\) for every point \\(i\\) in the first point cloud \\(\\mathcal{P}^{t}\\). We chose to use SLIM [3] as its code is readily available, it is easy to use, features fast inference, and produces SOTA results. The network is trained self-supervised on raw point cloud sequences, minimizing a k-nearest-neighbor loss between forward and time-reversed point clouds.
The components of our preprocessing steps (Ground Removal, Ego Motion Estimation, Lidar Scene Flow Estimation) have been selected for robustness and are all used with their default parameters from their respective publications.
Figure 3: **Overview over preprocessing, initial pseudo ground truth generation and training with examples.****Top left:** In the first step, self-supervised lidar scene flow is computed and corrected for vehicle ego-motion. Points are colored by flow direction and magnitude. **Top right:** In the second step, the scene flow is clustered and bounding boxes are fitted (to the moving objects). **Bottom left:** In the third step, the network is trained on the pseudo ground truth and is generalizing to static objects also, since it does not have the motion information as input signal. Points are thus colored by laser intensity. **Bottom right:** Ground truth, for reference.
### Initial Pseudo Ground Truth Generation
The aim of our method is to mine pseudo ground truth for training a 3D lidar object detector. For a single point cloud \\(\\mathcal{P}^{t}\\) (consisting of \\(n\\in\\mathbb{N}\\) points \\(\\mathbf{p}_{i}\\), \\(\\mathbf{p}_{i}\\in\\mathcal{P}^{t}\\)) this ground truth is a set of 3D bounding boxes \\(\\mathcal{B}\\) with confidences representing the objects at time \\(t\\):
\\[\\mathcal{B}^{t}=\\{\\mathrm{B}_{j}^{t},j\\in\\mathbb{N}\\}=\\{(x,y,z,l,w,h,\\theta,c)_{ j}\\} \\tag{1}\\]
Here, \\((x,y,z)=\\mathbf{x}\\) define the center position, \\((l,w,h)\\) the length, width and height, \\(\\theta\\) the heading (orientation around up axis), and \\(c\\) the confidence for a single box.
The key success factor of our method is to focus on a high precision (and potentially low recall) of the initial set of bounding boxes in order to avoid \"wrong\" objects to negatively influence the object detector. We achieve this by leveraging shape, density, and especially motion cues to robustly identify _moving_ objects solely (see also top-right of Fig. 3). Sec. 3.3 targets at generalizing these to _movable_ objects later on.
**Flow Clustering:** The points \\(\\mathbf{p}_{i}\\in\\mathcal{P}^{t}\\) in each preprocessed point cloud are clustered based on geometry and motion: All stationary points in a scene should have a flow \\(\\mathbf{f}_{i}\\) similar to the vehicle's ego-motion, i.e. \\(\\mathbf{f}_{i}\\approx\\mathbf{f}_{i,\\mathrm{sta}}=((\\mathbf{T}_{\\mathrm{ego}}^ {t\\to t+1})^{-1}-\\mathbf{I}_{4})\\cdot\\mathbf{p}_{i}\\). The residual flow must then be caused by motion of other actors: \\(\\mathbf{f}_{i,\\mathrm{dyn}}=\\mathbf{f}_{i}-\\mathbf{f}_{i,\\mathrm{sta}}\\). We filter all static points by applying a threshold of 1m/s to the residual flow and cluster the remaining points based on their point location and flow vector in 6D using DBSCAN [7](with parameters \\(\\varepsilon=1.0\\), \\(\\mathrm{minPts}=5\\)).
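For illustration, the static-flow subtraction and 6D clustering described above could be sketched as follows (numpy/scikit-learn; the function name, the frame spacing `dt`, and the array layout are our assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_moving_points(points, flow, T_ego, dt=0.1, speed_thresh=1.0):
    """Cluster moving points in 6D (position + residual flow).

    points: (N, 3) lidar points at time t; flow: (N, 3) scene flow per
    frame; T_ego: (4, 4) ego-motion from t to t+1; dt: frame spacing [s].
    """
    points_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Flow that a purely static point would exhibit due to ego-motion alone.
    f_static = ((np.linalg.inv(T_ego) - np.eye(4)) @ points_h.T).T[:, :3]
    f_dyn = flow - f_static                      # residual (dynamic) flow
    moving = np.linalg.norm(f_dyn, axis=1) / dt > speed_thresh
    feats = np.concatenate([points[moving], f_dyn[moving]], axis=1)
    labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(feats)
    return moving, labels                        # labels == -1 marks noise
```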
We fit a 3D bounding box \\(\\mathrm{B}_{j}^{t}\\) to each resulting cluster following [31] and discard boxes with \\(l/w>4.0\\), \\(lw<0.35\\mathrm{m}^{2}\\), or \\(lwh<0.5\\mathrm{m}^{3}\\). The heading \\(\\theta\\) is set to match the \"forward-axis\" of the motion, i.e. we orient the boxes along the direction of the residual flow \\(\\mathbf{f}_{i,\\mathrm{dyn}}\\). The confidence \\(c\\) is set to 1.
**Tracking:** We run a simple flow based tracker: Since we have accurate ego-motion available, we can track accurately in a fixed coordinate system w.r.t. the world. First, using the residual flow, we can compute for each box at \\(t\\) a rigid body transform that transforms the proposed box \\(\\mathrm{B}_{i}^{t}\\) forward in time towards its suspected location at \\(t+1\\), \\(\\mathrm{\\dot{B}_{i}^{t+1}}\\). The propagated boxes \\(\\dot{\\mathcal{B}}^{t+1}\\) are matched greedily against the actual boxes \\(\\mathcal{B}^{t+1}\\) based on box-center distance to the new detections. Unmatched boxes in \\(\\mathcal{B}^{t+1}\\) which are further away than 1.5m from propagated boxes spawn new tracklets. Unmatched tracklets from \\(\\mathcal{B}^{t}\\) are propagated according to the last observed box motion for up to one time step, after that unmatched tracklets are terminated. We run tracking forward and reverse in time, connecting tracklets from forward and reverse tracking to tracks.
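The greedy centre-distance association at the core of this tracker can be sketched as follows (a minimal illustration of the matching step only; box propagation and track bookkeeping are omitted, and all names are ours):

```python
import numpy as np

def greedy_match(propagated, detections, max_dist=1.5):
    """Greedily match propagated box centres to detected centres at t+1."""
    dists = np.linalg.norm(propagated[:, None] - detections[None, :], axis=-1)
    order = zip(*np.unravel_index(np.argsort(dists, axis=None), dists.shape))
    matches, used_i, used_j = [], set(), set()
    for i, j in order:                    # pairs in ascending distance
        if dists[i, j] > max_dist:
            break                         # all remaining pairs are farther
        if i in used_i or j in used_j:
            continue
        matches.append((i, j))
        used_i.add(i)
        used_j.add(j)
    new_tracklets = set(range(len(detections))) - used_j
    unmatched_tracks = set(range(len(propagated))) - used_i
    return matches, unmatched_tracks, new_tracklets
```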
The resulting set of tracks is post-filtered: We discard tracks that are shorter than 4 time steps or that have a median box confidence \\(c\\) lower than threshold 0.3 (note that the initial confidence \\(c=1\\) set during clustering is later replaced by detectors confidences, see Sec. 3.3). This retains only stable and high-confident tracks and avoids false positives to enter the pseudo ground truth.
**Track Optimization:** We reduce positional noise of the tracks by minimizing translational jerk on all tracks longer than 3m. Let \\(\\mathcal{X}_{\\text{obs}}\\) be the sequence of (noisy) observed box center positions \\(\\mathbf{x}_{i}\\) for consecutive time steps \\(i\\in\\{1, ,T\\}\\) of a track and their derivative \\(\\frac{d\\mathbf{x}_{i}}{dt}\\approx\\frac{\\mathbf{x}_{i+1}-\\mathbf{x}_{i}}{\\Delta t}\\). We compute smoothed track positions \\(\\mathcal{X}_{\\text{smooth}}\\) by initializing them to \\(\\mathcal{X}_{\\text{obs}}\\) and minimizing the following loss w.r.t. \\(\\mathbf{x}_{\\text{smooth}}\\):
\\[L=\\sum_{i=1}^{T}\\left\\|\\frac{d^{4}\\mathbf{x}_{i,\\text{smooth}}}{dt^{4}}\\right\\| _{2}^{2}+\\beta\\left\\|\\mathbf{x}_{i,\\text{smooth}}-\\mathbf{x}_{i,\\text{obs}} \\right\\|_{2}^{2} \\tag{2}\\]
I.e. we minimize the jerk \\(\\frac{d^{4}\\mathbf{x}}{dt^{4}}\\) and use a quadratic regularizer term (\\(\\beta=3\\)) on the positions. Experiments revealed that this simple optimization goal outperforms more sophisticated ones like fitting an unrolled bicycle model or adding terms for acceleration, which leads to tracks \"overshooting\" corners, especially when using aggressive optimization parameters required to run at reasonably fast optimization time.
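A compact way to express this optimisation with scipy is sketched below (our own illustration; unit time steps are assumed, and the fourth-order finite difference approximates the derivative appearing in Eq. (2)):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_track(x_obs, beta=3.0):
    """Smooth a track of box centres by minimising the loss of Eq. (2).

    x_obs: (T, 3) observed centres at consecutive time steps, T >= 5.
    """
    x_obs = np.asarray(x_obs, dtype=float)
    T = len(x_obs)

    def loss(flat):
        x = flat.reshape(T, 3)
        # Fourth-order finite difference ~ d^4 x / dt^4 (unit dt).
        d4 = x[4:] - 4 * x[3:-1] + 6 * x[2:-2] - 4 * x[1:-3] + x[:-4]
        return np.sum(d4 ** 2) + beta * np.sum((x - x_obs) ** 2)

    res = minimize(loss, x_obs.ravel(), method="L-BFGS-B")
    return res.x.reshape(T, 3)
```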
We subsequently align the orientation \\(\\theta\\) of each detection in a track to the direction of the smoothed track at its position. Box dimensions \\(\\{l,w,h\\}\\) of all boxes in a track are adapted to the 90th percentile of the observed box dimensions within the track. All hyperparameters related to clustering, tracking and track optimization have been tuned visually on two sample snippets from nuScenes.
### Trajectory-Regularized Self-Training
After having mined initial pseudo ground truth of _moving_ objects with a high precision and low recall, we now aim at iteratively improving our pseudo ground truth by training and using an object detector so that the pseudo ground truth generalizes to _movable_ objects. We achieve this by executing iterative trajectory-regularized self-training, which is composed of the following two steps:
**Training:** We train the target object detection network using the current pseudo ground truth in a supervised training setup. Any single-frame object detection network can be plugged into our pipeline. In our experiments we do not deviate from the basic training setup of our object detection networks. Like any SOTA object detection method, we apply standard augmentation techniques to a point cloud during network training: Random rotation, scaling (\\(\\pm 5\\%\\)), and random translation up to 5m around the origin. Furthermore, we randomly pick 1 to 15 objects from the pseudo ground truth database and insert a random subset of their points at random locations into the scene.
**Pseudo Ground Truth Regeneration:** After a certain amount of training steps (\\(s=30k\\) in the experiments) we stop the training and use the trained object detector to recreate new, improved pseudo ground truth: We run the trained detector in inference mode over all sequences in the training dataset to generate new box detections \\(\\mathcal{B}\\), thereby using the detector's confidence as box confidence \\(c\\). We regularize these detections by running the flow-based tracker and track smoothing exactly like we do for the initial pseudo ground truth generation. Every second iteration (i.e. every \\(60k\\) steps in the experiments), we discard the network weights after regenerating the pseudo ground truth.
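Putting the two steps together, the overall loop has roughly the following structure (a schematic sketch only; the callables stand in for the detector training step, inference, and the tracker/smoother of Sec. 3.2):

```python
def self_training_loop(detector, sequences, pseudo_gt,
                       train, detect, track_and_smooth, reset_weights,
                       steps=30_000, rounds=6):
    """Trajectory-regularized self-training (Sec. 3.3), schematic only."""
    for r in range(rounds):
        train(detector, sequences, pseudo_gt, num_steps=steps)
        # Regenerate pseudo ground truth from regularized detections.
        pseudo_gt = [track_and_smooth(detect(detector, seq))
                     for seq in sequences]
        if (r + 1) % 2 == 0:
            reset_weights(detector)   # re-focus on generalized pseudo GT
    return detector, pseudo_gt
```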
Fig. 4 illustrates the effect of our self-training: We see that the pseudo ground truth improves with each re-generation and that dropping network-weights allows the network to re-focus on the generalized pseudo ground truth. The pseudo ground truth has a consistently lower performance because we keep it conservative, keeping only highly certain boxes.
Two aspects are key to making our method perform well:
* Being restrictive when composing the pseudo ground truth (i.e. using plausible tracks of high-confidence network detections or clusters with significant flow only) avoids adding false positives into the pseudo ground truth and hence avoids that our network increasingly hallucinates with each iteration.
* Not using flow but only single-frame pointclouds as input for the detector allows the detector to focus on appearance of objects solely.
These allow our method to generalize from initially mined _moving_ objects to _movable_ objects in the scene.
Figure 4: **Left: Effect of self-supervised self-learning:** At the beginning of the training, only _moving_ objects are contained within the pseudo ground truth and the pseudo ground truth thus scores 0 on static objects. Thanks to the self-training, the network generalizes to _movable_ objects and the score of the pseudo ground truth on static objects starts to increase with every regeneration at each ”X”. The pseudo ground truth’s performance is measured on a subset of the train set of WOD dataset, while the network (here: Centerpoint) is evaluated on a small subset of the validation set. **Right: Evaluation of Self-Training Hyperparameters**: Network (Centerpoint) performance over the course of a training on WOD. \\(s\\) is the number of training steps between pseudo ground truth regenerations, \\(r\\) is the number of regenerations after which network weights are dropped and re-initialized.
## 4 Evaluation
We evaluate our method on multiple datasets across multiple SOTA networks. SLIM [3], the self-supervised flow network we use, is trained and inferred as in the published version, but Bird's-Eye-View (BEV) range is extended from \\(70\\times 70\\)m, \\(640\\times 640\\) pixels to \\(120\\times 120\\)m and \\(920\\times 920\\) pixels.
### Datasets and Metrics
We evaluate our method on four different AD datasets. For a fair comparison, we compute metrics for _movable_ objects by mapping all animate objects (Cars, Trucks, Trailers, Motorcycles, Cyclists, Pedestrians and other Vehicles) to a single category and discarding all inanimate objects (Barrier, Traffic Cone, ...) since our method does not predict any class attributes. In Table 2 and Table 3 we also give class-based results for completeness. Class labels for true positives are taken from ground truth. For false positives, the predicted class label is assigned randomly according to the label distribution in the dataset.
**Waymo Open Dataset (WOD):**[22] is a large, geographically diverse dataset recorded with a proprietary high-quality lidar. We evaluate using the protocol of [13, 17], i.e. on the whole 100 m \\(\\times\\) 40 m BEV grid around the ego vehicle, artificially cropping the predictions of our method down to this reduced area.
**KITTI:**[8, 9] is recorded using a Velodyne HDL64 lidar sensor, where some parts have been annotated with 3D object boxes and tracking information in the forward facing camera field of view. A large portion of the published data is only available in raw format without annotations, which our method is able to leverage, due to not requiring any ground truth annotations for training.
Figure 5: **Qualitative Results on WOD. Red boxes are ground truth boxes, yellow are predictions. Left: OYSTER-CP Right: LISO-CP**
**Argoverse 2:** AV2 [27] is recorded using two stacked Velodyne VLP32 lidar sensors and annotated with 3D object boxes. On AV2 and KITTI, we evaluate both 2D (in BEV space) and 3D box IoU at IoU thresholds of 0.3 and 0.5 in the area of \\(100\\times 100\\)m around the ego vehicle.
**nuScenes:**[4] is recorded using a Velodyne VLP32 lidar sensor. We evaluate using the nuScenes protocol. Please note that models trained with our method get a high penalty on the nuScenes Detection Score (NDS), because they cannot distinguish object classes and therefore score an Average Attribute Error of 1.0.
### Networks
We evaluate our proposed method with two SOTA lidar object detection networks of different architecture: Centerpoint [30] and Transfusion-L [2]. For both we do not make any modification to the published implementation or network architecture or losses. We train both networks with a batch size of four.
**CenterPoint:**[30] is based on 2D convolutions in BEV space. Object centers are represented as heatmap in the BEV, which are then reduced to a sensible amount of boxes using non-maximum-suppression.
**Transfusion-L:**[2] is a Transformer based architecture, which is applied after initial encoding of the LiDAR point cloud into a 2D BEV feature tensor.
\\begin{table}
\\begin{tabular}{l l|c c c c|c c} \\hline \\hline & & \\multicolumn{4}{c|}{**AV2**} & \\multicolumn{2}{c}{**nuScenes**} \\\\ & & \\multicolumn{2}{c}{BEV-IOU} & \\multicolumn{2}{c|}{3D-IOU} & \\multicolumn{2}{c}{} \\\\ & & [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) & AP\\(\\uparrow\\) & AOE\\(\\downarrow\\) \\\\ \\hline \\multirow{4}{*}{\\begin{tabular}{l} \\end{tabular} } & CP [30] & 0.778 & 0.778 & 0.736 & 0.517 & 0.484 & 0.560 \\\\ & TF [2] & 0.783 & 0.654 & 0.767 & 0.601 & 0.627 & 0.501 \\\\ \\hline \\multirow{4}{*}{
\\begin{tabular}{l} \\end{tabular} } & DBSCAN [7] & 0.054 & 0.026 & 0.020 & 0.003 & 0.008 & 3.120 \\\\ & DBSCAN(SF) & 0.070 & 0.026 & 0.065 & 0.015 & 0.003 & 2.623 \\\\ & DBSCAN(GF) & 0.105 & 0.041 & 0.088 & 0.024 & 0.000 & 1.000 \\\\ & RSF [5] & 0.074 & 0.026 & 0.055 & 0.014 & 0.019 & 1.003 \\\\ \\hline \\multicolumn{2}{l}{} & Oyster [32]\\(\\dagger\\) & 0.354 & 0.245 & - & - & - & - \\\\ & Oyster-CP [32] & 0.381 & 0.266 & 0.150 & 0.002 & 0.091 & 1.514 \\\\ & Oyster-TF [32] & 0.340 & 0.198 & 0.182 & 0.016 & 0.093 & 1.564 \\\\ & **LISO**-CP & **0.448** & **0.335** & **0.367** & **0.188** & 0.109 & 1.062 \\\\ & **LISO**-TF & 0.417 & 0.294 & 0.317 & 0.176 & **0.134** & **0.938** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: **Evaluation on AV2 [27] and nuScenes [4]**: CP, TF: network architecture, in the first two lines trained supervised for comparison. \\(\\dagger\\): Metrics as reported in [32]. SF: self-supervised lidar scene flow by SLIM, GF: ground truth lidar scene flow. Note that nuScenes uses a minimum precision and recall threshold of 0.1, and since the recall of GT flow clustering is lower than 0.1, all results are clipped away. For full nuScenes scores see supplementary material.
### Baselines
Besides the obvious baseline of training the object detection networks supervised from scratch on ground truth from the dataset, we also compare to multiple strong unsupervised baselines. For details, see Sec. 2.
**RSF:** We include RSF [5] in our evaluation as a strong representative of methods that distill objects from motion cues. We ran experiments using the authors' published code.
**DBSCAN:** This algorithm [7] clusters points in the point cloud with similar 3D locations into objects. We additionally evaluate extending the cluster space to 6D by either using SLIM scene flow (SF) or ground truth scene flow (GF). Thus, the "DBSCAN(SF)" baseline essentially corresponds to the raw detections used for our initial pseudo ground truth generation. We use the implementation from [15] with parameter values 1.0 for epsilon and 5 for the minimum number of points for a valid cluster.
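A minimal sketch of this baseline using the scikit-learn implementation [15] (assuming `points` is an (N, 3) array of xyz coordinates and `flow` an optional (N, 3) array of per-point scene flow):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_objects(points, flow=None, eps=1.0, min_pts=5):
    """Cluster points into object candidates; returns one cluster id per point,
    with -1 marking noise. With flow given, clustering runs in 6D position+flow."""
    feats = points if flow is None else np.concatenate([points, flow], axis=1)
    return DBSCAN(eps=eps, min_samples=min_pts).fit_predict(feats)
```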
**SeMoLi:**[17] is the most recent baseline. As it was shown to outperform [13] as a self-supervised object detection method, we include only the published results of [17] in our evaluation.
**Oyster:** We reimplemented [32] using PyTorch [14] and apply the proposed framework to the networks CenterPoint and Transfusion-L (abbreviated Oyster-CP and Oyster-TF) to be as close as possible to our method and evaluation. We verify our re-implementation by reproducing the metrics reported by [32] on AV2, see Table 1.
\\begin{table}
\\begin{tabular}{l l|c c|c c c|c c c} \\hline \\hline & & \\multicolumn{2}{c|}{Movable} & \\multicolumn{2}{c|}{Moving} & \\multicolumn{2}{c|}{Still} & \\multicolumn{2}{c}{Vehicle} & \\multicolumn{2}{c}{Pedestrian} & \\multicolumn{1}{c}{Cyclist} \\\\ \\cline{3-10} & & \\multicolumn{2}{c|}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c|}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c|}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c|}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c}{[email protected]\\(\\uparrow\\)} & \\multicolumn{2}{c}{[email protected]\\(\\uparrow\\)} \\\\ & & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c|}{3D} & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c|}{3D} & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c}{BEV} & \\multicolumn{2}{c}{BEV} \\\\ \\hline \\multirow{3}{*}{\\(\\Sigma\\)} & CP [30] & 0.765 & 0.684 & 0.721 & 0.624 & 0.735 & 0.657 & 0.912 & 0.513 & 0.134 \\\\ & TF [2] & 0.746 & 0.723 & 0.714 & 0.668 & 0.733 & 0.710 & 0.918 & 0.457 & 0.216 \\\\ \\hline \\multirow{3}{*}{\\(\\Sigma\\)} & DBSCAN [7] & 0.027 & 0.008 & 0.009 & 0.000 & 0.027 & 0.006 & 0.184 & 0.002 & 0.001 \\\\ & DBSCAN(SF) & 0.026 & 0.010 & 0.064 & 0.041 & 0.000 & 0.000 & 0.073 & 0.010 & 0.009 \\\\ & DBSCAN(GF) & 0.114 & 0.071 & 0.318 & 0.120 & 0.000 & 0.000 & 0.113 & 0.111 & 0.240 \\\\ & RSF [5] & 0.030 & 0.020 & 0.080 & 0.055 & 0.000 & 0.000 & 0.109 & 0.000 & 0.002 \\\\ & SeMoLi [17]\\(\\dagger\\) & - & 0.195 & - & **0.575** & - & - & - & - & - \\\\ & **LISO**-CP & 0.292 & 0.211 & 0.272 & 0.204 & 0.208 & 0.140 & 0.607 & 0.029 & 0.010 \\\\ \\hline \\multirow{3}{*}{\\(\\Sigma\\)} & Oyster-CP [32] & 0.217 & 0.084 & 0.151 & 0.062 & 0.176 & 0.056 & 0.562 & 0.000 & 0.000 \\\\ & Oyster-TF [32] & 0.121 & 0.015 & 0.051 & 0.007 & 0.098 & 0.010 & 0.475 & 0.000 & 0.000 \\\\ \\cline{1-1} & **LISO**-CP & **0.380** & **0.308** & **0.350** & 0.296 & **0.322** & **0.255** & **0.695** & **0.055** & **0.022** \\\\ \\cline{1-1} & **LISO**-TF & 0.327 & 0.208 & 0.349 & 0.245 & 0.233 & 0.126 & 0.669 & 0.024 & 0.012 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: **Evaluation on WOD dataset**: We evaluate using the protocol of [13, 17] on the whole 100 m\(\times\)40 m BEV grid around the ego vehicle, considering objects that move faster than 1 m/s to be _moving_ (difficulty level L2). CP, TF: network architecture, in the first two lines trained supervised for comparison. \(\dagger\): Results taken from [17]. SF: lidar scene flow by SLIM, GF: ground truth lidar scene flow. For class-specific 3D detection scores see supplementary material.
### Results
**Quantitative Results:** On all four datasets, our method consistently outperforms all self-supervised baselines in all metrics for _movable_ objects, see Table 1, Table 2, and Table 3. Only SeMoLi [17] beats LISO on _moving_ objects (see Table 2) but drops significantly below LISO's performance on _movable_ objects, hinting at difficulties with generalization. Our method does not suffer from this performance gap and performs nearly equally well on _movable_ and _moving_ objects. We also observe that, despite mostly better performance in the supervised case, Transfusion-L responds less favorably than Centerpoint to the self-supervised training approaches. We suspect that Centerpoint, due to its convolutional architecture and its lack of positional encoding compared to Transfusion-L, is less susceptible to overfitting to where _moving_ objects have been observed and where not to expect them.
We also observe that the nuScenes dataset seems to be the most challenging for the self-supervised methods, leading to the biggest gap between supervised and unsupervised object detection performance. We attribute this to the sparseness of the lidar sensor used in this dataset, which has only 32 layers. Common to all self-supervised methods is the difficulty in estimating the forward orientation of objects (AOE on nuScenes, \(2\pi\) periodic, see Table 1), which the purely IoU-based metrics cannot reveal.
**Qualitative Results:** In Fig. 5 we compare our method to the best-performing baseline, Oyster, using Centerpoint as network architecture. It can be seen that our method has higher accuracy and, in particular, predicts box orientations more accurately.
Fig. 6, in contrast, showcases an important failure case and a general problem of all unsupervised methods: ground truth, especially in the WOD dataset, is also annotated in very sparse and distant regions. It is very challenging to generate pseudo ground truth for such regions, whereas supervised training does not have to generalize from _moving_ to such _movable_ (but very sparse) objects. This may be the main reason for a still noticeable gap in evaluation results between supervised and unsupervised methods.
\\begin{table}
\\begin{tabular}{l|c c c c|c c c c c c} \\hline \\hline & \\multicolumn{3}{c|}{Movable (Moving \\& Still)} & \\multicolumn{2}{c}{Car} & \\multicolumn{2}{c}{Pedestrian} & \\multicolumn{2}{c}{Cyclist} \\\\ \\cline{2-11} & \\multicolumn{3}{c|}{BEV-IOU} & \\multicolumn{3}{c|}{3D-IOU} & \\multicolumn{3}{c}{BEV-IOU} & \\multicolumn{3}{c}{BEV-IOU} & \\multicolumn{3}{c}{BEV-IOU} \\\\ & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) & [email protected]\\(\\uparrow\\) [email protected]\\(\\uparrow\\) \\\\ \\hline CP [30] & 0.755 & 0.690 & 0.736 & 0.601 & 0.814 & 0.794 & 0.370 & 0.128 & 0.409 & 0.157 \\\\ TF [2] & 0.747 & 0.665 & 0.729 & 0.582 & 0.820 & 0.776 & 0.311 & 0.096 & 0.263 & 0.032 \\\\ \\hline DBSCAN [7] & 0.023 & 0.002 & 0.010 & 0.000 & 0.026 & 0.005 & 0.000 & 0.000 & 0.064 & 0.007 \\\\ RSF [5] & 0.029 & 0.019 & 0.029 & 0.011 & 0.066 & 0.049 & 0.000 & 0.000 & 0.192 & 0.043 \\\\ \\hline Oyster-CP [32] & 0.235 & 0.098 & 0.114 & 0.000 & 0.327 & 0.135 & 0.000 & 0.000 & 0.000 & 0.000 \\\\ Oyster-TF [32] & 0.273 & 0.088 & 0.128 & 0.000 & 0.364 & 0.121 & 0.000 & 0.000 & 0.019 & 0.000 \\\\
**LISO**-CP & **0.446** & **0.330** & **0.419** & **0.159** & **0.520** & **0.411** & **0.097** & **0.019** & **0.445** & **0.053** \\\\
**LISO**-TF & 0.361 & 0.207 & 0.294 & 0.036 & 0.425 & 0.297 & 0.084 & 0.014 & 0.348 & 0.003 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Evaluation on KITTI dataset: We evaluate on the forward facing field of view where GT annotations are available, but run inference on the whole \\(100\\times 100\\)m BEV grid. Also note that flow annotations are not available for KITTI Object. CP, TF: network architecture, in the first two lines trained supervised for comparison.
### Ablation Study
We investigated the influence of different choices for the components in our pipeline, see Table 4 and Fig. 2.
**Motion Cues as Clustering Input:** When adding motion cues to box clustering for Oyster we observe that detection performance already increases noticeably (comparing #3 to #4, Table 4). This demonstrates that self-supervised motion cues are a key ingredient to get high-quality initial pseudo ground truth as opposed to just using clustered 3D points.
**Motion Cues for Tracking:** Our lidar scene flow-based tracker is the biggest contributing factor to the success of our method: the ablation reveals that leveraging motion cues greatly improves tracking (comparing #4 to #6) and thus ultimately improves self-training, i.e., more accurate tracking allows for much stricter matching and filtering of tracklets (the Oyster tracker uses a matching threshold of 5 m; ours is only 1.5 m). This strictness leads to higher quality pseudo ground truth. A schematic version of this flow-based association is sketched below.
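The sketch (our own simplification with hypothetical array inputs, not the exact matching code) first motion-compensates each track center with its scene flow estimate and then greedily assigns the nearest detection within the 1.5 m threshold:

```python
import numpy as np

def match_detections(track_centers, track_flows, det_centers, max_dist=1.5):
    """track_centers/track_flows: (T, 2) arrays; det_centers: (D, 2) array."""
    predicted = track_centers + track_flows        # extrapolate one frame ahead
    matches, free = [], set(range(len(det_centers)))
    for t, p in enumerate(predicted):
        if not free:
            break
        j = min(free, key=lambda d: np.linalg.norm(det_centers[d] - p))
        if np.linalg.norm(det_centers[j] - p) <= max_dist:
            matches.append((t, j))                 # associate track t with detection j
            free.remove(j)
    return matches
```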
\\begin{table}
\begin{tabular}{l l l c c c c} \hline \hline Tag & Cluster Input & Tracker & Self Train & Track Optim. & [email protected](BEV) & [email protected](3D) \\ \hline \#1 & P, SF & no tracker & & & 0.177 & 0.086 \\ \#2 & P, SF & no tracker & ✓ & & 0.201 & 0.131 \\ \#3 & P & Oyster & ✓ & & 0.217 & 0.084 \\ \#4 & P, SF & Oyster & ✓ & & 0.255 & 0.104 \\ \#5 & P, SF & LISO(K, SF) & & & 0.279 & 0.209 \\ \#6 & P, SF & LISO(K, SF) & ✓ & & 0.360 & 0.290 \\ \#7 & P, SF & LISO(K, SF) & & ✓ & 0.292 & 0.211 \\ \#8 & P, SF & LISO(K, SF) & ✓ & ✓ & **0.380** & **0.308** \\ \hline \#9 & P, SF & LISO(G, SF) & ✓ & ✓ & 0.411 & 0.339 \\ \#10 & P, GF & LISO(G, GF) & ✓ & ✓ & 0.423 & 0.349 \\ \hline \hline \end{tabular}
\\end{table}
Table 4: Ablation study of different parts of our pipeline on WOD with Center-point [30]. **Abbreviations:** P: points, SF: self-supervised lidar scene flow (SLIM), GF: ground truth lidar scene flow. K: KISS-ICP ego-motion, G: ground truth ego-motion.
Figure 6: Missed ground truth boxes in the WOD dataset.
**Effect of Self-Training:** Even without using any tracker to filter tracklets, self-training has a benefit (comparing #1 to #2). This is due to the network generalizing to new instances and confidence thresholding of the detections. However, having an additional way to discard implausible detections (by tracking) amplifies the positive effect of self-training (comparing #5 to #6 and #7 to #8), as it can better prevent false positives from entering the pseudo ground truth.
**Track Optimization:** The added benefit of track optimization is largely independent of self-training (compare going from #5 to #7 with going from #6 to #8). Finally, we notice that with the combination of a strict tracker, track optimization, and self-training, the performance is very close to using ground truth flow and ego-motion (comparing #8 to #9 and #10).
**Hyperparameters for Iterative Training:** Fig. 4 demonstrates that the performance of our proposed method is relatively consistent across a range of hyperparameters. However, using too few steps between regenerations of the pseudo ground truth (\(s=20k\)) leads to degrading performance, because the network does not have enough time to generalize and stabilize using the pseudo ground truth, which then has negative effects during regeneration of the pseudo ground truth. The experiment reveals that doing multiple iterations of self-learning and dropping network weights to allow the network to adjust is beneficial for good performance. We selected \(r=2\) and \(s=30k\) as a good compromise between performance and speed for all other experiments.
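The iteration scheme can be summarized by the following schematic loop (the injected callables stand in for the clustering, tracking, filtering, and training stages described in this paper; they are placeholders, not an actual API):

```python
def iterative_self_training(network, scans, generate_pseudo_gt, train_one_step,
                            rounds=4, s=30_000, r=2):
    """generate_pseudo_gt/train_one_step: injected callables standing in for the
    clustering+tracking+filtering and training stages sketched in this paper."""
    pseudo_gt = generate_pseudo_gt(network, scans)   # first round: flow clustering only
    regenerations = 0
    for step in range(1, rounds * s + 1):
        train_one_step(network, scans, pseudo_gt)
        if step % s == 0:                            # the "X" marks in Fig. 4
            pseudo_gt = generate_pseudo_gt(network, scans)
            regenerations += 1
            if regenerations % r == 0:
                network.reinitialize_weights()       # drop weights, keep fresh labels
    return network
```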
## 5 Conclusion
We have proposed a novel framework for self-supervised 3D lidar object detection based on self-supervised lidar scene flow. Using simple clustering on lidar scene flow in combination with a novel flow-based tracker, we were able to generate pseudo ground truth with _moving_ objects at high precision. This enabled us to bootstrap SOTA 3D lidar object detectors, without using any human labels. The detector generalizes from detecting _moving_ to detecting _movable_ objects over multiple self-training iterations. Experiments revealed that our trajectory-regularized self-learning, based on our scene flow-based tracker, is key to the success of our method. We have demonstrated the effectiveness of our approach on two SOTA lidar object detectors as well as four AD datasets. All experiments were conducted with the same set of hyperparameters, demonstrating the robustness of our method. Our method achieves a significant improvement in the state-of-the-art of self-supervised lidar object detection on all four datasets. Code will be released.
The biggest limitation of our approach is that it cannot distinguish different object classes. Future work could focus on generating pseudo ground truth for class labels, e.g., based on motion or size characteristics.
## References
* [1] Amini, M.R., Feofanov, V., Pauletto, L., Hadjadj, L., Devijver, E., Maximov, Y.: Self-training: A survey (2023)
* [2] Bai, X., Hu, Z., Zhu, X., Huang, Q., Chen, Y., Fu, H., Tai, C.: Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. pp. 1080-1089. IEEE (2022). [https://doi.org/10.1109/CVPR52688.2022.00116](https://doi.org/10.1109/CVPR52688.2022.00116), [https://doi.org/10.1109/CVPR52688.2022.00116](https://doi.org/10.1109/CVPR52688.2022.00116)
* [3] Baur, S.A., Emmerichs, D.J., Moosmann, F., Pinggera, P., Ommer, B., Geiger, A.: SLIM: self-supervised lidar scene flow and motion segmentation. In: 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021. pp. 13106-13116. IEEE (2021). [https://doi.org/10.1109/ICCV48922.2021.01288](https://doi.org/10.1109/ICCV48922.2021.01288), [https://doi.org/10.1109/ICCV48922.2021.01288](https://doi.org/10.1109/ICCV48922.2021.01288)
* [4] Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O.: nuscenes: A multimodal dataset for autonomous driving. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. pp. 11618-11628. IEEE (2020). [https://doi.org/10.1109/CVPR42600.2020.01164](https://doi.org/10.1109/CVPR42600.2020.01164), [https://doi.org/10.1109/CVPR42600.2020.01164](https://doi.org/10.1109/CVPR42600.2020.01164)
* [5] Deng, D., Zakhor, A.: Rsf: Optimizing rigid scene flow from 3d point clouds without labels. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). pp. 1277-1286 (January 2023)
* [6] Dewan, A., Caselitz, T., Tipaldi, G.D., Burgard, W.: Motion-based detection and tracking in 3d lidar scans. In: Kragic, D., Bicchi, A., Luca, A.D. (eds.) 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, Stockholm, Sweden, May 16-21, 2016. pp. 4508-4513. IEEE (2016). [https://doi.org/10.1109/ICRA.2016.7487649](https://doi.org/10.1109/ICRA.2016.7487649), [https://doi.org/10.1109/ICRA.2016.7487649](https://doi.org/10.1109/ICRA.2016.7487649)
* [7] Ester, M., Kriegel, H., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Simoudis, E., Han, J., Fayyad, U.M. (eds.) Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, Oregon, USA. pp. 226-231. AAAI Press (1996), [http://www.aaai.org/Library/KDD/1996/kdd96-037.php](http://www.aaai.org/Library/KDD/1996/kdd96-037.php)
* [8] Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The KITTI dataset. Int. J. Robotics Res. **32**(11), 1231-1237 (2013). [https://doi.org/10.1177/0278364913491297](https://doi.org/10.1177/0278364913491297), [https://doi.org/10.1177/0278364913491297](https://doi.org/10.1177/0278364913491297)
* [9] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the KITTI vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012. pp. 3354-3361. IEEE Computer Society (2012). [https://doi.org/10.1109/CVPR.2012.6248074](https://doi.org/10.1109/CVPR.2012.6248074), [https://doi.org/10.1109/CVPR.2012.6248074](https://doi.org/10.1109/CVPR.2012.6248074)
* [10] Harley, A.W., Zuo, Y., Wen, J., Mangal, A., Potdar, S., Chaudhry, R., Fragkiadaki, K.: Track, check, repeat: An EM approach to unsupervised tracking. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021. pp. 16581-16591. Computer Vision Foundation / IEEE (2021). [https://doi.org/10.1109/CVPR46437.2021.01631](https://doi.org/10.1109/CVPR46437.2021.01631), [https://openaccess.thecvf.com/content/CVPR2021/html/Harley_Track_Check_Repeat_An_EM_Approach_to_Unsupervised_Tracking_CVPR_2021_paper.html](https://openaccess.thecvf.com/content/CVPR2021/html/Harley_Track_Check_Repeat_An_EM_Approach_to_Unsupervised_Tracking_CVPR_2021_paper.html)
* [11] Liang, Z., Zhang, Z., Zhang, M., Zhao, X., Pu, S.: Rangeioudet: Range image based real-time 3d object detector optimized by intersection over union. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021. pp. 7140-7149. Computer Vision Foundation / IEEE (2021). [https://doi.org/10.1109/CVPR46437.2021.00706](https://doi.org/10.1109/CVPR46437.2021.00706), [https://openaccess.thecvf.com/content/CVPR2021/html/Liang_RangeIoUDet_Range_Image_Based_Real-Time_3D_Object_Detector_Optimized_by_CVPR_2021_paper.html](https://openaccess.thecvf.com/content/CVPR2021/html/Liang_RangeIoUDet_Range_Image_Based_Real-Time_3D_Object_Detector_Optimized_by_CVPR_2021_paper.html)
* [12] Meyer, G.P., Laddha, A., Kee, E., Vallespi-Gonzalez, C., Wellington, C.K.: Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. pp. 12677-12686. Computer Vision Foundation / IEEE (2019). [https://doi.org/10.1109/CVPR.2019.01296](https://doi.org/10.1109/CVPR.2019.01296), [http://openaccess.thecvf.com/content_CVPR_2019/html/Meyer_LaserNet_An_Efficient_Probabilistic_3D_Object_Detector_for_Autonomous_Driving_CVPR_2019_paper.html](http://openaccess.thecvf.com/content_CVPR_2019/html/Meyer_LaserNet_An_Efficient_Probabilistic_3D_Object_Detector_for_Autonomous_Driving_CVPR_2019_paper.html)
* [13] Najibi, M., Ji, J., Zhou, Y., Qi, C.R., Yan, X., Ettinger, S., Anguelov, D.: Motion inspired unsupervised perception and prediction in autonomous driving. In: Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXVIII. Lecture Notes in Computer Science, vol. 13698, pp. 424-443. Springer (2022). [https://doi.org/10.1007/978-3-031-19839-7_25](https://doi.org/10.1007/978-3-031-19839-7_25)
* [14] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E.Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep learning library. In: Wallach, H.M., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E.B., Garnett, R. (eds.) Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. pp. 8024-8035 (2019), [https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html](https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html)
* [15] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research **12**, 2825-2830 (2011)
* [16] Rist, C.B., Enzweiler, M., Gavrila, D.M.: Cross-sensor deep domain adaptation for lidar detection and segmentation. In: 2019 IEEE Intelligent Vehicles Symposium, IV 2019, Paris, France, June 9-12, 2019. pp. 1535-1542. IEEE (2019). [https://doi.org/10.1109/IVS.2019.8814047](https://doi.org/10.1109/IVS.2019.8814047), [https://doi.org/10.1109/IVS.2019.8814047](https://doi.org/10.1109/IVS.2019.8814047)
* [17] Seidenschwarz, J., Osep, A., Ferroni, F., Lucey, S., Leal-Taixe, L.: Semoli: What moves together belongs together (2024), [https://arxiv.org/abs/2402.19463](https://arxiv.org/abs/2402.19463)
* [18] Shen, Z., Liang, H., Lin, L., Wang, Z., Huang, W., Yu, J.: Fast ground segmentation for 3d lidar point cloud based on jump-convolution-process. Remote. Sens. **13**(16), 3239 (2021). [https://doi.org/10.3390/rs13163239](https://doi.org/10.3390/rs13163239)
* [19] Shi, S., Wang, X., Li, H.: Pointrcnn: 3d object proposal generation and detection from point cloud. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. pp. 770-779. Computer Vision Foundation / IEEE (2019). [https://doi.org/10.1109/CVPR.2019.00086](https://doi.org/10.1109/CVPR.2019.00086), [http://openaccess.thecvf.com/content_CVPR_2019/html/Shi_PointRCNN_3D_Object_Proposal_Generation_and_Detection_From_Point_Cloud_CVPR_2019_paper.html](http://openaccess.thecvf.com/content_CVPR_2019/html/Shi_PointRCNN_3D_Object_Proposal_Generation_and_Detection_From_Point_Cloud_CVPR_2019_paper.html)
* [20] Shin, S., Golodetz, S., Vankadari, M., Zhou, K., Markham, A., Trigoni, N.: Sample, crop, track: Self-supervised mobile 3d object detection for urban driving lidar. CoRR **abs/2209.10471** (2022). [https://doi.org/10.48550/arXiv.2209.10471](https://doi.org/10.48550/arXiv.2209.10471), [https://doi.org/10.48550/arXiv.2209.10471](https://doi.org/10.48550/arXiv.2209.10471)
* [21] Song, Z., Yang, B.: OGC: Unsupervised 3d object segmentation from rigid dynamics of point clouds. In: Oh, A.H., Agarwal, A., Belgrave, D., Cho, K. (eds.) Advances in Neural Information Processing Systems (2022), [https://openreview.net/forum?id=ecNbE00tqBU](https://openreview.net/forum?id=ecNbE00tqBU)
* [22] Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., Vasudevan, V., Han, W., Ngiam, J., Zhao, H., Timofeev, A., Ettinger, S., Krivokon, M., Gao, A., Joshi, A., Zhang, Y., Shlens, J., Chen, Z., Anguelov, D.: Scalability in perception for autonomous driving: Waymo open dataset. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. pp. 2443-2451. IEEE (2020). [https://doi.org/10.1109/CVPR42600.2020.00252](https://doi.org/10.1109/CVPR42600.2020.00252), [https://doi.org/10.1109/CVPR42600.2020.00252](https://doi.org/10.1109/CVPR42600.2020.00252)
* [23] Sun, P., Wang, W., Chai, Y., Elsayed, G., Bewley, A., Zhang, X., Sminchisescu, C., Anguelov, D.: RSN: range sparse net for efficient, accurate lidar 3d object detection. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021. pp. 5725-5734. Computer Vision Foundation / IEEE (2021). [https://doi.org/10.1109/CVPR46437.2021.00567](https://doi.org/10.1109/CVPR46437.2021.00567), [https://openaccess.thecvf.com/content/CVPR2021/html/Sun_RSN_Range_Sparse_Net_for_Efficient_Accurate_LiDAR_3D_Object_CVPR_2021_paper.html](https://openaccess.thecvf.com/content/CVPR2021/html/Sun_RSN_Range_Sparse_Net_for_Efficient_Accurate_LiDAR_3D_Object_CVPR_2021_paper.html)
* [24] Trauble, B., Pauen, S., Poulin-Dubois, D.: Speed and direction changes induce the perception of animacy in 7-month-old infants. Frontiers in Psychology **5** (2014). [https://doi.org/10.3389/fpsyg.2014.01141](https://doi.org/10.3389/fpsyg.2014.01141), [https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01141](https://www.frontiersin.org/articles/10.3389/fpsyg.2014.01141)
* [25] Vizzo, I., Guadagnino, T., Mersch, B., Wiesmann, L., Behley, J., Stachniss, C.: KISS-ICP: In defense of point-to-point ICP - simple, accurate, and robust registration if done the right way. IEEE Robotics Autom. Lett. **8**(2), 1029-1036 (2023). [https://doi.org/10.1109/LRA.2023.3236571](https://doi.org/10.1109/LRA.2023.3236571)
* [26] In: Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022 (2022), [http://papers.nips.cc/paper_files/paper/2022/hash/e7407ab5e895c405d28ff6807ffec594a-Abstract-Conference.html](http://papers.nips.cc/paper_files/paper/2022/hash/e7407ab5e895c405d28ff6807ffec594a-Abstract-Conference.html)
* [27] Wilson, B., Qi, W., Agarwal, T., Lambert, J., Singh, J., Khandelwal, S., Pan, B., Kumar, R., Hartnett, A., Pontes, J.K., Ramanan, D., Carr, P., Hays, J.: Argoverse 2: Next generation datasets for self-driving perception and forecasting. In: Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021) (2021)
* [28] Xu, J., Waslander, S.L.: Hypermodest: Self-supervised 3d object detection with confidence score filtering. CoRR **abs/2304.14446** (2023). [https://doi.org/10.48550/arXiv.2304.14446](https://doi.org/10.48550/arXiv.2304.14446)
* [29] Yang, Z., Sun, Y., Liu, S., Jia, J.: 3dssd: Point-based 3d single stage object detector. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. pp. 11037-11045. Computer Vision Foundation / IEEE (2020). [https://doi.org/10.1109/CVPR42600.2020.01105](https://doi.org/10.1109/CVPR42600.2020.01105), [https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_3DSSD_Point-Based_3D_Single_Stage_Object_Detector_CVPR_2020_paper.html](https://openaccess.thecvf.com/content_CVPR_2020/html/Yang_3DSSD_Point-Based_3D_Single_Stage_Object_Detector_CVPR_2020_paper.html)
* [30] Yin, T., Zhou, X., Krahenbuhl, P.: Center-based 3d object detection and tracking. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021. pp. 11784-11793. Computer Vision Foundation / IEEE (2021). [https://doi.org/10.1109/CVPR46437.2021.01161](https://doi.org/10.1109/CVPR46437.2021.01161), [https://openaccess.thecvf.com/content/CVPR2021/html/Yin_Center-Based_3D_Object_Detection_and_Tracking_CVPR_2021_paper.html](https://openaccess.thecvf.com/content/CVPR2021/html/Yin_Center-Based_3D_Object_Detection_and_Tracking_CVPR_2021_paper.html)
* [31] You, Y., Luo, K., Phoo, C.P., Chao, W., Sun, W., Hariharan, B., Campbell, M.E., Weinberger, K.Q.: Learning to detect mobile objects from lidar scans without labels. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. pp. 1120-1130. IEEE (2022). [https://doi.org/10.1109/CVPR52688.2022.00120](https://doi.org/10.1109/CVPR52688.2022.00120), [https://doi.org/10.1109/CVPR52688.2022.00120](https://doi.org/10.1109/CVPR52688.2022.00120)
* [32] Zhang, L., Yang, A.J., Xiong, Y., Casas, S., Yang, B., Ren, M., Urtasun, R.: Towards unsupervised object detection from lidar point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 9317-9328 (June 2023)
* [33] Zhang, Y., Hu, Q., Xu, G., Ma, Y., Wan, J., Guo, Y.: Not all points are equal: Learning highly efficient point-based detectors for 3d lidar point clouds. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. pp. 18931-18940. IEEE (2022). [https://doi.org/10.1109/CVPR52688.2022.01838](https://doi.org/10.1109/CVPR52688.2022.01838), [https://doi.org/10.1109/CVPR52688.2022.01838](https://doi.org/10.1109/CVPR52688.2022.01838)
* [34] Zhou, Y., Tuzel, O.: Voxelnet: End-to-end learning for point cloud based 3d object detection. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. pp. 4490-4499. IEEE Computer Society (2018). [https://doi.org/10.1109/CVPR.2018.00472](https://doi.org/10.1109/CVPR.2018.00472), [http://openaccess.thecvf.com/content_cvpr_2018/html/Zhou_VoxelNet_End-to-End_Learning_CVPR_2018_paper.html](http://openaccess.thecvf.com/content_cvpr_2018/html/Zhou_VoxelNet_End-to-End_Learning_CVPR_2018_paper.html)
**Supplementary Material**
## Appendix 0.A Implementation Details
We run the networks Centerpoint [30] and Transfusion-L [2] on \(100\times 100\) m BEV grids around the ego vehicle. We use non-maximum suppression with a threshold of \(0.1\) (2D BEV IoU) for the detections. The optimizers, as well as their learning rate schedules, are kept from the respective original implementations, but the schedules are shortened to match the lifecycle of the network weights during the iterative rounds of self-training. For the zero-shot generalization required by Oyster [32] after the first round, we found that starting from an initial BEV range of \(50\times 50\) m and then extending to \(100\times 100\) m gave the best results. For DBSCAN we used \(\varepsilon=1.0\) and minPts = \(5\). We optimize all tracks using the Adam optimizer with learning rate \(0.1\) for 2000 steps, batching all tracks of a complete point cloud sequence at once, which takes less than \(2\) s per nuScenes session on an Nvidia V100 GPU.
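The track optimization step can be sketched as follows; the objective shown here is a simplified stand-in (a data term pulling the trajectory towards the per-frame detections plus a second-order smoothness term, which we assume for illustration), but the optimizer settings match the ones stated above.

```python
import torch

def optimize_track(centers, smooth_weight=1.0, steps=2000, lr=0.1):
    """centers: (T, 2) tensor of per-frame BEV box centers of one tracklet."""
    traj = centers.clone().requires_grad_(True)
    opt = torch.optim.Adam([traj], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data = ((traj - centers) ** 2).sum()                           # stay near detections
        smooth = ((traj[2:] - 2 * traj[1:-1] + traj[:-2]) ** 2).sum()  # smooth trajectory
        (data + smooth_weight * smooth).backward()
        opt.step()
    return traj.detach()
```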
## Appendix 0.B Self-supervised lidar scene flow
As mentioned in Sec. 4, we extend the BEV range of SLIM [3] from \\(70\\times 70\\)m, \\(640\\times 640\\) pixels to \\(120\\times 120\\)m and \\(920\\times 920\\) pixels, but make no further modifications to the network. This results in the SOTA scene flow quality described in Table 1.
The small performance gap of our method between using ground truth and SLIM lidar scene flow (comparing the last and the second-to-last row of Table 4) demonstrates that SLIM lidar scene flow has suitable quality for our method, and also that our method does not require absolutely perfect lidar scene flow estimates to work well. Ground truth lidar scene flow is generated using the recorded vehicle egomotion for static points and the tracking information (bounding boxes) of moving objects.
\\begin{table}
\begin{tabular}{l l c c} \hline \hline Train Data & Val Data & AEE(moving)[m]\(\downarrow\) & AEE(static)[m]\(\downarrow\) \\ \hline AV2 Train & AV2 Val & 0.075 & 0.079 \\ KITTI Raw & KITTI Tracking & 0.092 & 0.104 \\ nuScenes Train & nuScenes Val & 0.132 & 0.077 \\ WOD Train & WOD Val & 0.091 & 0.085 \\ \hline \hline \end{tabular}
\\end{table}
Table 1: **Lidar scene flow metrics of SLIM [3] on the datasets (evaluated on val split), for a BEV range of \\(120\\times 120\\)m.** Note that for KITTI, we only evaluate the forward-facing field of view (FoV) which has been annotated with tracked objects. Objects faster than \\(1\\)m/s are considered _moving_. AEE refers to the average endpoint error across either all moving or static points.
## Appendix 0.C Performance of lidar scene flow clustering on nuScenes
In the evaluation on nuScenes (see Table 2), the worse performance of using DBSCAN [7] clustering on ground truth lidar scene flow compared to using DBSCAN on SLIM lidar scene flow is surprising.
However, this peculiar effect is explained by Fig. 1, which shows the full precision-recall curves, generated using the official nuScenes protocol on the validation split [4]. The nuScenes protocol uses minimum precision and recall thresholds of 0.1, discarding all results below these thresholds. As mentioned in Section 3.2, we assign a confidence score of 1.0 to all clusters discovered by DBSCAN. This causes all detections generated using DBSCAN on ground truth lidar scene flow to be discarded.
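The clipping can be reproduced with a few lines mirroring the devkit logic (a simplified re-implementation for illustration, not the official code):

```python
import numpy as np

def nuscenes_style_ap(precision, recall, min_precision=0.1, min_recall=0.1):
    """precision/recall: raw curve sampled at increasing recall values in [0, 1]."""
    prec = np.interp(np.linspace(0, 1, 101), recall, precision, left=0, right=0)
    prec = prec[round(100 * min_recall) + 1:]       # discard recall <= 0.1
    prec = np.clip(prec - min_precision, 0, None)   # discard precision <= 0.1
    return float(np.mean(prec)) / (1.0 - min_precision)
```

Since all DBSCAN clusters share a confidence of 1.0, the precision-recall curve collapses to a single operating point; for clustering on ground truth flow that point has recall below 0.1, so the AP evaluates to 0.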
## Appendix 0.D Quality of pseudo ground truth during Iterative Self-Training
One critical aspect of iterative self-training is the effect of pseudo ground truth quality on the training dynamics, as depicted in Fig. 2. Finding the right balance between precision and recall in the pseudo ground truth is crucial for achieving optimal performance during self-training iterations: In our experiments, we find that starting with a small subset of high-precision training samples is superior to having a larger set with higher recall but worse precision, because it allows the model to learn from a smaller but more reliable set of labeled data. A larger set of pseudo ground truth that is collected with less rigorous clustering, tracking, and filtering includes more noisy and mislabeled data. As discussed in [31, 32], the limited model capacity prevents the model, to some extent, from overfitting to the inconsistent noise in the pseudo ground truth, and the model generalizes
Figure 1: **Performance comparison of clustering ground truth lidar scene flow(top) and SLIM [3] lidar scene flow (bottom) on the nuScenes dataset.** The methods are evaluated according to the official nuScenes protocol on the validation split. The dashed line represents the minimum threshold for precision and recall of 0.1, all results below these two thresholds are discarded. This leads to the surprising effect that the AP score is higher when using SLIM lidar scene flow, but this is only a result of the clipping dictated by the nuScenes evaluation protocol.
\\begin{table}
\\begin{tabular}{l l c c c c c} \\hline \\hline \\multicolumn{2}{c}{method} & \\multicolumn{2}{c}{AP\\(\\uparrow\\)} & \\multicolumn{2}{c}{NDS\\(\\uparrow\\)} & \\multicolumn{2}{c}{ATE\\(\\downarrow\\)} & \\multicolumn{2}{c}{AOE \\(\\downarrow\\)} & \\multicolumn{2}{c}{ASE\\(\\downarrow\\)} \\\\ \\hline \\multirow{4}{*}{CIF} & CP [30] & 0.484 & 0.524 & 0.357 & 0.560 & 0.263 \\\\ & TF [2] & 0.627 & 0.614 & 0.287 & 0.501 & 0.207 \\\\ \\hline \\multirow{4}{*}{CIF} & DBSCAN [7] & 0.008 & 0.109 & 0.987 & 3.120 & 0.962 \\\\ & DBSCAN(SF) & 0.003 & 0.106 & 1.186 & 2.623 & 0.952 \\\\ & DBSCAN(GF) & 0.000 & 0.000 & 1.000 & 1.0 & 1.0 \\\\ & RSF [5] & 0.019 & 0.183 & 0.774 & 1.003 & 0.507 \\\\ \\hline \\multirow{4}{*}{CIF} & Oyster-CP [32] & 0.091 & 0.215 & 0.784 & 1.514 & 0.521 \\\\ & Oyster-TF [32] & 0.093 & 0.233 & 0.708 & 1.564 & 0.448 \\\\ \\cline{1-1} & **LISO**-CP & 0.109 & 0.224 & 0.750 & 1.062 & 0.409 \\\\ \\cline{1-1} & **LISO**-TF & **0.134** & **0.270** & **0.628** & **0.938** & **0.408** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Full evaluation results on nuScenes dataset: We compare **LISO** with two different network architectures (TF [2], CP [30]) against different baselines and also give supervised training results as reference (two top lines). Along the AP score we report the nuScenes detection score NDS, which is a combination of the AP score, average translation, orientation, scale, attribute error/score (ATE, AOE, ASE, AEE respectively). Note that nuScenes uses a minimum precision and recall threshold of 0.1, and since the recall of GT flow clustering is lower than 0.1, all results are clipped away. SF: lidar scene flow by SLIM, GF: ground truth lidar scene flow.
Figure 3: Clustering results for the initial pseudo ground truth generation on WOD. Red boxes are ground truth boxes, yellow are predictions. **Left: Oyster** clustering result on points, with high recall but low precision. **Right: LISO** clustering result on points and SLIM lidar scene flow, resulting in high-precision pseudo ground truth (LISO). Points are colored according to flow direction and magnitude.
\\begin{table}
\\begin{tabular}{l l|c c c c|c c c c c c c} \\hline \\hline & \\multicolumn{2}{c|}{Movable} & \\multicolumn{2}{c|}{Moving} & \\multicolumn{2}{c|}{Still} & \\multicolumn{2}{c}{Vehicle} & \\multicolumn{2}{c}{Pedestrian} & \\multicolumn{2}{c}{Cyclist} \\\\ \\cline{3-14} & \\multicolumn{2}{c|}{[email protected]} & \\multicolumn{2}{c|}{[email protected]} & \\multicolumn{2}{c|}{[email protected]} & \\multicolumn{2}{c|}{[email protected]} & \\multicolumn{2}{c}{[email protected]} & \\multicolumn{2}{c}{[email protected]} & \\multicolumn{2}{c}{[email protected]} \\\\ & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c|}{3D} & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c|}{3D} & \\multicolumn{2}{c|}{BEV} & \\multicolumn{2}{c}{3D} & \\multicolumn{2}{c}{BEV} & \\multicolumn{2}{c}{3D} & \\multicolumn{2}{c}{BEV} & \\multicolumn{2}{c}{3D} \\\\ \\hline \\multirow{6}{*}{\\begin{tabular}{} \\end{tabular} } & CP [30] & 0.765 & 0.684 & 0.721 & 0.624 & 0.735 & 0.657 & 0.912 & 0.841 & 0.513 & 0.413 & 0.134 & 0.094 \\\\ & TF [2] & 0.746 & 0.723 & 0.714 & 0.668 & 0.733 & 0.710 & 0.918 & 0.899 & 0.457 & 0.429 & 0.216 & 0.187 \\\\ \\cline{2-14} & DBSCAN [7] & 0.027 & 0.008 & 0.009 & 0.000 & 0.027 & 0.006 & 0.184 & 0.048 & 0.002 & 0.000 & 0.001 & 0.000 \\\\ & DBSCAN(SF) & 0.026 & 0.010 & 0.064 & 0.041 & 0.000 & 0.000 & 0.073 & 0.046 & 0.010 & 0.006 & 0.009 & 0.006 \\\\ & DBSCAN(GF) & 0.114 & 0.071 & 0.318 & 0.120 & 0.000 & 0.000 & 0.113 & 0.075 & 0.111 & 0.063 & 0.240 & 0.151 \\\\ & RSF [5] & 0.030 & 0.020 & 0.080 & 0.055 & 0.000 & 0.000 & 0.109 & 0.074 & 0.000 & 0.000 & 0.002 & 0.000 \\\\ & SeMoLi [17] \\(\\dagger\\) & - & 0.195 & - & **0.575** & - & - & - & - & - & - & - \\\\ & **LISO**-CP & 0.292 & 0.211 & 0.272 & 0.204 & 0.208 & 0.140 & 0.607 & 0.440 & 0.029 & 0.009 & 0.010 & 0.004 \\\\ \\hline \\multirow{6}{*}{
\\begin{tabular}{} \\end{tabular} } & Oyster-CP [32] & 0.217 & 0.084 & 0.151 & 0.062 & 0.176 & 0.056 & 0.562 & 0.204 & 0.000 & 0.000 & 0.000 & 0.000 \\\\ & Oyster-TF [32] & 0.121 & 0.015 & 0.051 & 0.007 & 0.098 & 0.010 & 0.475 & 0.058 & 0.000 & 0.000 & 0.000 \\\\ & **LISO**-CP & **0.380** & **0.308** & **0.350** & 0.296 & **0.322** & **0.255** & **0.695** & **0.543** & **0.055** & **0.037** & **0.022** & **0.016** \\\\ & **LISO**-TF & 0.327 & 0.208 & 0.349 & 0.245 & 0.233 & 0.126 & 0.669 & 0.408 & 0.024 & 0.008 & 0.012 & 0.005 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Full evaluation results on WOD dataset: We evaluate using the protocol of [13, 17] on the whole 100 m\(\times\)40 m BEV grid around the ego vehicle, considering objects that move faster than 1 m/s to be _moving_ (difficulty level L2). CP, TF: network architecture, in the first two lines trained supervised for comparison. \(\dagger\): Results taken from [17]. SF: lidar scene flow by SLIM, GF: ground truth lidar scene flow.
# Investigation of Strategic Changes Using Patent Co-Inventor Network Analysis: The Case of Samsung Electronics
Sungchul Choi 1 and Hyunseok Park 2,*

1 Department of Industrial and Management Engineering, Gachon University, Seongnam-si, Gyeonggi-do 13120, Korea; [email protected]
2 Department of Information System, Hanyang University, Seoul 04763, Korea
* Correspondence: [email protected]; Tel.: +82-2-2220-2396
## 1 Introduction
Under intense international competition and rapid technological change, companies have to build dynamic capabilities and continually adapt their corporate strategy to achieve sustainable competitive advantages. Since competitive advantage is achieved based on relative competitiveness, the strategic changes of competitors are an essential input for a company to adjust its corporate strategy [1; 2; 3].
Technologies or technological capabilities are a critical resource for competitive advantage [4; 5]. The success of many companies is derived from their outstanding technological capability [6]. Even though there are many ways to secure technological capability from inside and outside the company, internal R&D is basically the most significant mechanism to gain technological capabilities. It is therefore logical that the specific technological capabilities on which a company focuses its R&D effort are aligned with the strategic direction of the company, so changes in technological capabilities or R&D efforts can explain the strategic changes of a company, particularly for technology-oriented companies [7; 8].
Most previous research has tried to assess technological capabilities to identify signals of strategic changes of competitors, and patents have mainly been used as a proxy data source [5; 9; 10; 11; 12; 13]. However, objectively measuring changes in a company's technological capability level is difficult, because quantitative approaches based on patent statistics can easily produce inappropriate results in capability evaluation. For example, even though Samsung Electronics has a large number of patents on user interface technology, its technological capabilities on user interface are not competitive (see our result in Section 4).
Therefore, this research tries to identify a firm's strategic changes from a different angle, i.e., changes in R&D effort. R&D effort, the strategic allocation of resources to R&D projects to improve technological capabilities in specific technological domains, is directly aligned with the strategic directions of a firm, because the goal of R&D is basically to obtain technological capabilities for sustaining competitive advantage, and companies never spend money on, or allocate their resources to, any business or technology that deviates from their strategic direction. From the resource-based view, intangible resources, mostly human resources, are a major determinant of a firm's decision for internal R&D, and R&D human resources generally occupy the most important portion of R&D expenditure [6; 9; 14; 15; 16; 17; 18; 19]. Therefore, R&D human resource allocation can be a proxy variable to represent a firm's R&D efforts on specific technological domains. In particular, key R&D personnel, usually the leaders of R&D projects, are directly aligned with strategic directions.
This paper proposes a method to investigate a firm's strategic changes using inventor information in patents. Specifically, the method analyzes the changes of a company's R&D effort by identifying the key R&D personnel in a patent co-inventor network using social network analysis (SNA) and tracking their changes. In a patent co-inventor network, degree and betweenness centrality can capture an inventor's research activeness and broadness, which are key factors for identifying key R&D personnel. We conducted an empirical analysis using the patents of Samsung Electronics. Samsung Electronics, one of the largest high-tech companies in the world, had a clear strategic change in its major business from semiconductor devices to smart phones in the early 2000s. Our method analyzed the current and future strategies of Samsung Electronics, and the result shows the clear strategic changes in their R&D and business.
This paper first describes a literature review on the related research. We then provide a detailed description of the method and an empirical analysis of the Samsung Electronics case. Lastly, a conclusion is drawn.
## 2 Theoretical Background
### Important R&D Projects from a Human Resource Perspective
The literature has studied the relationship between R&D human resources and R&D projects. The following are the underlying characteristics for the proposed method.
First, an R&D project conducted by relatively many researchers is usually an important project in the organization. Researchers are an important R&D human resource in organizations [8], and the allocation of researchers to R&D projects is directly influenced by the relative significance of the project. For this reason, an organization tends to allocate more resources to an important project than to other projects. In a similar vein, Acedo et al. [20] showed that there is a high possibility that the more authors are involved, the more important the generated research is.
Second, a researcher involved in various R&D projects is likely to be a core researcher with R&D leadership. Hamel and Prahalad [21] asserted that capability is a vitally important factor for the successful implementation of a strategy. From an R&D perspective, if a specific R&D project is strategically important, core researchers are allocated to the project first.
Finally, organizations usually provide rewards to researchers who have made a big contribution to important R&D projects; monetary and positional incentives are common rewards in human resource management [22; 23]. A successful researcher is more likely to get a higher position in the organization and may want to build a reputation in a research community. For example, Shuji Nakamura, the inventor of the blue light-emitting diode (LED), moved from Nichia Corporation to the University of California, Santa Barbara as a professor of engineering because of his major breakthrough in lighting technology.
### Patent Co-Inventor Network Analysis
SNA techniques have been widely adopted as one of the prevailing patent analysis tools [24]. In patent bibliometric analysis, citation and co-inventor information, which represent knowledge flows and inventive relationships among inventors, are suitable for formulating a network. There are three major categories of studies using patent-based network analysis: knowledge transfer, research collaboration, and research performance (Table 1).
The first category is research analyzing knowledge transfer. Research in this category describes how research groups or patent assignees refer to other patents based on patent citation information. Han and Park [25] suggested an exploratory method for measuring inter-industrial knowledge flows by analyzing a citation network using the United States Patent Classification (USPC). The research discovered how much traditional industry technology affects emerging industry technology by analyzing co-citation degrees between related patents. Chao-Chih and Chun-Chieh [26] investigated knowledge diffusion between institutions and countries by analyzing patents related to Liquid Crystal Displays (LCD). They identified key players, knowledge spillover patterns, and overall knowledge spillover efficiency by applying SNA techniques to a patent citation network using assignee information.
The second category is research analyzing how researchers (or inventors) collaborate, using co-inventor or co-assignee analysis. Cantner and Graf [27] analyzed a local inventor network of Jena, a German university town, by using SNA techniques. They tried to explain the job mobility of scientists and the technological overlap between assignees by means of network regression techniques. Lei, Zhao, Zhang, Chen, Huang, Zheng, Liu, Zhang, and Zhao [31] examined technological collaboration patterns in the solar cell industry by patent analysis using assignee and inventor information. This study categorized three different collaboration types, same city, same country, and international collaboration, and analyzed how technological collaboration occurs between assignees or inventors.
The third category is research analyzing research performance. Research in this category focuses on which patents are important in a patent network. Chang, Lai, and Chang [32] utilized patent citation network analysis for finding basic patents and the relationships between cited patents. This study tried to analyze groups of technology diffusion from basic patents by using hierarchical cluster analysis. Wang, Chiang, and Lin [33] suggested a method to predict patent quality by analyzing brokerage or closure patents. In this research, they applied SNA techniques to analyze the network structure of patent citations and analyzed the relationship between forward patent citations and the brokerage and closure measures. They asserted that the quality of a patent is determined by its position in a citation network.
Even though the aforementioned studies were mainly conducted using patent data and SNA techniques, the use of patent inventor information provides a new opportunity to analyze different technological perspectives. Because the inventor information of a patent document represents how much R&D human resource is allocated to a specific R&D project, one can infer the strategic directions of an R&D organization by analyzing inventor information. In addition, SNA techniques can derive information on corporate strategies by considering technological context information with
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
**Category** & **Description** & **Related Works** \\ \hline \multirow{3}{*}{Knowledge Transfer} & To analyze how technology knowledge of patents & Han and Park [25], \\ & transfers between R\&D groups, based on & Chao-Chih and Chun-Chieh [26] \\ & a citation network with inventor information & \\ \hline \multirow{3}{*}{Research Collaboration} & \multirow{3}{*}{To analyze how research groups are formulated} & Cantner and Graf [27], \\ & & Vidgen et al. [28], \\ \cline{1-1} & & Lobo and Strumsky [29], \\ \cline{1-1} & & Sternitzke et al. [30], \\ \cline{1-1} & & Lei et al. [31] \\ \hline \multirow{3}{*}{Research Performance} & To analyze how important a patent is in an area & Chang et al. [32], \\ \cline{1-1} & of technology by using citation-network analysis & Wang et al. [33], \\ \cline{1-1} & & Abbasi and Altmann [34] \\ \hline \hline \end{tabular}
\\end{table}
Table 1: Related research using SNA techniques.
the inventors, such as the technological fields from patent classification, the invention date, and the assignee's business domain. Moehrle, Walter, Geritz, and Muller (2016) suggested a method for supporting human resource decision making in R&D organizations from the perspective of technological strategy. The research extracted researchers' technological profiles, technological knowledge, and their clusters by analyzing the technological descriptions in patents, and then also analyzed technological context information, patent classification and the technological description of the classification, to complement the analysis results.
This paper proposes a method to investigate the R&D efforts of a firm based on patent co-inventor network analysis. To identify R&D effort and strategic direction, the proposed method utilizes the SNA indicators degree and betweenness centrality, together with technological context information such as IPC codes, application dates, and assignees' business environment information. As a result, the proposed method produces information on core inventors, the technology portfolio, and the technological strategic intent of an R&D organization.
## 3 Method
### Procedure
The procedure for the proposed method consists of three steps:
1. Collection of patent set;
2. Analyzing patent co-inventor network;
3. Analyzing key R&D personnel.
The first step is to collect the patent set for analysis. Since this research aims at analyzing specific organizations, a patent set can easily be identified by using the assignee name of the target organization and collected from the United States Patent and Trademark Office (USPTO) repository. To consider both current and future perspectives, patent applications and granted patents should be collected separately.
The second step is to analyze a patent co-inventor network. For this, a patent co-inventor network should be constructed. To extract the node-to-node data set, \(\binom{n}{2}\) combinations are generated, where \(n\) is the number of inventors in one patent document. For example, consider the inventor set Inventors \(=\{I_{1},I_{2},I_{3}\}\). The number of relationships among these inventors is three, i.e., the unordered pairs obtained from the Cartesian product Inventors \(\times\) Inventors \(=\{(a,b)\mid a,b\in\) Inventors, \(a\neq b\}\) when pair order is ignored. In this case, each node \(I_{i}\) has two relationships with the other inventors, and there is no direction between nodes. Using the extracted relationship data, a patent co-inventor network is generated using graph analysis tools; this research utilizes R, a computer language for statistics, and iGraph, a network analysis package for R (Randel, 2015). Key R&D personnel of the target organization are identified based on an inventor's research activeness and broadness, which are measured by degree and betweenness centrality analysis; the details on the metrics are described in Section 3.2.
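The pair extraction and network construction can be illustrated with the following sketch; Python and the networkx package are used here purely for illustration, whereas the study itself uses R and iGraph.

```python
from itertools import combinations
import networkx as nx

def build_coinventor_network(patents):
    """patents: iterable of inventor-name lists, one list per patent document."""
    g = nx.Graph()
    for inventors in patents:
        g.add_nodes_from(inventors)
        for a, b in combinations(sorted(set(inventors)), 2):  # the C(n, 2) pairs
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1   # repeated co-invention strengthens the tie
            else:
                g.add_edge(a, b, weight=1)
    return g

g = build_coinventor_network([["I1", "I2", "I3"]])
print(g.number_of_edges())  # 3 undirected relationships, as in the example above
```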
The last step is to analyze the identified key R&D personnel and relate them to corporate strategy. Basically, the technological fields to which the key R&D personnel belong, or on which they focus, can be identified by patent classification, e.g., the International Patent Classification (IPC). Analyzing the major IPCs of the key R&D personnel's patents can show their major technological fields, and these technological fields are aligned with corporate strategy. In addition, contextual information from social network services, news, blogs, or a company's annual reports is helpful to verify the identified strategy. For example, information on the position or promotion of key R&D personnel can be evidence supporting the result, because, in many cases, the leaders of major R&D projects are appointed or promoted to high-level executive positions.
The analysis procedure is not completed in a single pass but is iterative. After the first analysis, the result should be compared to relevant contextual information and analyzed again to complement the first result. The analysis process is continuously repeated and completed through interactive analysis.
### Metrics
The proposed method aims to analyze R&D effort and its changes by analyzing a co-inventor network. Based on the capability in focus, individual or organizational, and the targeted timeframe, current or future, this research can produce four types of information on key R&D personnel and technology portfolios (Figure 1).
For the analysis, two data sources and three methods are utilized. Patent data sources are divided into two types: patent applications and granted patents. A patent application is a submitted patent that is opened to the public after a certain period regardless of its legal rights, whereas a granted patent is a published patent whose claims carry legal rights. Generally, patents are opened to the public as applications 18 months after the filing date, and they are usually granted about one to two years later if they qualify [37]. Therefore, granted patents can serve as a source on the R&D projects behind the currently important business domains of an R&D organization, and patent applications are a source on the preparation of future important technology.
To identify key R&D personnel, we utilize the degree and betweenness centrality analysis. In network theory, degree centrality indicates the number of links incident to a node [38]. The measure for degree centrality is defined as:
\\[DEGREE=d(n_{i}), \\tag{1}\\]
where function \\(d\\) is to calculate the number of other inventor-nodes that are connected to the focal inventor-node \\(n_{i}\\)[39]. In SNA, degree can be used to represent the activity or influence of a specific node or actor in a network. An actor having a high degree can secure considerable reliability of information and is a highly influential actor in a network. Based on this concept, an inventor having a high degree centrality in co-inventor network can be interpreted as an active and core researcher in the organization.
The next indicator is betweenness centrality. Betweenness measures the extent to which a node is located between other nodes [40], and the normalized betweenness centrality can be calculated as:
\\[b_{i}=\\sum_{j,k^{*}:i\
eq j\
eq k}^{n}\\frac{g_{jk}}{g_{jk}}/\\frac{(n-1)(n-2)}{2} \\tag{2}\\]
where function \\(n\\) is the number nodes, \\(b_{i}\\) is betweenness centrality of node \\(i\\), \\(g_{jk}\\) is the number of shortest paths from node \\(j\\) to node \\(k\\), and \\(g_{jik}\\) is the number of shortest paths from node \\(j\\) to node \\(k\\) that pass
Figure 1: Framework for analysis.
throughout node \\(i\\). A node having high betweenness centrality is likely to be a broker or gate keeper of information in the network. The node can control the flow of information as an intermediate channel of information distribution. Generally, an actor which has high betweenness centrality in a network appears as a team leader of the network [41]. Based on this concept, betweenness centrality represents how various inventors are co-worked with a specific inventor in co-inventor network. Generally, high degree nodes have high betweenness centrality, however they are different. For example, if one node has 10 degrees with only one connecting node, another node has 10 degrees with nine connecting nodes, they have same degree value but the latter node has higher betweenness centrality value than the first one. That is to say, in a co-inventor network, an inventor who has high betweenness is likely to co-work with various inventors. The meanings of the indicators are summarized in Table 2.
## 4 Empirical Analysis: Samsung Electronics
### Data
This research analyzes the patents of Samsung Electronics over the last five years. Samsung Electronics has grown rapidly through clear strategic changes: from semiconductor devices to smart phones. Before the emergence of the smart phone market, the R&D portfolio of Samsung Electronics focused heavily on developing semiconductor device related technologies. However, with the emergence of the smart phone market, Samsung Electronics' R&D portfolio shifted sharply toward smart phone related technologies. This research tries to empirically validate the strategic changes of Samsung Electronics through the proposed method.
The publication dates of the patent set range from January 2010 to December 2014. For the granted patents, the application dates range from January 2008 to December 2014; for the patent applications, the application dates range from January 2010 to December 2014. Because granted patents are a source for analyzing past R&D effort for the current business and patent applications are a source for current R&D effort for future business, the two period ranges differ. Since it takes about six years from patent application to commercialization [42] and, on average, 1.5-2.5 years from application to grant [43], considering granted patents for the current strategy and patent applications for the future strategy seems acceptable. In the case of Samsung Electronics, the granted patents include the technologies prior to smart phone technologies, and the applications include the technologies for post-smart phone technologies.
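As a sketch of assembling the two sub-sets of Table 3, the filter below uses pandas; the CSV file and column names are hypothetical placeholders, while the kind codes and date windows follow the table:

```python
import pandas as pd

patents = pd.read_csv('samsung_patents.csv',
                      parse_dates=['application_date', 'publication_date'])

in_pub_window = patents['publication_date'].between('2010-01-01', '2014-12-31')

granted = patents[in_pub_window
                  & patents['kind_code'].isin(['B1', 'B2', 'S1'])
                  & patents['application_date'].between('2008-01-01', '2014-12-31')]

applications = patents[in_pub_window
                       & patents['kind_code'].isin(['A1', 'A9'])
                       & patents['application_date'].between('2010-01-01', '2014-12-31')]
```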
In our patent set (Table 3), the total number of Samsung Electronics' patent applications is 19,877, and the number of granted patents is 18,045. Although this research focuses only on Samsung Electronics' patents, the numbers of co-assigned patents are only 770 among the grants and 1059 among the applications, so patents co-assigned with Samsung Electronics were also included. The total numbers of inventors are 11,975 and 13,348 for the granted and applied patents, respectively. The total numbers of IPCs are 4259 and 4826 for the granted and applied patents, respectively, excluding duplicated IPCs. Based on the patent data, co-inventor networks are generated for the grants and the applications. The grant co-inventor network has 11,975 nodes and 49,291 edges, and the application co-inventor network has 13,348 nodes and 56,693 edges. A node is an inventor, and an edge is a co-inventing relationship between inventors.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Degree Type** & **Organizational Implications** \\ \hline
High degree centrality & Conducts research actively; likely to be involved in many R\&D projects \\ \hline
High betweenness centrality & Likely to conduct R\&D projects with various other researchers; likely to be a leader of an R\&D project or R\&D group \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Degree and betweenness centrality of a co-inventor network.
### Analysis of Granted Patents
This research formulates and analyzes the co-inventor network of granted patents by extracting a node-to-node data set, and the iGraph package is utilized to calculate the values of the SNA indicators. The summary of the co-inventor network analysis of granted patents is shown in Table 4. First, the network density is 0.0002, which is very low. Because Samsung Electronics collaborates across many R&D departments in several business units, such as mobile, home appliances, digital television, System LSI, and memory, involving many inventors, the different business units are not tightly connected. This is a common phenomenon in a large organization. Second, the mean degree is 5.285 with a standard deviation of 4.66, meaning that on average five inventors collaborate on one patent, and generally between two and nine inventors collaborate. Finally, the mean betweenness centrality is 24,950 and its standard deviation is 101,505. The standard deviation is very large relative to the mean, which means that only a few inventors have extremely high betweenness centrality and are highly connected with other inventors.
The top 20 IPC codes in the granted patents of Samsung Electronics are shown in Table 5. The granted patent data show that Samsung Electronics primarily assigned researchers to R&D projects related to semiconductor device technology. Among the top 20 IPC codes, 10 are related to the Memory, System LSI, and LED areas, involving a total of 17,884 researchers. Beyond that, many researchers were allocated to R&D projects related to wireless systems and the cellular phone business. Before releasing a smart phone, Samsung Electronics ran a cellular phone business competing with NOKIA, a cellular phone company later acquired by Microsoft.
The next analysis examines the inventors who have high degree and betweenness centrality. This research measures the betweenness centrality of the inventors and compiles a list of those with the highest values. Then, the current positions of the identified inventors were searched on the web using their names. Because Samsung Electronics' high-level executive positions and recent promotions are open to the public, an inventor can be found easily if he or she has been promoted to a high executive position. The technological domains in which the inventors are involved are identified from the IPC codes of their patent documents; the results are shown in Table 6.
From the analysis, one interesting finding is that almost all inventors with high betweenness centrality currently hold executive positions in Samsung Electronics or are eminent researchers in other organizations. This means that if betweenness centrality is high, the possibility of having R&D leadership is also high. Most inventors with high betweenness centrality were leaders of R&D projects or R&D groups in Samsung Electronics. The technological domains of the inventors are more related to semiconductor devices than to smart phone devices. At that time, the main business of Samsung was semiconductor devices, and semiconductor-related technologies were more important than smart phone-related technologies. However, the technologies related to cellular phones, not for
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & **Density** & **Degree** & **Betweenness Centrality** \\ \hline
**Mean** & 0.0002 & 5.285 & 24,950 \\
**Standard Deviation** & & 4.66 & 101,505 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Summary of the co-inventor network analysis of granted patents.
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline
**Publication Date** & **Type** & **Application Date** & \\begin{tabular}{c} **Number of** \\\\ **Patents** \\\\ \\end{tabular} & \\begin{tabular}{c} **Number of** \\\\ **Inventors** \\\\ \\end{tabular} &
\\begin{tabular}{c} **Number of** \\\\ **IPC Types** \\\\ \\end{tabular} \\\\ \\hline January 2010– & Grant (B1, B2, S1) & January 2008–December 2014 & 18,045 & 11,975 & 4259 \\\\ December 2014 & Application (A1, A9) & January 2010–December 2014 & 19,877 & 13,348 & 4826 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Target data set for the case study of Samsung Electronics’ patents in the last five years.
smart phones, were actively researched, for example, wireless communication and image processing. In the analysis of granted patents, the results from the IPC codes and from the individual inventors are almost the same. At that time, Samsung Electronics concentrated more on semiconductor devices than on smart phone related technologies, and more inventors were allocated to developing semiconductor device related technologies.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Inventor Name** & **Degree** & **Betweenness** & **Current Position** & **Technology Area** \\\\ \\hline Jung-Hwan KIM & 52 & 2,517,903.782 & N/A & N/A (Because of name ambiguity such as inventors having same name, it is hard to distinguish a specific inventor.) \\\\ \\hline Ki-Hyun HWANG & 32 & 1,682,745.153 & Vice President & Memory \\\\ \\hline Sung Tae KIM & 36 & 1,314,883.601 & Vice President & LED \\\\ \\hline Kyoung Lae CHO & 37 & 1,303,646.368 & N/A & Flash Memory \\\\ \\hline Yun-Je OH & 58 & 1,284,607.899 & Vice President & Image Processing \\\\ \\hline Bruno Clerckx ([http://goo.gl/1cw3hz](http://goo.gl/1cw3hz)) & 41 & 1,254,201.867 & \\begin{tabular}{c} Lecturer at Imperial \\\\ College \\\\ \\end{tabular} & Wireless Communication \\\\ \\hline Sung Jin KIM ([http://goo.gl/jSVYIT](http://goo.gl/jSVYIT)) & 37 & 1,250,645.616 & Master & Communication (LTE) \\\\ \\hline Jae-Hee CHO ([http://goo.gl/WnzKZG](http://goo.gl/WnzKZG)) & 53 & 1,115,631.863 &
\\begin{tabular}{c} Professor at Chonbuk \\\\ National University \\\\ \\end{tabular} & Optoelectronics \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Inventors having highest betweenness centrality in granted patents of Samsung Electronics.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**IPC** & **Number of Inventors** & **Technology for IPC** & **Business Area** \\ \hline
H04W4/00 & 4819 & Wireless channel access & Wireless Network Solution \\
G06K9/00 & 2798 & User interface related methods & Smart Phone \\
G11C16/04 & 2736 & Read-only memories & Memory \\
H01L21/00 & 2725 & Manufacture or treatment of semiconductor or solid-state devices & Memory \\
H01L21/336 & 2507 & Manufacture or treatment of semiconductor or solid-state devices & Memory \\
H01L23/48 & 2316 & Manufacture or treatment of semiconductor or solid-state devices & Memory \\
H01L29/66 & 2024 & Manufacture or treatment of semiconductor or solid-state devices & Memory \\
G09G5/00 & 1769 & Circuits for visual indicators & TV Appliance \\
H01L33/00 & 1756 & Semiconductor devices for light emission & LED \\
H01L27/108 & 1691 & A plurality of semiconductors or other solid-state components & Memory \\
G06F15/16 & 1620 & Digital computers in general & Personal Computer \\
H04B7/00 & 1525 & Radio transmission systems & Wireless Network Solution \\
H04N7/12 & 1492 & Television systems & TV Appliance \\
G06F13/00 & 1361 & Interconnection of circuit units & Memory, System LSI \\
H04W36/00 & 1337 & Handoff or reselecting arrangements & Wireless Network Solution \\
H04W72/00 & 1334 & Management of wireless resources or wireless traffic & Wireless Network Solution \\
H01L29/78 & 1322 & Semiconductor devices & Memory \\
H03M13/00 & 1293 & Electronic data decoding & Base technology \\
H04L29/06 & 1217 & Circuits or systems & Base technology \\
H01L21/8258 & 1202 & Semiconductor or solid-state devices & Memory \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Top 20 IPC codes allocated to inventors in granted patents of Samsung Electronics.
### Analysis of Patent applications
In the case of patent applications, a co-inventor network is formulated based on the patents applied for from 2010 to 2014. Owing to the nature of patent applications, the patents are not yet fully open to the public, and some of them are published one or two years later. This can make some values derived from the co-inventor network unstable. For this reason, as shown in Table 7, the mean values of the results are lower than those for the granted patents, and the standard deviations are extremely high. Because the information is incomplete, the summary results are not meaningful for analyzing the R&D capabilities of an organization.
The top 20 IPC codes of the target patent applications are shown in Table 8. In contrast with the granted patents, the major IPC codes shift from semiconductor devices to electronic set products such as smart phones and TV appliances. In particular, many inventors are assigned to user interface related technology. It seems that Samsung Electronics prepared patent portfolios of user interface related technology after the patent war with Apple (see Wikipedia ([http://en.wikipedia.org/wiki/Apple_Inc_v_Samsung_Electronics_Co](http://en.wikipedia.org/wiki/Apple_Inc_v_Samsung_Electronics_Co))).
From the viewpoint of analyzing individual inventors, unlike with the granted patents, it is hard to find information on the Internet about the inventors having high degree and betweenness centrality (Tables 9 and 10). The reason is that the patent applications represent relatively recent research compared to granted
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & **Density** & **Degree** & **Betweenness Centrality** \\ \hline
**Mean** & 0.000149143 & 4.649 & 17,393 \\
**Standard Deviation** & & 12.86 & 8,618,822,950 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: The summary results of the co-inventor network of patent applications.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**IPC** & **Number of Inventors** & **Technology for IPC** & **Business Area** \\ \hline
H04W72/04 & 3352 & Wireless channel access & Wireless Network Solution \\
G06F3/041 & 2796 & User interface related methods & Smart Phone \\
G06F3/0488 & 2326 & User interface related methods & Smart Phone \\
G06F3/01 & 2308 & User interface related methods & Smart Phone \\
G06F3/0481 & 2162 & User interface related methods & Smart Phone \\
G06F3/0484 & 2099 & User interface related methods & Smart Phone \\
H04L29/06 & 2030 & System on chips & System LSI \\
H01L29/78 & 1815 & Semiconductor devices & Memory \\
H04N5/232 & 1802 & Television systems & TV Appliance \\
H01L29/66 & 1626 & Semiconductor devices & Memory \\
G09G5/00 & 1479 & Imaging unit & Smart Phone Imaging \\
G06F17/30 & 1395 & Computing equipment & Set Products \\
G06F3/0482 & 1357 & User interaction with computing equipment & Smart Phone, TV Appliance \\
H04L29/08 & 1353 & System on chips & System LSI \\
G06K9/00 & 1251 & Recognizing patterns (i.e., characters) & Smart Phone \\
H04L5/00 & 1209 & System on chips & System LSI \\
H01L21/02 & 1197 & Solid-state devices & Memory \\
H04N13/00 & 1137 & Television systems & TV Appliance \\
H01F38/14 & 1109 & Electronic devices & Set Products \\
H02J7/00 & 1104 & Batteries & Smart Phone \\ \hline \hline
\end{tabular}
\end{table}
Table 8: Top 20 IPC codes allocated to inventors in patent applications of Samsung Electronics.
patents, and there was not enough time for these inventors to gain a reputation in research communities. Therefore, the inventors might not have been rewarded yet and might not have been exposed in public news. In contrast with the granted patents, the most important characteristic of the patent applications is that the inventors conducting research outside the main business of Samsung Electronics have high degree and betweenness centrality. Examples include optical thin films, battery materials, and carbon materials, which are material-related technologies. Even though many inventors are assigned to user interface related R&D projects, no inventor related to those projects appears among the highest degree or betweenness centrality values. Considering the characteristics of the betweenness centrality value, it can be inferred that the R&D projects related to user interfaces are small in size, although the total number of inventors involved is large. On the other hand, the number of patents related to material technology is relatively small, but the corresponding R&D projects are larger than the user interface related ones. One interesting point is the inventor Jae-young CHOI, who was the vice president of the Samsung Advanced Institute of Technology (SAIT): he has the highest degree in the co-inventor network of patent applications but a relatively low betweenness centrality. This means the R&D projects he joined were not conducted by major project teams but were very active, producing many patents. Considering that the topics of these R&D projects are related to future technologies, such as carbon materials, thermoelectric elements, and inorganic materials, it can be inferred that Samsung Electronics maintains relatively small R&D groups for preparing future technologies.
### Discussion
From the empirical analysis of the Samsung Electronics case, we extracted the following implications:
* In the case of granted patents, the results show a tendency that inventors with high betweenness centrality also have high R&D leadership. Because betweenness centrality is high when an inventor is linked to many other inventors, there is a high possibility that such an inventor is at the center of R&D projects and is the leader of an R&D group. This kind of inventor has
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Inventor Name** & **Degree** & **Betweenness Centrality** & **Technology Area** \\\\ \\hline Jae-young CHOI & 53 & 1,115,306 & \\begin{tabular}{c} Carbon material, thermoelectric \\\\ elements, inorganic material \\\\ \\end{tabular} \\\\ \\hline Chang-soo LEE ([http://goo.gl/OXWsdU](http://goo.gl/OXWsdU)) & 33 & 455,627.5 & Transferring data \\\\ \\hline Hyeong-sik CHO & 33 & 564,297.8 &
\\begin{tabular}{c} Fibre optic devices, structural \\\\ combinations of lighting devices \\\\ \\end{tabular} \\\\ \\hline Boon Loong Ng ([http://goo.gl/se4A7F](http://goo.gl/se4A7F)) & 31 & 115,543.2 & LTE, LTE-Advanced \\\\ \\hline Chi-Woo LIM & 31 & 593,338 & Wireless communication \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 10: The results of a co-inventor network of patent applications (Top five highest degree).
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Inventor Name** & **Degree** & **Betweenness Centrality** & **Technology Area** \\\\ \\hline Sang Ho PARK & 25 & 2,558,598.32 &
\\begin{tabular}{c} Optics Thin Film \\\\ (Laudry Machine) \\\\ \\end{tabular} \\\\ \\hline Hyun-jin KIM ([http://goo.gl/OXWsdU](http://goo.gl/OXWsdU)) & 31 & 2,355,259.44 & Battery \\\\ \\hline Sung-Ho CHOI & 19 & 2,005,281.401 & Wireless Communication \\\\ \\hline Do-Hyun KIM & 13 & 1,944,638.685 & Optics Thin Film \\\\ \\hline Eun-Hui BAE & 12 & 1,768,819.409 & Wireless Communication \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 9: The results of a co-inventor network of patent applications (Top five highest betweenness centrality).
a high potential to be promoted to an executive position in the organization or to move to another R&D department as a core researcher.
* In the case of patent applications, the proposed method helps to identify an organization's R&D effort using hidden information that cannot be extracted from IPC codes alone. Considering the number of inventors assigned to each IPC code, Samsung Electronics allocated many researchers to R&D projects related to user interface technologies to construct a patent portfolio in preparation for the patent suit against Apple. However, it appears that the inventors related to material technology have relatively high degree and betweenness centrality. This means that Samsung Electronics' strategy to build a patent portfolio for user interfaces is simply to file many patents with many small R&D teams; on the other hand, core capabilities related to material technology are prepared with a few large R&D teams for supporting the current business or preparing for emerging business.
* Finally, preparation for future directions is also captured by the proposed method. In the analysis of patent applications, an inventor who has a high degree but relatively low betweenness centrality was identified. He works on emerging technologies related to carbon materials, inorganic materials, and thermoelectric elements. This fact shows that Samsung Electronics has constructed small but active R&D teams for preparing future technology and has crafted various patent portfolios. We consider these to be the strategic directions of Samsung Electronics.
## 5 Conclusions
This research proposes a method to investigate the strategic changes of a company by analyzing the inventor information in patent documents together with technological context information. The proposed method helps to understand the core inventors, the R&D portfolio, and hidden R&D strategies using a patent co-inventor network and SNA techniques. In addition, this research conducted a case study of Samsung Electronics' R&D organization and identified its strategic changes.
The main contribution of this research can be divided into two parts: empirical and methodological. From an empirical perspective, this research can identify a firm's strategic changes, which are difficult to identify by measuring technological capabilities alone; therefore, the method can help in understanding the strategic changes of competitors. Regarding the methodological contribution, this research uses inventor information as an input for analyzing the R&D effort and strategic changes of a firm. Considering that most existing patent analysis methods using inventor information or SNA techniques have focused on knowledge transfer or research collaboration, this research is a meaningful attempt to link patent inventor information to strategic changes.
To improve the proposed method, some issues must be resolved in the future. First, the name ambiguity problem should be solved for more reliable analysis. In patent documents, different people sometimes share the same inventor name, or names are written incorrectly by human error, which can generate inappropriate results. For example, if two inventors having the same name and relatively high betweenness centralities are treated as one inventor, that inventor will be overestimated. To resolve this problem, a method for disambiguating inventor names should be developed using a data mining approach, such as association rules with IPC codes or co-inventor information. Second, the scope of the case study needs to be extended. Even though this research analyzed only the Samsung Electronics case, other IT companies, including Apple, Nokia, Blackberry, and Huawei, should be analyzed with the proposed method to verify its applicability and reliability.
This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(NRF-2015R1C1A1A01056185), and by the Gachon University research fund of 2014 (GCU-2014-0108).
Sungchul Choi and Hyunseok Park designed the framework and experiments, Sungchul Choi performed the experiments, Sungchul Choi and Hyunseok Park wrote the manuscript.
The authors declare no conflict of interest.
## References
* (1) Barney, J. Firm resources and sustained competitive advantage. _J. Manag._ **1991**, _17_, 99-120.
* (2) Andrews, K.R.; David, D.K. _The Concept of Corporate Strategy_; Richard D Irwin: New York, NY, USA, 1987.
* (3) Wernerfelt, B. A resource-based view of the firm. _Strateg. Manag. J._ **1984**, _5_, 171-180.
* (4) Bohn, R.E. Measuring and managing technological knowledge. _Sloan Manag. Rev._ **1994**, _36_, 61.
* (5) Ernst, H. Patent information for strategic technology management. _World Pat. Inf._ **2003**, _25_, 233-242. [CrossRef]
* (6) Del Canto, J.G.; Gonzalez, I.S. A resource-based analysis of the factors determining a firm's R&D activities. _Res. Policy_ **1999**, _28_, 891-905.
* (7) Roussel, P.A.; Saad, K.N.; Erickson, T.J. _Third Generation R&D: Managing the Link to Corporate Strategy_; Harvard Business Press: Brighton, MA, USA, 1991.
* (8) Allen, T.J.; Katz, R. Age, education and the technical ladder. _IEEE Trans. Eng. Manag._ **1992**, _39_, 237-245. [CrossRef]
* (9) Jung, S.; Imm, K.-Y. The patent activities of Korea and Taiwan: A comparative case study of patent statistics. _World Pat. Inf._ **2002**, _24_, 303-311. [CrossRef]
* (10) Schoenecker, T.; Swanson, L. Indicators of firm technological capability: Validity and performance implications. _IEEE Trans. Eng. Manag._ **2002**, _49_, 36-44. [CrossRef]
* (11) Chakrabarti, A.K. Competition in high technology: Analysis of patents of US, Japan, UK, France, West Germany, and Canada. _IEEE Trans. Eng. Manag._ **1991**, _38_, 78-84. [CrossRef]
* (12) Park, H.; Yoon, J.; Kim, K. Identification and evaluation of corporations for merger and acquisition strategies using patent information and text mining. _Scientometrics_ **2013**, _97_, 883-909. [CrossRef]
* (13) Scherer, F.M. The propensity to patent. _Int. J. Ind. Organ._ **1983**, _1_, 107-128. [CrossRef]
* (14) Armstrong, M. _A Handbook of Human Resource Management Practice_; Kogan Page: London, UK, 2001.
* (15) Price, A. _Human Resource Management in a Business Context_; Cengage Learning EMEA: London, UK, 2004.
* (16) Moehrle, M.G.; Walter, L.; Geritz, A.; Muller, S. Patent-based inventor profiles as a basis for human resource decisions in research and development. _R&D Manag._ **2005**, _35_, 513-524.
* (17) Kim, Y.; Cha, J. Career orientations of R&D professionals in Korea. _R&D Manag._ **2000**, _30_, 121-138.
* (18) Hendriks, M.; Voeten, B.; Kroep, L. Human resource allocation in a multi-project R&D environment: Resource capacity allocation and project portfolio planning in practice. _Int. J. Proj. Manag._ **1999**, _17_, 181-188.
* (19) Huemann, M.; Keegan, A.; Turner, J.R. Human resource management in the project-oriented company: A review. _Int. J. Proj. Manag._ **2007**, _25_, 315-323. [CrossRef]
* (20) Acedo, F.J.; Barroso, C.; Casanueva, C.; Galan, J.L. Co-authorship in management and organizational studies: An empirical and network analysis. _J. Manag. Stud._ **2006**, _43_, 957-983. [CrossRef]
* (21) Hamel, G.; Prahalad, C.K. _Strategic Intent_; Harvard Business School Publishing Corporation: Boston, MA, USA, 2005; Volume 83, p. 14.
* (22) Legge, K. What is human resource management? In _Human Resource Management_; Springer: Berlin, Germany, 1995; pp. 62-95.
* (23) Storey, J. _Human Resource Management: A Critical Text_; Cengage Learning EMEA: London, UK, 2007.
* (24) Argyres, N.S.; Silverman, B.S. R&D, organization structure, and the development of corporate technological knowledge. _Strateg. Manag. J._ **2004**, _25_, 929-958.
* (25) Han, Y.-J.; Park, Y. Patent network analysis of inter-industrial knowledge flows: The case of Korea between traditional and emerging industries. _World Pat. Inf._ **2006**, _28_, 235-247. [CrossRef]
* (26) Chao-Chih, H.; Chun-Chieh, W. The use of social network analysis in knowledge diffusion research from patent data. In Proceedings of the International Conference on Advances in Social Network Analysis and Mining (ASONAM '09), Athens, Greece, 20-22 July 2009; pp. 393-398.
* (27) Cantner, U.; Graf, H. The network of innovators in Jena: An application of social network analysis. _Res. Policy_ **2006**, _35_, 463-480. [CrossRef]
* (28) Vidgen, R.; Henneberg, S.; Naude, P. What sort of community is the European Conference on Information Systems? A social network analysis 1993-2005. _Eur. J. Inf. Syst._ **2007**, _16_, 5-19. [CrossRef]
* (29) Lobo, J.; Strumsky, D. Metropolitan patenting, inventor agglomeration and social networks: A tale of two effects. _J. Urban Econ._ **2008**, _63_, 871-884. [CrossRef]
* (30) Sternitzke, C.; Bartkowski, A.; Schramm, R. Visualizing patent statistics by means of social network analysis tools. _World Pat. Inf._ **2008**, _30_, 115-131. [CrossRef]
* (31) Lei, X.-P.; Zhao, Z.-Y.; Zhang, X.; Chen, D.-Z.; Huang, M.-H.; Zheng, J.; Liu, R.-S.; Zhang, J.; Zhao, Y.-H. Technological collaboration patterns in solar cell industry based on patent inventors and assignees analysis. _Scientometrics_**2013**, _96_, 427-441. [CrossRef]
* (32) Chang, S.-B.; Lai, K.-K.; Chang, S.-M. Exploring technology diffusion and classification of business methods: Using the patent citation network. _Technol. Forecast. Soc. Chang._**2009**, _76_, 107-117. [CrossRef]
* (33) Wang, J.-C.; Chiang, C.-H.; Lin, S.-W. Network structure of innovation: Can brokerage or closure predict patent quality? _Scientometrics_**2010**, _84_, 735-748. [CrossRef]
* (34) Abbasi, A.; Altmann, J. On the correlation between research performance and social network analysis measures applied to research collaboration networks. In Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS), Kauai, HI, USA, 4-7 January 2011.
* (35) Csardi, G.; Nepusz, T. The igraph software package for complex network research. _Int. J. Complex Syst._**2006**, _1695_, 1-9.
* (36) R Core Development Team. _R: A Language and Environment for Statistical Computing_, The R Foundation for Statistical Computing: Vienna, Austria, 2011.
* (37) Popp, D.; Juhl, T.; Johnson, D.K. Time in purgatory: Examining the grant lag for us patent applications. _Top. Econ. Anal. Policy_**2004**, \\(4\\), 1-45. [CrossRef]
* (38) Diestel, R. _Graph Theory_; Springer: Berlin, Germany, 2005.
* (39) Carrington, P.J. _Models and Methods in Social Network Analysis_; Cambridge University Press: Cambridge, UK, 2005.
* (40) Freeman, L.C. Centrality in social networks conceptual clarification. _Soc. Netw._**1978**, \\(1\\), 215-239. [CrossRef]
* (41) Freeman, L.C.; Roeder, D.; Mulholland, R.R. Centrality in social networks: II. Experimental results. _Soc. Netw._**1979**, \\(2\\), 119-141. [CrossRef]
* (42) Napolitano, G.; Sirilli, G. The patent system and the exploitation of inventions: Results of a statistical survey conducted in Italy. _Technovation_**1990**, _10_, 5-16. [CrossRef]
* (43) Hall, B.H.; Jaffe, A.B.; Trajtenberg, M. _The NBER Patent Citation Data File: Lessons, Insights and Methodological Tools_; National Bureau of Economic Research: Cambridge, MA, USA, 2001.

The aim of this paper is to propose a method to investigate a firm's strategic changes. Technologies or technological capabilities are a major resource for achieving competitive advantages, so a firm's R&D effort to improve capabilities on specific technologies is aligned with its strategic direction. Therefore, this research analyzes changes in R&D efforts by identifying key R&D personnel using a patent co-inventor network and social network analysis. Based on the characteristics of application and granted patents, the method analyzes current and future R&D efforts and so identifies the strategic changes of a firm. We conducted an empirical analysis using the patents of Samsung Electronics. Our method analyzed the current and future strategies of Samsung Electronics, and the result shows clear strategic changes in their focal technologies and business.
patent co-inventor analysis; technological capability; social network analysis
# Visualization of Spaceborne Synthetic Aperture Radar Doppler Parameter Estimation and Its Application in Topographic Surveying and Mapping
## 1 Introduction
SAR is an advanced sensor with all-weather, day-and-night capability, and because of this unique performance it is now widespread in many fields such as topography, agriculture, mining, and water resources. However, regardless of the field or application, a sufficiently accurate SAR image is needed, and in some fields it must also be interpreted correctly. In the process of SAR imaging, two main factors affect the quality of the SAR image: the Doppler parameters and range migration. In fact, by deriving the range migration formula, it can be seen that range migration is closely related to the Doppler parameters (Z. Bao et al., 2006). In topographic surveying and mapping, both high accuracy and image quality are required (J. X. Zhang et al., 2004).
The structure of this paper is as follows: Section 2 introduces the Doppler parameters, Section 3 introduces range migration, Section 4 shows the visualization approach and results, and finally, Section 5 presents the conclusions and further research.
## 2 Doppler Parameter
The Doppler parameters affect the image quality directly: the image will be defocused if there are errors in the estimation of the Doppler centroid frequency. Moreover, because range migration depends on the Doppler parameters, the accuracy of the estimated Doppler parameters plays an important role in the correction of range migration.
Firstly, let us review the Doppler phenomenon. The frequency received by the radar antenna is not the original frequency \(f_{0}\) transmitted by the radar antenna but includes an additional frequency \(f_{d}\); that is to say, the frequency received by the radar antenna is \(f_{0}+f_{d}\), where \(f_{d}\) is the Doppler frequency offset. Figure 1 shows the relationship between them:
\\(f_{d}\\) close to Eq.(1)(C. Z. Han et al., 2006):
\\[f_{d}\\ =\\ \\frac{2\\,V_{{}_{,}}}{\\lambda}=\\ \\frac{2\\,V}{\\lambda}\\cos\\ \\ \\boldsymbol{\\phi} \\tag{1}\\]
If the \\(V_{{}_{,}}\\) is much less than the velocity of light \\({}^{\\boldsymbol{C}}\\), \\(\\lambda\\) is the signal wave length, and from Eq.(1) knows that: 1) \\(f_{d}=\\)0, if \\(\\boldsymbol{\\phi}=\\) 90 \\({}^{0}\\), it is perpendicular side-looking, that means the instance frequency the radar received just is the frequency the radar system transmitted; 2) the frequency received by radar will be various because of different ground target has different \\(f_{d}\\), if \\(\\boldsymbol{\\phi}\
eq\\) 90 \\({}^{0}\\), that's non-perpendicular side
Figure 1: The geometry between the ground target and the radar.
\\[R_{ij}\\approx R_{0}-\\sin\\ \\theta_{0}V_{I}+\\frac{\\cos^{2}\\theta_{0}}{2R_{0}}(V_{I})^{2} \\tag{6}\\]
So if \\(I_{0}\\) is center of a synthetic time, the instance phase \\(\\Phi(t)\\) of the echo wave signal received by the radar in a synthetic time can be viewed as:
\\[\\Phi(t)=\\frac{4\\pi}{\\lambda}R_{ij}=\\frac{4\\pi}{\\lambda}(R_{0}-\\sin\\ \\theta_{0}V_{I}+\\frac{\\cos^{2}\\theta_{0}}{2R_{0}}(V_{I})^{2}) \\tag{7}\\]
Taking the derivative of \(\Phi(t)\) with respect to time and dividing it by \(2\pi\), we get the instantaneous Doppler frequency:
\\[f_{D}(t)=-\\frac{1}{2\\pi}\\frac{\\partial\\Phi(t)}{\\partial t}=\\frac{2\\sin\\ \\theta_{0}V}{\\lambda}-\\frac{2\\cos^{2}\\theta_{0}V^{2}}{\\lambda R_{0}}t \\tag{8}\\]
At the center of the synthetic aperture time, the Doppler frequency equals the Doppler centroid frequency:
\\[f_{DC}=-\\frac{2}{\\lambda}\\times\\frac{\\partial R_{ij}}{\\partial t}\\Big{|}_{+} =\\frac{2\\sin\\ \\theta_{0}V}{\\lambda} \\tag{9}\\]
Differentiating once more, we get the Doppler frequency rate:

\[f_{DR}=-\frac{2}{\lambda}\,\frac{\partial^{2}R(t)}{\partial t^{2}}=-\frac{2\cos^{2}\theta_{0}V^{2}}{\lambda R_{0}} \tag{10}\]
Substituting (9) and (10) into (8), we get:
\[f_{D}(t)=f_{DC}+f_{DR}\,t \tag{11}\]
Substituting (9) and (10) into (5), we get (Z. Bao et al., 2006; C. Z. Han et al., 2006):
\\[\\Delta R=-\\frac{\\dot{M}_{DC}}{2}t-\\frac{\\dot{M}_{BR}}{4}t^{2}=\\Delta R_{1}+ \\Delta R_{2} \\tag{12}\\]
Formulas (5) through (12) clearly show the close relation between the Doppler parameters and range migration; therefore, accurate Doppler parameter estimation must be guaranteed before correcting the range migration.
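As a quick numeric illustration of Eqs. (9), (10), and (12), the sketch below evaluates the Doppler centroid, the Doppler frequency rate, and the two range migration terms for a hypothetical spaceborne geometry; all parameter values are illustrative assumptions, not taken from this paper:

```python
import numpy as np

# Hypothetical spaceborne SAR geometry (illustrative values only)
V = 7600.0                 # platform velocity, m/s
wavelength = 0.056         # C-band wavelength, m
theta0 = np.deg2rad(0.2)   # small squint away from broadside
R0 = 850e3                 # closest slant range, m

f_dc = 2.0 * np.sin(theta0) * V / wavelength                    # Eq. (9)
f_dr = -2.0 * np.cos(theta0) ** 2 * V ** 2 / (wavelength * R0)  # Eq. (10)

t = np.linspace(-0.5, 0.5, 5)              # slow time within a synthetic aperture, s
dR1 = -wavelength * f_dc / 2.0 * t         # linear range walk term of Eq. (12)
dR2 = -wavelength * f_dr / 4.0 * t ** 2    # quadratic range curvature term of Eq. (12)

print(f"f_DC = {f_dc:.1f} Hz, f_DR = {f_dr:.1f} Hz/s")
print("range walk [m]:", np.round(dR1, 2))
print("range curvature [m]:", np.round(dR2, 2))
```

Even for a small squint angle, the linear range walk dominates, which is why an accurate Doppler centroid estimate is the first requirement for range migration correction.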
## 4 Visualization approach and results
Based on the above discussion, we applied the RD (range-Doppler) algorithm together with visualization of the Doppler parameters and obtained the following results: Figure 4 shows the SAR image without Doppler parameter estimation, and Figure 5 shows the SAR image with the result of Doppler parameter estimation. Finally, these Doppler data were successfully applied in topographic surveying and mapping.
## 5 Conclusions and further research
Through the experiments and analysis, we find that Doppler parameter estimation is a complex process: it involves not only the Doppler parameters themselves but also the closely related range migration, and it plays an important role in topographic surveying and mapping. The results we obtained confirm that visualization of the Doppler parameter estimation is a feasible approach in practice.

In conclusion, Doppler parameter estimation is a difficult task, but it is fundamental to SAR imaging and needs further research.
Figure 4: Without Doppler parameter estimation
Figure 5: With Doppler parameter estimation
## References
* [1]Y. H. Zhang, J. X. Zhang, Z. J. Lin, 2000. The mathematics principle of Synthetic Aperture Radar imaging process. _Remote Sensing Information_,4, pp. 13-15
* [2]J. X. Zhang, M. H. Yang, G. M. Huang, 2004. Prospect of airborne SAR technique in terrain mapping application. _Science of Surveying and Mapping_,28(6), pp. 24-26
* [3]X. K. Yuan, 1994. Synthetic Aperture Radar imaging data process. _Shang Hai Hang Tian_, 3, pp. 2-8.
* [4]Z. Bao, M. D. Xing, T. Wang, 2006. _Radar Imaging Technology_, Publishing House of Electronics Industry, Beijing, P.R China, pp. 124-151
* [5] C. Z. Han et al., 2006. _Synthetic Aperture Radar: Systems and Signal Processing_, Publishing House of Electronics Industry, Beijing, P.R. China, pp. 9-17

Synthetic Aperture Radar (SAR) images are not easy to interpret because of their unique image characteristics, and there is a close relationship between the Doppler parameters and the quality of the SAR image. These factors impact the application of SAR images in topographic surveying and mapping, even though SAR is an active sensor and is little affected by weather and time of day. This paper focuses on the impact on SAR image quality and SAR image interpretation in topographic surveying and mapping applications caused by the low accuracy of the SAR Doppler parameters and by range migration, as well as on a practical way to improve the precision of the SAR Doppler parameters with a visualization approach, so that better Doppler parameters can be utilized in topographic surveying and mapping.
Remote Sensing, SAR, Image, Correction, Geodesy, Application
# Attentive Dual Stream Siamese U-Net for Flood Detection on Multi-Temporal Sentinel-1 Data
## 1 Introduction
Natural disasters cause billions of dollars of economic losses every year, and floods are responsible for a major part of that. Because of floods, millions of people abandon their property; moreover, poor and middle-class people are the most affected. The loss of lives and property due to natural disasters pushes people toward poverty, and it takes them decades to recover. With climate change, developed countries are also at high risk. Flood prediction and evacuation before the event are not yet quick enough and are still improving. In such a scenario, accurate and reliable flood mapping after the disaster can help in rescue missions, re-routing traffic, delivering aid, and many more tasks.
Satellites are a leading technology in gathering quick information on a large scale. Compared to optical data, Synthetic Aperture Radar (SAR) imagery is preferred for flood mapping from space. Unlike optical sensors, SAR has the capability of imaging day and night, irrespective of the weather conditions.
SAR data acquired from various satellites have been explored for water detection and flood mapping. Historically, flood mapping on SAR data has been performed using manual thresholding, fuzzy logic, difference images, filtering, log-ratio, and other techniques [2, 3]. More recently, multiple studies investigated the potential of deep learning algorithms for the flood detection task, mainly using uni-temporal data. For example, in [4], the authors experimented with support vector machines and basic neural networks. In [5] and [6], different flood events are considered and water segmentation is conducted on uni-temporal data using the U-Net architecture. Moreover, previous works investigated DL networks on smaller sites, hence lacking training data and generalization. In 2020, the large-scale flood dataset Sen1Flood11 [1] was released as a free and open benchmark, helping researchers experiment with DL methods on flood detection tasks. This dataset has been explored in several studies [7, 8, 9]. In [7], the authors experimented on optical Sentinel-2 data for domain adaptation and flood segmentation. In [8] and [9], SAR and optical data are fused to segment flooded areas in uni-temporal data.
In this work, flood detection is performed as a change detection task on bi-temporal data. For the experiments, we used the post-flood SAR images from Sen1Flood11 [1], and the pre-flood images were collected separately. In Fig. 1, an example of the input pre- and post-flood images and the reference label map is illustrated.
## 2 Proposed Method
### Data Preparation
The Sen1Flood11 [1] dataset is used for training and evaluation of the model. The dataset consists of 446 non-overlapped Sentinel-1 tiles. The samples are from 11 different flood events. Each sample is a patch of 512x512 pixels with 10-meter ground resolution. A wide variety of geographical areas are covered in the data, making it a good dataset for investigating the model's generalization capability. Each sample is composed of two bands VV(vertical transmit, vertical receive) and VH(vertical transmit, horizontal receive).
he dataset is also associated with Pixel-wise classification ground truth. Each pixel is classified into three categories, 0, 1, and -1. Class 0 represents the absence of water, class 1 represents water, and -1 indicates missing data.
The flood event samples in the Sen1Food11 dataset are post-flood images. We strengthen the dataset by adding pre-flood images considering the Sentinel-1 images acquired with the same SAR geometry. We fetched the geometry and the orbit of the post-flood images, and downloaded all the available Sentinel-1 images over a span of 1 year before the flood event date. These Sentinel-1 images are downloaded using Google Earth Engine's python API [10]. The pixel-wise median of all the past year images is considered as the pre-flood image.
The dataset is divided into training and validation sets as specified in the Sen1Food11 dataset. The VV and VH backscatters of both pre and post-flood images are clipped in range (-23, 0)dB and (-28, -5)dB respectively. At last, all the images are normalized before feeding to the network. Few samples of pre-flood, post-flood images, and the corresponding flood mask are visualized in Figure1.
### Network
In this work, we propose a dual-stream Siamese network for flood detection. The network is shown in Figure2. The architecture of the proposed network is inspired by the encoder-decoder segmentation networks. In such architectures, the encoder encodes the salient features of the input into a smaller representation named feature maps. These maps are then up-sampled and decoded into a segmentation map in multiple steps. The size(width x height) of the segmentation map is equal to the size of the network's input.
In the presented network, two encoders are used to encode pre-flood and post-flood images. The encoders used are inspired by siamese networks hence, share weights. The network takes 3 channel input, the first 2 channels are VV and VH SAR backscatter. The third channel is kept blank(all zeros). Since, We are using pre-trained 3 channel backbone, We are bound to use 3 channel input.
At different levels of the encoder, there are multiple-scale feature maps. At each scale, different level of semantic information is captured. From the two encoders, four feature maps of different scales are extracted. The size of the extracted feature maps are (256x256), (128x128), (64x64), and (32x32).
Since the dataset is acquired in regions with different terrain morphologies and land covers, the VV and VH backscatter behavior is not uniform. Depending on the surface, the significance of VV and VH channel varies. This phenomenon is taken into account by adding a channel-wise attention block to the network. The attention block is also complemented with the spatial attention and to achieve this we used Concurrent Spatial and Channel 'Squeeze & Excitation'(scSE) blocks [11].
The feature maps from the two encoders are now enhanced and weighted channel-wise. These features from the pre-flood and post-flood images are fused using concatenation operation. The feature maps are then fed into the decoder, where the output flood map is generated after applying, a series of convolution, upsampling, padding and normalization operations.
Figure 1: Data Samples. From left to right, pre-flood, post-flood images and Ground truth labels are visualized. In the ground truth blue color indicates water and the background is in white.
### Implementation and Training
The pixel-wise change detection is handled as a binary classification task with the two classes "change" and "no change", which can be interpreted as flood and no flood. The problem of severe imbalance between changed and unchanged pixels is well known in the remote sensing field. To overcome this problem, we used a combination of focal loss and dice loss to train the network. The loss combination used in the network is shown in Equation (1).
\\[Loss=\\alpha*DiceLoss+(1-\\alpha)*FocalLoss \\tag{1}\\]
For better convergence of the model, the learning rate is decayed in steps, from an initial value of 0.001 down to 0.00001. The decay steps are controlled with the "reduce on plateau" method, which decays the rate when the learning curve is stuck on a plateau. We conducted experiments with multiple backbones, and the best results were recorded with the ResNet50 encoder. In the learning process, GeoTIFF images of size 512x512 pixels are used. With the help of augmentation, the data size is increased and geometric robustness is added to the model; the augmentation methods used are horizontal and vertical flips. All the experiments are implemented in Keras, and the network is trained on one Google Colab GPU. The network's training time is 2 hours, and the inference speed is 5 images per second. The code will be made publicly available.
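This schedule maps directly onto Keras' built-in callback. In the sketch below, the initial and minimum rates follow the text, while the decay factor, patience, and epoch budget are assumptions; `model`, `train_ds`, and `val_ds` stand for the compiled network and data pipelines of Sections 2.1-2.2, and `combined_loss` comes from the previous sketch:

```python
import tensorflow as tf

lr_on_plateau = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.1, patience=5, min_lr=1e-5)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=combined_loss(alpha=0.5))
model.fit(train_ds, validation_data=val_ds,
          epochs=100,                  # illustrative training budget
          callbacks=[lr_on_plateau])
```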
## 3 Results and Evaluation
For the quantitative evaluation of the proposed method, we used the intersection over union (IOU) and F1-score metrics. To the best of our knowledge, there are no works on Sen1Flood11 or any other available large-scale SAR dataset that explore deep learning methods on multi-temporal data for flood detection. There are two existing works conducted using uni-temporal post-flood data from the Sen1Flood11 dataset. Our results are compared with these methods, referred to here as 'DL Method 1' and 'DL Method 2'. Both of these methods work on post-flood data, and the detection is performed as a segmentation task. The quantitative performance comparison is shown in Table 1.
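For reproducibility, the two reported metrics can be computed from a binary prediction and the reference mask as in the sketch below; the handling of the -1 label follows the missing-data convention of Section 2.1:

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, gt: np.ndarray):
    """IOU and F1 for the water class, ignoring missing pixels (label -1)."""
    valid = gt != -1
    p = pred[valid].astype(bool)
    g = gt[valid].astype(bool)
    tp = np.sum(p & g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    iou = tp / float(tp + fp + fn)
    f1 = 2.0 * tp / float(2 * tp + fp + fn)
    return iou, f1
```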
From the comparison, we can see that our proposed method on multi-temporal SAR data outperformed the previous benchmark methods, achieving a 6% higher IOU than 'DL Method 1' and a 21% higher F1-score than 'DL Method 2'. A few samples of the flood detection results are visualized in Figure 3 for qualitative analysis. The results prove that when detection is done as a comparison between pre-flood and post-flood acquisitions, additional information is learned by the neural network, improving the overall accuracy. This additional information contributes towards more accurate flood area detection.
## 4 Conclusion
In this work, we propose a dual-stream model that utilizes pre-flood images along with post-flood images to detect flooded areas as a change detection task. From the evaluations,
\\begin{table}
\\begin{tabular}{l|c|c}
**Methods** & **IOU** & **F1-Score** \\ \hline \hline Uni-Temporal DL Method 1 [9] & 0.64 & – \\ Uni-Temporal DL Method 2 (S1) [8] & – & 0.62 \\ Bi-temporal Flood Detection (**ours**) & 0.70 & 0.83 \\ \end{tabular}
\\end{table}
Table 1: Performance comparison with existing methods.
Figure 2: Attentive Dual Stream Siamese Network.
we found that, with the help of pre-flood images, flood areas can be detected more accurately. Also, Sentinel-1 data are freely available to download; hence, utilizing pre-event data adds no cost to the task and improves the flood detection results. In the next step, we will extend our work to semi-supervised and unsupervised multi-temporal methods, as labeled data are often not readily available and are time-consuming to generate. We will aim to better understand the pros and cons in comparison to supervised methods. The overarching goal of our ongoing research is to provide robust and automatic methods for flood emergency mapping.
## References
* [1] D. Amitrano, G. Di Martino, A. Iodice, D. Riccio, and G. Ruello, "Unsupervised rapid flood mapping using sentinel-1 grd sar images," _IEEE Transactions on Geoscience and Remote Sensing_, vol. 56, no. 6, pp. 3290-3299, 2018.
* [3]D. Amitrano, G. Di Mart
Conference on Applications of Computer Vision_, 2021, pp. 111-122.
* [8] Goutam Konapala, Sujay V Kumar, and Shahryar Khalique Ahmad, \"Exploring sentinel-1 and sentinel-2 diversity for flood inundation mapping using deep learning,\" _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 180, pp. 163-173, 2021.
* [9] Yanbing Bai, Wenqi Wu, Zhengxin Yang, Jinze Yu, Bo Zhao, Xing Liu, Hanfang Yang, Erick Mas, and Shunichi Koshimura, \"Enhancement of detecting permanent water and temporary water in flood disasters by fusing sentinel-1 and sentinel-2 imagery using deep learning algorithms: Demonstration of sen1 floods11 benchmark datasets,\" _Remote Sensing_, vol. 13, no. 11, pp. 2220, 2021.
* [10] Noel Gorelick, Matt Hancher, Mike Dixon, Simon Ilyushchenko, David Thau, and Rebecca Moore, \"Google earth engine: Planetary-scale geospatial analysis for everyone,\" _Remote Sensing of Environment_, vol. 202, pp. 18-27, 2017, Big Remotely Sensed Data: tools, applications and experiences.
* [11] Abhijit Guha Roy, Nassir Navab, and Christian Wachinger, "Concurrent spatial and channel 'squeeze & excitation' in fully convolutional networks," in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2018, pp. 421-429.

Due to climate and land-use change, natural disasters such as flooding have been increasing in recent years. Timely and reliable flood detection and mapping can help emergency response and disaster management. In this work, we propose a flood detection network using bi-temporal SAR acquisitions. The proposed segmentation network has an encoder-decoder architecture with two Siamese encoders for pre- and post-flood images. The network's feature maps are fused and enhanced using attention blocks to achieve more accurate detection of the flooded areas. Our proposed network is evaluated on the publicly available Sen1Flood11 [1] benchmark dataset. The network outperformed the existing state-of-the-art (uni-temporal) flood detection method by 6% IOU. The experiments highlight that the combination of bi-temporal SAR data with an effective network architecture achieves more accurate flood detection than uni-temporal methods.
Ritu Yadav, Andrea Nascetti, Yifang Ban. Division of Geoinformatics, KTH Royal Institute of Technology, Sweden.
Flood Detection, bi-temporal, Change Detection, SAR, Siamese, Deep Learning, Encoder-Decoder, Attention.
# Frontier Shepherding: A Bio-Mimetic Multi-robot Framework for Large-Scale Exploration
John Lewis\\({}^{1}\\), Meysam Basiri\\({}^{1}\\), and Pedro U. Lima\\({}^{1}\\)
* This work was supported by project C645727867-0000066 and by ISR/LARSyS Strategic Funding through the FCT projects DOI: 10.54499/UPDIS/S0009/2020 and DOI: 10.54499/LA/P/0083/2020
## I Introduction
Robotic applications involving exploration or surveillance of a large-scale area can be taxing for a single robot. Delegation of such tasks across multiple robots improves overall efficiency. However, multi-robot solutions often require prior planning, training, or optimization techniques to improve coordination, minimize overlapped exploration, and overcome communication constraints. Such planning might be complex in scenarios such as search and rescue, disaster response, and large-scale exploration of unknown, unstructured areas. Quick and robust deployment of multiple robots for such scenarios while ensuring complete coverage is thus vital.
### _Related Work_
Fast and robust autonomous exploration is an essential aspect of outdoor robotics, crucial for achieving complete autonomy in scenarios such as search and rescue [1], disaster response [2], and mapping of large-scale unknown areas [3]. A conventional autonomous exploration task is carried out by defining _frontiers_ [4] as the boundary between the known/mapped and unknown/unmapped areas. Exploration is achieved by pushing this boundary or, mathematically, by minimizing the perimeter or length of the frontier. However, this straightforward solution becomes inadequate and increasingly complex in the presence of obstacles, energy and time constraints, and completeness requirements. Thus, prioritizing viewpoints, i.e., the points at which the frontier can be altered, based on knowledge of the frontier, the planned path, and the perception sensor can optimize the flight time required for exploration [3, 4, 5].
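As a minimal illustration of the frontier definition of [4], the sketch below marks every free cell of a 2-D occupancy grid that is 4-adjacent to unknown space; the cell labels are assumptions:

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1   # assumed occupancy-grid labels

def frontier_mask(grid: np.ndarray) -> np.ndarray:
    """Boolean mask of frontier cells: free cells touching unknown cells."""
    free = grid == FREE
    unknown = grid == UNKNOWN
    near_unknown = np.zeros_like(free)
    near_unknown[1:, :]  |= unknown[:-1, :]   # neighbour above
    near_unknown[:-1, :] |= unknown[1:, :]    # neighbour below
    near_unknown[:, 1:]  |= unknown[:, :-1]   # neighbour to the left
    near_unknown[:, :-1] |= unknown[:, 1:]    # neighbour to the right
    return free & near_unknown
```

Exploration then pushes this boundary, for example by steering the robot toward the nearest or highest-gain frontier cell until the mask is empty.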
Frontier exploration strategies include greedy exploration [6], proximal exploration [7] or exploration based on prior training [8]. Proximal exploration strategies require conditional supervision to prevent the robot from being stuck in local minima. In contrast, greedy exploration strategies can often lead to sub-par time management, especially in highly cluttered environments. Deep learning or reinforcement learning strategies require prior training and substantial data, resulting in slow deployment. Furthermore, reliance on training on extensive data may not capture the complexities of unforeseen environments. Fast deployment is crucial in disaster response and search and rescue scenarios. Markov decision process (MDP) based exploration strategies [9, 10, 11] often guarantee safety but can be computationally intensive.
Increasing the speed of exploration robots may offer benefits in terms of time efficiency but introduces several challenges, including increased collision risk, limitations in perception and sensing, control and stability issues, and the need for faster data processing and communication. An alternative solution is to segregate the overall exploration task across multiple robots. Over the years, multi-agent exploration solutions have evolved from general strategies [12, 13] to tackling specific exploration tasks such as forests [3], caves [14] and indoors [15]. Bartolomei et al. [3] enhance exploration by utilizing a dual mode, namely explorer and collector: in explorer mode, the agents push the frontiers, and in collector mode, the "trails", or leftover unknown pockets, are explored. The dual mode enables a variable velocity, with a higher velocity in collector mode to capture the "trails" faster, thereby guaranteeing both speed and safety.
An uncoordinated multi-agent system can often lead to multiple sweeps of the same area to achieve complete coverage. Thus, communication and coordination are vital to extract the full effectiveness of a multi-robot system, and this requirement grows in complexity with the number of robots. In a communication-constrained scenario, Yunnan Gao et al. [16] proposed a framework where the agents initially coordinate a meeting point to share the map. The agents then explore an area and reconvene at the predetermined meeting point to merge maps and determine the next meeting points and areas to explore. In bandwidth-constrained instances, sparse information can be transmitted over a long-range communication protocol, and complete information transfer can be initiated utilizing a short-range communication protocol. In such scenarios, the method proposed by Lewis et al. [17] enables sharing a minimal point-cloud and GNSS coordinates over long-range communication, while short-range communication is used for sharing a complete map. This minimal information can assist in exploration, as the robots are clued in on the areas possibly explored already. A coordinated exploration utilizing a decentralized approach relying on point-to-point communication [18] ensures that the agents are spread out, thereby minimizing the overlap with areas previously explored by other agents.
Naturally occurring behaviors such as sheep herding [19], ant foraging [20], and fish swarming [21] can be heuristically modeled to incorporate swarm-like behavior. These minimal heuristic inter-agent relations lead to emergent behaviors and can be used for rapid deployment and control of robotic swarms [22]. Additional swarm control can be attained by integrating heuristics into the agent behavior, which acts as a reaction to an external agent. The added change in swarm dynamics can be predatory [19, 23] or leader-like [24, 25], depending on the nature of the task at hand. The ability to control a large swarm of robots by manipulating a few robots is a favorable option as it minimizes the control, communication, and coordination requirements. In the context of exploration, prior works [25, 26, 27] have explored or can be extended to include exploration tasks utilizing swarms.
### _Contributions_
In this article, we present Frontier Shepherding (_FroShe_), a novel bio-mimetic multi-robot framework for large-scale exploration. The proposed exploration framework, _(i)_ utilizes heuristic bio-mimetism to explore frontiers; _(ii)_ achieves frontier prioritization using virtual bio-mimetic agents and behavior; _(iii)_ provides robust performances across varying environments and coverage areas. The proposed modular, online, and decentralized strategy enables robust and scalable exploration suitable for quickly deploying computationally constrained robot(s) for mapping unknown and hazardous terrains.
The article's outline is as follows: Section II details the proposed multi-agent exploration framework. The experiments and results are presented in Section III. Finally, Section IV concludes the findings and pitches possible improvements to the proposed method.
## II Methodology
A team of \\(n_{r}\\) robots, \\(\\mathcal{R}=[R_{1}\\ldots,R_{n_{r}}]\\), is tasked to explore and map an unknown environment of area, \\(\\mathcal{A}\\). The robots are equipped with a perception sensor(eg., Ouster, Depth sensors), with a perception range of \\(L\\), to generate a terrain representation. The proposed methodology is broadly grouped into three stages, as shown in Fig. 1.
### _Frontier Processor_
Each \\(R_{i}\\in\\mathcal{R}\\), with pose \\(\\bar{R}_{i}\\), runs a continuous onboard mapping algorithm [28, 29, 30] to generate and update its own map [31], \\(m_{i}\\). In the absence of a global communication topology, a communication-constrained map merging algorithm [16, 17] is utilized to merge \\(m_{i}\\) with \\(m_{j}\\) (\\(\\forall j\\in[1,n_{r}],i\
eq j\\)). The merged map, \\(\\mathcal{M}\\), is equivalent to the global view the whole team perceives. The set of \\(n_{f}\\) map frontiers, \\(\\mathcal{F}=[f_{1},f_{2}, ,f_{n_{f}}]\\), in \\(\\mathcal{M}\\) is calculated [32, 33]. \\(\\mathcal{F}\\) and \\(\\mathcal{M}\\) are continuously updated, at a rate of \\(r_{f}\\) Hz, throughout the exploration process. The modular framework of the frontier processor stage provides flexibility in choosing a specific method for the various building blocks depending on the communication constraints, robot parameters, and levels of cooperation required. Given the computationally intensive modules of SLAM, communication delays, map merging, and frontier detection, the output of the frontier processor is expected to be slow.
### _Swarm Processor_
The Swarm Processor module heuristically models \(\mathcal{F}\) as a virtual sheep swarm at a rate of \(r_{s}(\gg r_{f})\) Hz. In [19], the behavior of the sheep swarm emerges heuristically from \(5\) forces, as represented in Table I. Of these \(5\) forces, the inertial, erroneous, and inter-agent repulsion forces consistently act upon each sheep, while the clustering and predatory-response forces are triggered when a sheep is within the sphere of influence of a predator. Adapting to this emergent behavior of the sheep swarm, the shepherding dog's behavior is heuristically modeled [19] to control it. The various forces involved in the heuristic model [19] for a swarm of \(n\) sheep, \(\mathcal{S}=[S_{1},\ldots,S_{n}]\), in the presence of \(m\) shepherding dogs, \(\mathcal{P}=[P_{1},\ldots,P_{m}]\), are summarized in Table I. The heuristic predator model either collects or herds the sheep swarm depending upon the compactness of the swarm. \(h\), \(\rho_{a}\), \(c\), and \(\rho_{s}\) are constants that determine the strength of each force that acts upon sheep \(S_{i}\), with pose \(\bar{S}_{i}\) and shepherds' poses \(\bar{P}_{j}\). \(e\) is a small random erroneous number emulating noise in the swarm.
Fig. 1: The FroShe framework.

A primary contribution of FroShe is modeling the frontiers of a map as a sheep swarm and the exploration agents as shepherding dogs. The task of herding sheep \(\mathcal{S}\) with shepherding dogs \(\mathcal{P}\) in [19] is mimicked as the task of exploring frontier \(\mathcal{F}\) with team \(\mathcal{R}\). This helps the robot estimate the frontier behavior when the output of the frontier processor is delayed. To successfully emulate shepherding dogs, the robots must perceive the frontiers as a sheep swarm.
#### Iii-B1 Virtual Sheep Allocator
The virtual sheep allocator converts the set of frontiers, \(\mathcal{F}\), to a set of \(n_{v}(<n_{f})\) virtual sheep, \(\mathcal{V}=[(v_{1},w_{1}),(v_{2},w_{2}),\ldots,(v_{n_{v}},w_{n_{v}})]\), with enumerated (pose, weight) tuples. In this module, \(R_{i}\) initializes a virtual sheep of unit weight for each detected frontier \(f_{k}\in\mathcal{F}\). The swarm processor module uses a resolution, \(f_{res}\), to discretize continuous frontiers and reduce the initialized virtual sheep count. Virtual sheep falling within \(f_{res}\) of each other are merged in terms of weight and moved to the center of mass of the sheep involved. This approach ensures that multiple frontiers within close proximity are weighed according to the exposed unexplored area of the combined frontiers. This also relates to the inter-agent repulsive force [19], as no two virtual sheep can be closer than \(f_{res}\). A simple \(2\)D representation of the resultant map frontiers for a single robot and the allocated virtual sheep is shown in Fig. 1(a) and Fig. 1(b), respectively. A minimal sketch of this merging step is given below.
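The following sketch is our own illustration of the allocator, not the authors' implementation; all function and variable names are hypothetical. It merges unit-weight frontier cells that fall within \(f_{res}\) of each other into weighted virtual sheep placed at the merged center of mass.

```python
# Minimal sketch of the virtual-sheep allocator (Sec. II-B1), under the
# assumptions stated above.
import numpy as np

def allocate_virtual_sheep(frontiers, f_res):
    """frontiers: (N, 2) array of frontier cell positions."""
    sheep = [(p.astype(float), 1.0) for p in frontiers]   # unit weight each
    merged = True
    while merged:
        merged = False
        for i in range(len(sheep)):
            for j in range(i + 1, len(sheep)):
                (pi, wi), (pj, wj) = sheep[i], sheep[j]
                if np.linalg.norm(pi - pj) < f_res:
                    # merge j into i: summed weight, weighted center of mass
                    sheep[i] = ((wi * pi + wj * pj) / (wi + wj), wi + wj)
                    sheep.pop(j)
                    merged = True
                    break
            if merged:
                break
    return sheep   # list of (pose, weight) tuples

if __name__ == "__main__":
    cells = np.array([[0.0, 0.0], [0.3, 0.1], [5.0, 5.0]])
    print(allocate_virtual_sheep(cells, f_res=0.5))   # first two cells merge
```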
#### Iii-B2 Swarm Estimator
The swarm estimator estimates the behavior of \\(\\mathcal{V}\\) as \\(R_{i}\\) moves through the terrain and generates the estimated swarm, \\(\\hat{\\mathcal{V}}\\). This is done by modeling the forces of a sheep swarm with respect to the behavior of a frontier. The analogous forces are presented in Table I and explained below.
Unlike sheep, the frontiers do not have an inertial component, wherein the frontiers move in the previously traveled direction. Thus, we are equating the analogous frontier inertial force to \\(0\\). Erroneous force mimics the minor perturbations in the sheep swarm and is directly translated into our modeling.
The sphere of influence of \(R_{i}\) is the range, \(L\), of the perception sensor. When a virtual sheep is within the sensory range, the clustering and predatory responses of the virtual sheep are activated. The predatory-response force is similar to [19]: the expected response of both sheep and frontiers, when in range of a predator/robot, is to repel away. Along similar lines, we model the frontiers to disperse in the presence of a robot; thus the clustering force carries a negative sign relative to its shepherding counterpart.
This approach to modeling frontiers, \(\mathcal{F}\), with a sheep swarm, \(\mathcal{V}\), allows the subsequent shepherding module to heuristically estimate the next possible frontier by predicting the behavior of the virtual sheep. When a new update of \(\mathcal{F}\) is acquired from the frontier processor, \(\mathcal{V}\) is updated and \(\hat{\mathcal{V}}\) is reset to \(\mathcal{V}\). The output of the swarm estimator, and eventually the swarm processor, is produced at a rate of \(r_{s}\gg r_{f}\) Hz. The increased rate allows the swarm processor to act as a buffer between the frontier and the predator processor modules, ensuring that plausible delays from the frontier processor do not negatively impact the subsequent path planning. A sketch of one estimator step follows.
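A hedged sketch of one estimator update, following the frontier analogues of Table I (zero inertia, a small erroneous term, a dispersive clustering force, and repulsion from robots within sensor range \(L\)); the constants, step size, and names below are placeholders rather than tuned values from the paper.

```python
# Sketch of one swarm-estimator step (Sec. II-B2), assuming the Table I
# frontier analogues and illustrative constants.
import numpy as np

def estimate_step(v_hat, robots, L, c=0.05, rho_f=1.0, dt=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    v_hat = np.asarray(v_hat, float)      # (n, 2) virtual-sheep poses
    robots = np.asarray(robots, float)    # (m, 2) robot poses
    centroid = v_hat.mean(axis=0)
    new = v_hat.copy()
    for i, v in enumerate(v_hat):
        force = rng.normal(scale=1e-3, size=2)            # erroneous force e
        if any(np.linalg.norm(r - v) < L for r in robots):
            # negative clustering force: frontiers disperse, not cluster
            force += -c * (centroid - v)
            # predatory force: pushed away from each robot in range
            for r in robots:
                d = np.linalg.norm(r - v)
                if d < L:
                    force += rho_f * (v - r) / d**2
        new[i] = v + dt * force
    return new
```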
#### Iii-B3 Swarm Batching
\\(\\hat{\\mathcal{V}}\\) can be discontinuous due to occlusions(Fig. 1(b)) and the dispersive nature of the swarm estimator(Fig. 2(b) and Fig. 3(b)). In such scenarios, considering the whole swarm as a single entity can lead to preventable inter-robot collision, chasing similar frontiers, and repeated path planning. In the swarm batching module, \\(\\hat{\\mathcal{V}}\\) is segregated into \\(n_{b}\\) batches using a clustering algorithm [34]. Each cluster is described as a tuple (\\(v_{b}\\),\\(w_{b}\\)), where \\(v_{b}\\) is the center of mass of a cluster of \\(\\hat{\\mathcal{V}}\\), and \\(w_{b}\\) is the corresponding cumulative weight of the cluster. The set of virtual sheep swarm cluster descriptors, \\(\\mathcal{V}_{b}=[(v_{b1},w_{b1}),(v_{b2},w_{b2}), ,(v_{bn_{b}},w_{bn_{b}})]\\), is polled amongst \\(\\mathcal{R}\\).
Each tuple in \\(\\mathcal{V}_{b}\\) uniquely maps to a cluster of sheep in \\(\\hat{\\mathcal{V}}\\) or \\(\\hat{\\mathcal{V}}(\\mathcal{V}_{b}(k))\\) represents a unique sheep cluster, \\(\\forall k\\in[1,n_{b}]\\). Exploring the batch of virtual sheep with the maximum cumulative weight is ideal for maximum exploration gain. We incorporate a distance penalty to ensure each agent prefers a closer swarm batch to prevent unwarranted switching between batches and multiple robots chasing the same frontiers. An allocated batch is removed from the poll and updated across the team, thus enforcing a one-to-one swarm batch to robot mapping. Each robot within proximity calculates the distance to each \\(v_{b}\\) and the corresponding weight. Normalization with respect to the maximum of both distance, \\(d_{max}\\) and weight, \\(w_{max}\\) is carried out to bound the values. Each robot, \\(R_{i}\\) chooses the cluster, \\(\\hat{\\mathcal{V}}_{s}=\\hat{\\mathcal{V}}(\\mathcal{V}_{b}(B_{i}))\\)
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Force** & **on Sheep, \(S_{i}\) [19]** & **on virtual sheep, \(v_{i}\)** \\ \hline
Inertial Force & \(h\hat{S}_{i}(t)\) & \(0\) \\ \hline
Inter-Agent Repulsive Force & \(\rho_{a}\frac{(\bar{S}_{i}-\bar{S}_{j})}{||\bar{S}_{i}-\bar{S}_{j}||}\) & \(f_{res}\) \\ \hline
Erroneous Force & \(e\) & \(e\) \\ \hline
Clustering Force & \(\frac{c}{m}\sum_{j=1,j\neq i}^{n}\bar{S}_{j}\) & \(\frac{-c}{m}\sum_{j=1,j\neq i}^{n}v_{j}\) \\ \hline
Predatory Force & \(\sum_{j=1}^{m}\frac{\rho_{s}}{||\bar{P}_{j}-\bar{S}_{i}||}\) & \(\sum_{j=1}^{n_{r}}\frac{\rho_{f}}{||\bar{R}_{j}-v_{i}||}\) \\ \hline
Predator detection range & \(r_{p}\) & \(L\) \\ \hline
\end{tabular}
\end{table} TABLE I: Modelling frontiers analogous to sheep swarm. The cumulative sum of the forces determines the behavior of \(S_{i}\) and \(v_{i}\).
Fig. 2: Virtual sheep are represented by blobs, with the radius depicting the weight of each virtual sheep. The larger blob sizes at the corners portray multiple virtual sheep merging into one heavier virtual sheep.
where \\(B_{i}\\) maximizes Eq. (1). The distance coefficient \\(\\lambda_{d}\\) and the weight coefficient \\(\\lambda_{m}\\) are tunable parameters that can be tuned according to the robotic team involved. For UAVs, a lower \\(\\lambda_{d}\\) promises a reduced impact on distance in the maximization problem. For UGVs, traversing longer distances can be more taxing and thus increasing \\(\\lambda_{d}\\) will favour nearby heavier batches.
\[B_{i}=\mathop{\rm argmax}_{k\in[1,n_{b}]}\left(\lambda_{m}\frac{w_{bk}}{w_{max}}-\lambda_{d}\frac{||\bar{R}_{i}-v_{bk}||}{d_{max}}\right) \tag{1}\]
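A minimal sketch of this poll, assuming Eq. (1) as reconstructed above; \(\lambda_{d}\), \(\lambda_{m}\), and all function names here are illustrative rather than taken from the authors' code.

```python
# Sketch of the swarm-batching poll (Sec. II-B3, Eq. (1)): robot R_i scores
# every unclaimed batch descriptor (v_b, w_b) by normalized weight minus a
# normalized distance penalty, and claims the argmax.
import numpy as np

def choose_batch(robot_pose, batch_poses, batch_weights, lam_d=0.5, lam_m=0.5):
    d = np.linalg.norm(np.asarray(batch_poses, float) - robot_pose, axis=1)
    w = np.asarray(batch_weights, float)
    score = lam_m * w / w.max() - lam_d * d / d.max()
    return int(np.argmax(score))        # index B_i into the batch poll

B_i = choose_batch(np.array([0.0, 0.0]),
                   [[10.0, 0.0], [3.0, 4.0]],
                   [8.0, 5.0])
print(B_i)   # the nearer, lighter batch wins here with equal coefficients
```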
other words, \\(d_{t}\\) is changed, to prefer the strategy alternate to the current dominant one. When SMA falls below FMA, the recent strategies have been outperforming the expected \\(\\Delta E\\), within the time window. This is used as a flag to use the current \\(d_{t}\\), till FMA falls below SMA. Fig. 5, depicts a single-agent exploration scenario in a \\(6400m^{2}\\) forest-like environment and various time instances at which the change in \\(d_{t}\\) is triggered. The auto-regressive moving average model tracks \\(\\Delta E\\) to tune \\(d_{t}\\) to ensure \\(\\Delta E\\) consistently meets the overall long-term expectation.
## III Results
Section III-A provides an in-depth simulation analysis carried out to compare FroShe with SOTA multi-agent exploration strategies. Details of an implementation of the proposed algorithm in a real-world forest-like scenario are showcased in Section III-B.
### _Simulation Results_
#### Iii-A1 Setup
Simulations are carried out in the MRS UAV system [37] on ROS Noetic, which provides an end-to-end multi-UAV framework complete with collision-aware path planning, realistic sensor integration, and various \(3\)-D mapping algorithms [38, 31]. Additionally, [37] provides realistic simulations whose results transfer to real-world implementation in a multi-UAV system.
The exploration methods are tested on \\(2\\) environments of varying degrees of obstacle clutter as shown in Fig. 6. We define \\(1600\\)m\\({}^{2}\\), \\(3600\\)m\\({}^{2}\\), and \\(6400\\)m\\({}^{2}\\) square patches for exploration in each environment to analyze the effect of varying areas of interest. The experiment is considered a failure if exploration is not completed within a predefined duration of \\(3600\\)s. Results are analyzed across \\(25\\) runs with the drones spawned within \\(5\\)m of each other, in one corner of the area of interest.
For the simulation setup, we simulate the agents as \(\mathrm{f}550\) UAVs equipped with an Ouster (OS1-\(128\)) LIDAR, with the laser range limited to \(10\)m. The maximum horizontal speed, maximum vertical speed, maximum acceleration, and maximum jerk are limited to (4m/s, 2m/s, 2m/s\({}^{2}\), 40m/s\({}^{3}\)) for the grass plane and (1m/s, 1m/s, 1m/s\({}^{2}\), 20m/s\({}^{3}\)) for the forest, to ensure safety. The maximum allowed flight height is limited to \(4\)m and the minimum is bounded at \(1\)m. We utilize octomapping [31] to generate the \(3\)-D map of the environment. The resultant local occupancy grid is globally shared across the other UAVs.
The proposed algorithm is compared against the FAME [3] and Burgard et al. [13] exploration algorithms. FAME provides a dual-mode approach to exploration incorporating a traveling-salesman-based optimization. The algorithm, FAME1, originally developed for a depth camera with a limited field of view, was adapted to use a \(360^{\circ}\) LIDAR. The Burgard et al. approach explores with a utility-value function updated along the path to the target poses. The generated target poses are shared with the MRS UAV framework [37] to generate the trajectory after incorporating collision avoidance. No communication constraints are imposed on the agents.
Footnote 1: https://github.com/VIS4ROB-lab/fast_multi_robot_exploration
#### Iii-A2 Analysis
Fig. 7 shows the time box plots for a varying number of agents (\(1\) to \(3\)). Each subplot depicts a different combination of environment and area, with each bar plot representing FroShe, Burgard et al., greedy exploration, and FAME. FAME's notably inconsistent performance, which deteriorates with respect to what is reported in [3], could be attributed to the switch to the \(360^{\circ}\) LiDAR and to the Gazebo simulator with Software-In-The-Loop (SITL) simulation, accurate sensor and actuator models, a realistic physics simulator, and sensor noise. An ablation study of [3] with respect to FroShe is much needed but is beyond the current article's scope.
As expected and denoted in Fig. 7 and Fig. 8, there is a consistent improvement in total exploration time with the number of agents. However, the decrease in variance with the number of agents is more evident in FroShe, demonstrating that the proposed algorithm consistently covers the area within a given time. This consistency across all areas and environments demonstrates the robustness of the proposed algorithm compared to others.
Fig. 7 shows that, for a single-agent scenario, both Burgard et al. and greedy strategies outperform frontier shepherding. The continuous switching between herding and collecting in a single-agent scenario makes the proposed algorithm more time-consuming, as more distance must be covered for reduced exploration gain. However, as the number of agents increases, the initial delegation of segregated frontier clusters (Sec. II-A) via swarm allocation (Sec. II-B3) improves the efficiency of frontier shepherding. This reduces the expected distance travelled while switching between shepherding modes, thereby improving the time taken. With \(3\) agents, we see an average improvement of \(25\%\) in time taken with respect to the other approaches for all areas across all environments. It is also essential to point out that FAME performs best in the \(1600\)m\({}^{2}\) forest, which could be attributed to the fact that the initial algorithm was tuned to suit a \(2500\)m\({}^{2}\) forest. To check the robustness and flexibility of each algorithm with minimal intervention, no parameters were changed throughout the simulation analysis.

Fig. 6: Simulation Environments. (a) Grass plane: This environment is devoid of obstacles, allowing for straightforward path planning without obstacle avoidance. (b) Forest: This cluttered environment has an average tree density of 0.05 trees/m\({}^{2}\), providing a more challenging scenario for testing exploration strategies.
FroShe shows an evident improvement in time taken over the other strategies as the coverage area increases, with a minimum gain of \(12\%\) in the forest and \(48\%\) in the grass plane for the \(3\)-agent scenario. The rapid decline of overall exploration time with an increasing number of UAVs is portrayed in Fig. 8. FAME was developed to explore forests and capture "trails" quickly, so it was not expected to excel in an obstacle-free course. Fig. 8 therefore compares FroShe against FAME, with both showing a consistent decline in the flight time required to cover the area. The reduced variance in Fig. 8 further showcases the robustness of FroShe.
### _Real World Experiments_
The real-world experiments were carried out with the help of AeroSTREAM Open Remote Laboratory2. We leveraged the seamless integration of the MRS UAV framework [37] onto a real-world scenario in a forest-like environment, pictured in Fig. 8(b). The experiments involved two X500 drones, shown in Fig. 8(a), each equipped with an Ouster OS1 LiDAR. The experiments were conducted in a forest-like environment to generate a scenario that demands obstacle avoidance. Single UAV and dual UAV exploration experiments were conducted, with the UAV(s) tasked to explore a \\(400\\)m\\({}^{2}\\) and \\(600\\)m\\({}^{2}\\) area, with the lidar range clipped at \\(10\\)m. The exploration trend for both experiments is showcased in Fig. 10.
Footnote 2: https://fly4future.com/aerostream-open-remote-laboratory/
## IV Conclusion
In this work, we propose _Frontier Shepherding (FroShe)_, a bio-mimetic multi-robot framework for large-scale exploration. The proposed framework models frontiers as virtual sheep spread across the map. The multi-robot team is then tasked to coordinate, collect, and herd the virtual sheep, consequently exploring the environment. The heuristic approach to exploration ensures fast deployment of robots and can be scaled to more agents with minimal added complexity. The FroShe framework is tested in forest and grass-plane simulations, and a real-world execution is carried out in a forest patch. Results showcase a robust algorithm that is invariant to the size or occlusion of the environment. Furthermore, with the increase in the number of agents, FroShe is shown to outperform other SOTA exploration methods.

Fig. 8: Exploration rates over different numbers of UAVs for FroShe and FAME [3] in a \(6400m^{2}\) forest.

Fig. 7: Time-taken analysis for varying number of agents and varying area-environments for different exploration strategies. The numbers within the bars depict the percentage change with respect to the time taken by FroShe, with an increase or decrease in performance colored accordingly. Due to varying time taken across each subplot, it is to be noted that the time axis (in seconds) is not shared across the subplots.

Fig. 9: Real World Experiments with \(2\) X\(500\) drones. The drones were separated by \(10\)m.
For future work, we aim to implement a heterogeneous team of robots that can utilize FroShe depending on each agent's features. Similar to [3], a dual velocity approach to FroShe might improve its performance, specifically in a heterogeneous team. The current method employs an egocentric exploration rate monitor that can be improved to incorporate the entire map, rather than the individual map. Such an approach can enable individual switching between herding and collecting based on collective performance and can improve collaboration.
## V Acknowledgement
The authors would like to acknowledge all the help extended by the Multi-robot Systems (MRS) group at the Department of Cybernetics, Faculty of Electrical Engineering, of Czech Technical University in Prague. The support extended by the team helped acquire valuable real-world forest results.
## References
* [1] M. Basiri et al. (2021) A multipurpose mobile manipulator for autonomous firefighting and construction of outdoor structures. In: Field Robotics 1, pp. 102-126.
* [2] P. Ghassemi and S. Chowdhury (2022) Multi-robot task allocation in disaster response: addressing dynamic tasks with deadlines and robots with range and payload constraints. In: Robotics and Autonomous Systems, p. 103905.
* [3] L. Bartolomei, L. Teixeira, and M. Chli (2023) Fast multi-UAV decentralized exploration of forests. In: IEEE Robotics and Automation Letters.
* [4] A. Bircher et al. (2016) Receding horizon "next-best-view" planner for 3D exploration. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1462-1468.
* [5] J. Hu et al. (2020) Voronoi-based multi-robot autonomous exploration in unknown environments via deep reinforcement learning. In: IEEE Transactions on Vehicular Technology, pp. 14413-14423.
* [6] M. Budd et al. (2020) Markov decision processes with unknown state feature values for safe exploration using Gaussian processes. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7344-7350.
* [7] M. Budd et al. (2021) Bayesian reinforcement learning for single-episode missions in partially unknown environments. In: Conference on Robot Learning, pp. 1189-1198.
* [8] M. Budd et al. (2020) Markov decision processes with unknown state feature values for safe exploration using Gaussian processes. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7344-7350.
* [9] M. Budd et al. (2020) Markov decision processes with unknown state feature values for safe exploration using Gaussian processes. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7344-7350.
* [10] W. Burgard et al. (2005) Coordinated multi-robot exploration. In: IEEE Transactions on Robotics, pp. 376-386.
* [11] W. Burgard et al. (2005) Multi-robot task allocation in disaster response: addressing dynamic tasks with deadlines and robots with range and payload constraints. In: Robotics and Autonomous Systems, p. 103905.
* [12] W. Burgard et al. (2000) Collaborative multi-robot exploration. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), Vol. 1, pp. 476-481.
* [13] W. Burgard et al. (2005) Coordinated multi-robot exploration. In: IEEE Transactions on Robotics, pp. 376-386.
* [14] P. Petracek et al. (2021) Large-scale exploration of cave environments by unmanned aerial vehicles. In: IEEE Robotics and Automation Letters, pp. 7596-7603.
* [17] John Lewis, Pedro U Lima, and Meysam Basiri. \"Collaborative 3D Scene Reconstruction in Large Outdoor Environments Using a Fleet of Mobile Ground Robots\". In: _Sensors_ 23.1 (2022), p. 375.
* [18] Sean Bone et al. \"Decentralised Multi-Robot Exploration using Monte Carlo Tree Search\". In: _2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE. 2023, pp. 7354-7361.
* [19] Daniel Strombom et al. \"Solving the shepherding problem: heuristics for herding autonomous, interacting agents\". In: _Journal of the royal society interface_ 11.100 (2014), p. 20140719.
* [20] Marco Dorigo, Mauro Birattari, and Thomas Stutzle. \"Ant colony optimization\". In: _IEEE computational intelligence magazine_ 1.4 (2006), pp. 28-39.
* [21] Iain D Couzin et al. \"Collective memory and spatial sorting in animal groups\". In: _Journal of theoretical biology_ 218.1 (2002), pp. 1-11.
* [22] William M Spears et al. "An overview of physicomimetics". In: _Swarm Robotics: SAB 2004 International Workshop, Santa Monica, CA, USA, July 17, 2004, Revised Selected Papers 1_. Springer. 2005, pp. 84-97.
* [23] Kelly J Benoit-Bird and Whitlow WL Au. \"Cooperative prey herding by the pelagic dolphin, Stenella longirostris\". In: _The Journal of the Acoustical Society of America_ 125.1 (2009), pp. 125-137.
* [24] M Dorigo et al. \"Influence of Leaders and Predators on Steering a Large-Scale Robot Swarm\". In: _Swarm Intelligence: 11th International Conference, ANTS 2018, Rome, Italy, October 29-31, 2018, Proceedings_. Vol. 11172. Springer. 2018, p. 429.
* [25] Raghav Goel et al. \"Leader and predator based swarm steering for multiple tasks\". In: _2019 ieee international conference on systems, man and cybernetics (smc)_. IEEE. 2019, pp. 3791-3798.
* [26] Alireza Dirafzoon and Edgar Lobaton. \"Topological mapping of unknown environments using an unlocalized robotic swarm\". In: _2013 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE. 2013, pp. 5545-5551.
* [27] Jixuan Zhi and Jyh-Ming Lien. \"Learning to herd agents amongst obstacles: Training robust shepherding behaviors using deep reinforcement learning\". In: _IEEE Robotics and Automation Letters_ 6.2 (2021), pp. 4163-4168.
* [28] Andrzej Reinke et al. \"Locus 2.0: Robust and computationally efficient lidar odometry for real-time 3d mapping\". In: _IEEE Robotics and Automation Letters_ 7.4 (2022), pp. 9043-9050.
* [29] Matteo Palieri et al. \"Locus: A multi-sensor lidar-centric solution for high-precision odometry and 3d mapping in real-time\". In: _IEEE Robotics and Automation Letters_ 6.2 (2020), pp. 421-428.
* [30] Tixiao Shan et al. \"Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping\". In: _2020 IEEE/RSJ international conference on intelligent robots and systems (IROS)_. IEEE. 2020, pp. 5135-5142.
* [31] Armin Hornung et al. \"OctoMap: An efficient probabilistic 3D mapping framework based on octrees\". In: _Autonomous robots_ 34 (2013), pp. 189-206.
* [32] Matan Keidar and Gal A Kaminka. \"Efficient frontier detection for robot exploration\". In: _The International Journal of Robotics Research_ 33.2 (2014), pp. 215-236.
* [33] PGCN Senarathne et al. \"Efficient frontier detection and management for robot exploration\". In: _2013 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems_. IEEE. 2013, pp. 114-119.
* [34] Trupti M Kodinariya, Prashant R Makwana, et al. \"Review on determining number of Cluster in K-Means Clustering\". In: _International Journal_ 1.6 (2013), pp. 90-95.
* [35] Garima Jain and Bhawna Mallick. \"A study of time series models ARIMA and ETS\". In: _Available at SSRN 2898968_ (2017).
* [36] Adebiyi A Ariyo, Adewumi O Adewumi, and Charles K Ayo. \"Stock price prediction using the ARIMA model\". In: _2014 UKSim-AMSS 16th international conference on computer modelling and simulation_. IEEE. 2014, pp. 106-112.
* [37] Tomas Baca et al. \"The MRS UAV system: Pushing the frontiers of reproducible research, real-world deployment, and education with autonomous unmanned aerial vehicles\". In: _Journal of Intelligent & Robotic Systems_ 102.1 (2021), p. 26.
* [38] Ji Zhang and Sanjiv Singh. \"LOAM: Lidar Odometry and Mapping in Real-time.\" In: _Robotics: Science and Systems_. Berkeley, CA. 2014, pp. 1-9. | Efficient exploration of large-scale environments remains a critical challenge in robotics, with applications ranging from environmental monitoring to search and rescue operations. This article proposes a bio-mimetic multi-robot framework, _Frontier Shepherding (FroShe)_, for large-scale exploration. The presented bio-inspired framework heuristically models frontier exploration similar to the shepherding behavior of herding dogs. This is achieved by modeling frontiers as a sheep swarm reacting to robots modeled as shepherding dogs. The framework is robust across varying environment sizes and obstacle densities and can be easily deployed across multiple agents. Simulation results showcase that the proposed method consistently performed irrespective of the simulated environment's varying sizes and obstacle densities. With the increase in the number of agents, the proposed method outperforms other state-of-the-art exploration methods, with an average improvement of \\(20\\%\\) with the next-best approach(for \\(3\\) UAVs). The proposed technique was implemented and tested in a single and dual drone scenario in a real-world forest-like environment. | Give a concise overview of the text below. | 213 |
arxiv-format/0010068v1.md | # The no-sticking effect in ultra-cold collisions
Areez Mody, Michael Haggerty and Eric J. Heller
Department of Physics, Harvard University, Cambridge, MA 02138
August 2000
## I Introduction
The problem of low energy sticking to surfaces has attracted much attention over the years [1, 2, 3, 4, 5]. The controversial question has been the ultralow energy limit of the incoming species, for either warm or cold surfaces. A battle has ensued between two countervailing effects, which we will call classical sticking and quantum reflection. The concept of quantum reflection is intimately tied into threshold laws, and was recognized in the 1930's by Lennard-Jones [1]. Essentially, flux is reflected from a purely attractive potential with a probability which goes as \\(1-\\alpha\\sqrt{\\epsilon}\\), as \\(\\epsilon\\to 0\\), where \\(\\alpha\\) is a constant and \\(\\epsilon\\) is the translational energy of the particle incident on the surface. Classically the transmission probability is unity. Reflection at long range prevents inelastic processes from occurring, but if the incoming particle should penetrate into the strongly attractive region, the ensuing acceleration and hard collision with the repulsive short range part of the potential leads to a high probability of inelastic processes and sticking.
The blame for the quantum reflection can be laid at the feet of the WKB approximation, which breaks down in the long range attractive part of the potential at low energy. Very far out, the WKB is good even for low energy, because the potential is so nearly flat. Close in, the kinetic energy is high, because of the attractive potential, even if the asymptotic energy is very low, and again WKB is accurate. But in between there is a breakdown, which has been recognized and exploited by several groups [6, 7, 8, 9, 10, 11]. We show in the paper following this one that the breakdown occurs in a region around \(|V|\approx\epsilon\); i.e. approximately where the kinetic and potential energies are equal.
It would seem that quantum reflection would settle the issues of sticking, since if the particle doesn't make it in close to the surface there is no sticking. (Fig 1) There is one caveat, however, which must be considered: quantum reflection can be defeated by the existence of a resonance in the internal region, i.e. a threshold resonance. (Fig 2)
The situation is very analogous to a high Q Fabry-Perot cavity, where using nearly 100% reflective, parallel mirrors gives near 100% reflection except at very specific wavelengths.
At these specific energies a resonance buildup occurs in the interior of the cavity, permitting near 100% transmission. Such resonances are rare in a one dimensional world, but the huge number of degrees of freedom in a macroscopic solid makes resonances ubiquitous. Indeed, the act of colliding with the surface, creating a phonon, and dropping into a local bound state of the attractive potential describes a Feshbach resonance. Thus, the resonances are just the sticking we are investigating, and we must not treat them lightly! Perhaps it is not obvious after all whether sticking occurs.
After the considerable burst of activity surrounding the sticking issue on the surface of liquid Helium [12, 13], and after a very well executed theoretical study by Clougherty and Kohn [4], the controversy has settled down, and the common wisdom has grown that sticking does not occur at sufficiently low energy. While we agree with this conclusion, we believe the theoretical foundation for it is not complete, nor stated in a wide enough domain of physical situations. For example, Ref. [4] treats only a harmonic slab with one or two phonon excitation. It is not clear whether the results apply to a warm surface. On the experimental side, even though quantum reflection was observed from a liquid Helium surface, that surface has a very low density of available states (essentially only the ripplons) which could be a special case with respect to sticking. Thus, the need for more rigorous and clear proof of non-sticking in general circumstances is evident. This paper gives such an analysis. In a following paper, application is made to specific atom-surface and slab combinations, and the rollover to the sticking regime as energy is increased (which can be treated essentially analytically) is given.
The strategy we use puts a very general and exact scattering formalism to work, providing a template into which to insert the properties of our target and scatterer. Then very general results emerge, such as the non-sticking theorem at zero energy. The usual procedure of defining model potentials and considering one phonon processes etc. is not necessary. All such model potentials and Hamiltonians wind up as parameters in the R-matrix formalism. The details of a particular potential are of course important for quantitative results, but the range of possible results can be much more easily examined by inserting various parameters into the R-matrix formalism. All the possible choices of R-matrix parameters give the correct threshold laws. Certain trends are built into the R-matrix formalism which are essentially independent of the details of the potentials.
Before commencing with the R matrix treatment, we briefly consider the problem perturbatively in order to better elucidate the role played by quantum reflection. We emphasize that none of the perturbation section is actually necessary for our final conclusions.
In a perturbative treatment for our slab geometry, quantum reflection simply results in the entrance channel's wave function (at threshold) having its amplitude in the interaction region go to zero as \(k_{e}\sim\sqrt{\epsilon}\) when normalized to have a fixed incoming flux. (\(k_{e}\) is the magnitude \(|\vec{k}_{e}|\) of the incident wavevector of the incoming atom.) The inelastic transition probabilities are proportional to the potential-weighted overlap of the channel wavefunctions, and this immediately leads to the conclusion that the inelastic probability itself vanishes as \(k_{e}\sim\sqrt{\epsilon}\). As mentioned, this conclusion is shown to rigorously remain true using the R matrix. We show in this paper that in spite of the inherently many-body nature of the problem, in the ultra-cold limit we can correctly obtain the long-range form of the entrance channel's wavefunction by solving for the one-dimensional motion in the long-range surface-atom attraction (i.e. the diagonal element of the many-channel potential matrix). This allows quantitative predictions of the sticking probability, which we do in the following paper. There, we further exploit the perturbative point of view together with an analysis of WKB to predict a 'post-threshold' behavior as quantum reflection abates, when the incoming energy is increased.
## II Geometry and notation
Figure 1: The stationary state one-body wavefunction of the incident atom moving in the \(y\)-independent mean potential felt by it. The amplitude inside the interaction region is suppressed by \(k_{e}\sim\sqrt{\epsilon}\). This is tantamount to the reflection of the atom.

Figure 2: A schematic view of a Feshbach resonance wherein the incident atom forms a long-lived quasi-bound state with the target. The many-body wavefunction in this situation (not shown) has a large amplitude in the 'interior' region near the slab.

The incident atom is treated as a point particle at position \((x,y)\). To keep the notation simple we leave out the \(z\)-coordinate and confine our discussion to two spatial dimensions. Thus a cross-section will have dimensions of length etc. It will be quite obvious how and where \(z\) may be inserted in all that follows. Let \(u\) represent all the bound degrees of freedom of the scattering target, which we take to be a slab of crystalline or amorphous material. Let \(\Omega_{c}(u)\), \(c=1,2,\cdots\), be the many-body target wave functions in the absence of interactions with the incident particle, having energy \(E_{c}^{\rm target}\). These are normalized as \(\int_{\rm all\ u}du\;|\Omega_{c}(u)|^{2}=1\). \(x\) is the distance of the scatterer (atom) from the face of the slab, which is approximately (because the wall is rough) along the line \(x=0\). The internal constituents of the slab lie to the left of \(x=0\) and the scatterer is incident from the right with kinetic energy \(\epsilon=\hbar^{2}k_{e}^{2}/2m\). The total energy \(E\) of the system is
\\[E=\\epsilon+E_{e}^{\\rm target} \\tag{1}\\]
where \\(c=e\\) is the index of the 'entrance channel' i.e. the initial internal state of the slab before the collision is \\(\\Omega_{e}(u)\\). Notice that we say nothing about the value of \\(E_{e}^{\\rm target}\\) itself. In particular the slab need not be cold. \\(k_{c}\\) is the magnitude of the wave vector \\(\\vec{k}_{c}\\) of the particle when it leaves the target in the state \\(\\Omega_{c}(u)\\) after the collision. Our interest focusses on \\(k_{e}\\to 0\\). \\(k_{e}\\) is the magnitude of the wavevector of the incoming particle. For the open channels \\(c=1,\\cdots n\\) (this defines n) for which \\(E>E_{c}^{\\rm target}\\)
\\[k_{c}\\equiv\\sqrt{\\frac{2m(E-E_{c}^{\\rm target})}{\\hbar^{2}}}\\qquad(c\\leq n)\\;; \\tag{2}\\]
whereas for the closed channels (\\(c>n\\)), \\(E<E_{c}^{\\rm target}\\) and
\\[k_{c}\\equiv i\\sqrt{\\frac{2m(E_{c}^{\\rm target}-E)}{\\hbar^{2}}}\\equiv i\\kappa_ {c}\\qquad(c>n)\\;. \\tag{3}\\]
\\(\\kappa_{c}>0\\). We will use \\((k_{cx},k_{cy})\\) as the \\(x,y\\) components of \\(\\vec{k}_{c}\\). Let \\(U_{\\rm int}(x,y,u)=(2m/\\hbar^{2})V_{\\rm int}(x,y,u)\\), where \\(V_{\\rm int}(x,y,u)\\) describes quite generally the interaction potential between the incident atom and all the internal degrees of freedom of the slab. For simplicity we assume for the moment that there is no interaction between slab and atom for \\(x>a\\).
## III Preliminaries: Perturbation
As stated above, we exercise the perturbative treatment for insight only; our final conclusions are based on nonperturbative arguments.
We treat the interaction \\(U_{\\rm int}(x,y,u)\\) between slab and atom by separating out a'mean' potential felt by the atom that is independent of \\(y\\) and \\(u\\); call it \\(U^{(0)}(x)\\). The remainder \\(U^{(1)}(x,y,u)\\equiv U_{\\rm int}(x,y,u)-U^{(0)}(x)\\) is treated as a perturbation.
Now the incident beam is scattered by the entire length (say from \(y=-L\) to \(y=L\), i.e. a length \(2L\)) of wall which it illuminates. If all measurements are made close to the wall, so that its length \(2L\) is the largest scale in the problem, then it is appropriate to speak of a cross-section per unit length of wall, a dimensionless probability. More specifically, we will assume that the matrix elements \(U^{(1)}_{cc^{\prime}}(x,y)\equiv\int\limits_{\rm all\ u}du\,\Omega_{c}^{*}(u)U^{(1)}(x,y,u)\Omega_{c^{\prime}}(u)\) of the perturbation \(U^{(1)}(x,y,u)\) in the \(\Omega_{c}(u)\) basis are given by the simple form \(U^{(1)}_{cc^{\prime}}(x,y)=U^{(1)}_{cc^{\prime}}(x)f(y)\) for \(y\in[-L,L]\) and \(0\) elsewhere. \(f(y)\) is a random persistent (does not die to \(0\) as \(|y|\to\infty\)) function that models the random roughness of the slab and is characterized by its so-called spectral density function \(S\), a smooth positive-valued non-random function, such that
\\[\\left|\\int\\limits_{-L}^{L}dy\\,e^{iky}f(y)\\right|^{2}\\equiv 2LS(k)\\quad\\forall k \\tag{4}\\]as \\(L\\to\\infty\\).
Now, applying either time-independent perturbation (equivalently the Born approximation for this geometry) or time-dependent perturbation theory via the Golden Rule, gives that the cross-section per unit length of wall for inelastic scattering to a final channel \\(c\\) is
\\[P_{c\\gets e}^{\\rm in}(\\theta)=\\frac{2\\pi}{k_{e}}\\left(\\int\\limits_{-\\infty }^{a}dx^{\\prime}\\phi(x^{\\prime};k_{ex})U_{ce}^{(1)}(x^{\\prime})\\phi(x^{\\prime} ;k_{ex})\\right)^{2}S(k_{cy}-k_{ey}) \\tag{5}\\]
where \\(\\phi(x;k_{x})\\) is the solution of the o.d.e.
\\[\\left(\\frac{d^{2}}{dx^{2}}-U^{(0)}(x)+k_{x}^{2}\\right)\\phi(x;k_{x})=0 \\tag{6}\\]
which is regular or goes to zero as \\(x\\to-\\infty\\) inside the slab and is normalized as
\\[\\phi(x;k_{x})\\sim\\sin(k_{x}x+\\delta)\\quad{\\rm as}x\\to\\infty \\tag{7}\\]
Accepting for the moment that as \\(k_{e}\\to 0\\) the amplitude of \\(\\phi(x;k_{ex})\\) in the internal region \\(x<a\\) goes to zero as \\(k_{e}\\sim\\sqrt{\\epsilon}\\), then the square of the overlap integral in Eq. (5) behaves as \\(k_{e}^{2}\\), because by our proposition the amplitude of \\(\\phi(x^{\\prime};k_{ex})\\sim k_{ex}\\sim k_{e}\\). Together with the \\(1/k_{e}\\) prefactor we get an overall behavior of \\(k_{e}\\) for the inelastic probability as claimed.
To show that indeed as \(k_{e}\to 0\) the amplitude of \(\phi(x;k_{ex})\) in the internal region \(x<a\) goes to zero as \(k_{e}\sim\sqrt{\epsilon}\), we temporarily disregard the required normalization of \(\phi(x;k_{x})\) of Eq. (7) and fix its initial conditions (slope and value) at some point inside the interaction region \(x<a\) such that the regularity condition is ensured. We then integrate out to \(x=a\). Let us denote this unnormalized solution with a prime, as \(\phi^{\prime}(x;k_{x})\). The point is that for \(k_{x}\) varying near 0, both \(v\) (the value) and \(s\) (the slope) with which the solution emerges at \(x=a\) are independent of \(k_{x}\), and in fact the interior solution thus obtained is itself independent of \(k_{x}\). This is because the local wave vector \(k(x)=\sqrt{k_{x}^{2}-U^{(0)}(x)}\) essentially stays the same function of \(x\) for all \(\epsilon\) near 0. Therefore for \(x>a\), \(\phi(x;k_{x})\) continues onto
\\[v\\cos[k_{x}(x-a)]+\\frac{s}{k_{x}}\\sin[k_{x}(x-a)]\\quad x>a \\tag{8}\\]
This is a phase-shifted sine wave of amplitude \\(\\sim 1/k_{x}\\). We must enforce the normalization of Eq. (7) and get \\(\\phi(x;k_{x})\\sim k_{x}\\phi^{\\prime}(x;k_{x})\\). As a result, the interior solution gets multiplied by \\(k_{x}\\) and we thereby have our result. \\(\\phi(x;k_{x})\\) is the solution of a one-dimensional Schrodinger equation for the incoming particle in the one-dimensional long-range potential created by the slab. The suppression of its amplitude by \\(\\sqrt{\\epsilon}\\) near the slab is due to the reflection it suffers where the interaction turns on. Within the perturbative set-up the non-sticking conclusion is then already foregone [1].
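The scaling argument above is easy to verify numerically. The sketch below, with a toy square well standing in for the true surface attraction, integrates Eq. (6) outward, normalizes the exterior wave as in Eqs. (7)-(8), and shows the interior amplitude falling off linearly in \(k_{x}\).

```python
# Sketch: interior amplitude of the normalized channel wavefunction ~ k_x.
import numpy as np

def interior_amplitude_ratio(kx, a=1.0, U0=-200.0, dx=1e-4):
    # integrate phi'' = (U0 - kx^2) phi across a toy well on [0, a]
    phi, dphi = 0.0, 1.0          # fixed interior initial conditions
    amp_in = 0.0
    for _ in range(int(a / dx)):
        ddphi = (U0 - kx**2) * phi
        phi, dphi = phi + dx * dphi, dphi + dx * ddphi
        amp_in = max(amp_in, abs(phi))
    v, s = phi, dphi               # value and slope at x = a
    amp_out = np.hypot(v, s / kx)  # amplitude of the exterior wave, Eq. (8)
    return amp_in / amp_out        # interior amplitude after normalization

for kx in (0.1, 0.01, 0.001):
    print(kx, interior_amplitude_ratio(kx))   # ratio scales roughly like kx
```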
The problem is whether we can really accept this verdict of the one-dimensional unperturbed solution, when in fact we know that the turning on of the perturbation (many-body interactions) causes a multitude of resonances to be created, internal resonances being exactly the situation in which the Proposition above is known to badly fail. It appears that the perturbation is in no sense a small physical effect. Therefore a nonperturbative approach is needed. Here we use R-matrix theory in its general form to accomplish the task.
## IV S-matrix and R-matrix
One point that the preceding section has made clear is that it is the energies (both initial and final) in the \(x\)-direction, perpendicular to the slab, that are most relevant. In fact, as regards the final form of our answers, the motion of the \(y\) degree of freedom may as well have been the motion of another internal degree of freedom of the slab. In other words, mathematically speaking, the \(y\) degree of freedom may be subsumed by incorporating it as just another \(u\). For example, we may imagine the incident atom being confined in the \(y\)-direction by the walls of a wave-guide at \(y=-L\) and \(y=L\) that is large enough so that it could not possibly change the physics of sticking. Then we quite rigorously have a bound internal state of the form
\\[\\Omega_{c,n}(y,u)=\\Omega_{c}(u)\\sin\\frac{n\\pi y}{L} \\tag{9}\\]
\\(x\\) is now the only scattering degree of freedom. There will be no necessity in carrying along the extra index \\(n\\) and variable \\(y\\) as in Eq. (9), and we will simply continue to write \\(\\Omega_{c}(u)\\) instead. Thus with this understanding, the problem is essentially one-dimensional in the scattering degree of freedom.
We proceed to derive the expression for the \\({\\bf S}\\) matrix in terms of the so-called \\({\\bf R}\\) matrix, and derive the structure of the \\({\\bf R}\\) matrix. For simplicity we continue to assume for the moment that there is no interaction for \\(x>a\\). Then for \\(x>a\\), the scattering wavefunction of the interacting system corresponding to the scattering particle coming in on one entrance channel, say \\(c=e\\), with energy \\(\\epsilon=\\hbar^{2}k_{e}^{2}/(2m)\\) is
\\[\\psi(x,u)=\\sum_{c=1}^{\\infty}\\left(\\frac{e^{-ik_{e}x}}{\\sqrt{k_{e}}}\\delta_{ ce}-\\frac{e^{ik_{c}x}}{\\sqrt{k_{c}}}S_{ce}\\right)\\Omega_{c}(u)\\qquad x>a \\tag{10}\\]
where the sum must include all channels, even though the open channels are finite in number. The factors of \\(k_{c}^{-1/2}\\) in Eq. (10) mean that the flux in each channel is proportional only to the square of the coefficient and hence ensure the unitarity of \\({\\bf S}\\). With this convention, the open-open part of the \\({\\bf S}\\)-matrix--the \\(n\\times n\\) submatrix \\(S_{cc^{\\prime}}\\) with \\(c,c^{\\prime}=1,2,\\ldots,n\\)--is unitary. \\(\\sqrt{k_{c}}\\equiv e^{i\\pi/4}\\sqrt{\\kappa_{c}}\\) may be arbitrarily chosen since it cannot affect the open-open part of \\({\\bf S}\\).
\\({\\bf S}\\) is found in analogy to the one-dimensional case by introducing the matrix version of the inverse logarithmic derivative at \\(x=a\\) called \\({\\bf R}(E)\\) the Wigner \\({\\bf R}\\)-matrix defined by
\\[\\vec{v}={\\bf R}(E)\\ \\vec{s} \\tag{11}\\]
where the components of \\(\\vec{v}\\) and \\(\\vec{s}\\) are the expansion coefficients of \\(\\psi(x=a,u)\\) and \\(\\frac{\\partial\\psi(x=a,u)}{\\partial x}\\) respectively in the \\(\\Omega_{c}(u)\\) basis. Supposing \\(\\frac{\\partial\\psi(x=a,u)}{\\partial x}\\) to be known, we will (like in electrostatics) use the Neumann Green's function \\(G_{N}(x,u;x^{\\prime},u^{\\prime})\\) to construct \\(\\psi(x,u)\\) everywhere in the interior \\(x<a\\). \\(\\psi(x,u)\\) satisfies the full Schrodinger equation with energy \\(E\\). We need \\(\\chi_{\\lambda}(x,u)\\ \\lambda=1,2,\\cdots\\), the normalized eigenfunctions of the full Schrodinger equation in the interior \\(x<a\\) with energies \\(E_{\\lambda}\\), satisfying Neumann boundary conditions \\(\\frac{\\partial\\chi(x=a,u)}{\\partial x}=0\\). So
\\[\\left(\\frac{-\\hbar^{2}}{2m}\
abla^{2}+V_{\\rm int}(x,u)-E\\right)\\psi(x,u)=0 \\tag{12}\\]\\[\\left(\\frac{-\\hbar^{2}}{2m}\
abla^{2}+V_{\\rm int}(x,u)-E_{\\lambda} \\right)\\chi_{\\lambda}(x,u)=0 \\tag{13}\\] \\[\\left(\\frac{-\\hbar^{2}}{2m}\
abla^{2}+V_{\\rm int}(x,u)-E\\right)G_{N }(x,u;x^{\\prime},u^{\\prime})=\\delta(x-x^{\\prime})\\delta(u-u^{\\prime}) \\tag{14}\\]
where \\(\
abla^{2}\\equiv\\frac{\\partial^{2}}{\\partial x^{2}}+\\frac{\\partial^{2}}{ \\partial u^{2}}\\) and
\\[\\frac{\\partial G_{N}(x=a,u;x^{\\prime},u^{\\prime})}{\\partial x} = 0\\hskip 28.452756pt{\\rm and}\\hskip 28.452756pt\\frac{\\partial\\chi(x=a,u)} {\\partial x}=0 \\tag{15}\\] \\[\\Rightarrow G_{N}(x,u;x^{\\prime},u^{\\prime}) = \\sum_{\\lambda=1}^{\\infty}\\frac{\\chi_{\\lambda}(x,u)\\chi_{\\lambda}( x^{\\prime},u^{\\prime})}{E_{\\lambda}-E} \\tag{16}\\]
\\(G_{N}\\) is symmetric in the primed and unprimed variables. By Stokes' Theorem,
\\[(-\\hbar^{2}/2m)\\int\\limits_{x^{\\prime}<a}dx^{\\prime}\\int\\limits_{\\rm all\\ u^{ \\prime}}du^{\\prime}\\left(\\phi_{1}\
abla^{\\prime 2}\\phi_{2}-\\phi_{2}\
abla^{ \\prime 2}\\phi_{1}\\right)=(-\\hbar^{2}/2m)\\int\\limits_{\\rm x^{\\prime}=a,\\ all\\ u^{ \\prime}}du^{\\prime}\\left(\\phi_{1}\
abla^{\\prime}_{\\dot{n}}\\phi_{2}-\\phi_{2} \
abla^{\\prime}_{\\dot{n}}\\phi_{1}\\right) \\tag{17}\\]
where \\(\
abla^{\\prime}_{\\dot{n}}(\\cdot)\\equiv\\dot{x}^{\\prime}(\\cdot)\\cdot\
abla^{\\prime}\\) with \\(\\phi_{1}=\\psi(x^{\\prime},u^{\\prime})\\) and \\(\\phi_{2}=G_{N}(x,u;x^{\\prime},u^{\\prime})\\) gives
\\[\\psi(x,u)=\\frac{\\hbar^{2}}{2m}\\int\\limits_{\\rm all\\ u^{\\prime}}du^{\\prime}\\ G_{N}(x,u;x^{\\prime},u^{\\prime})\\frac{\\partial\\psi(x^{ \\prime}=a,u^{\\prime})}{\\partial x^{\\prime}}\\hskip 28.452756ptx<a \\tag{18}\\]
Put \\(x=a\\) and it is deduced using Eqs. (11) and (18) together that
\\[R_{cc^{\\prime}}(E)=\\sum_{\\lambda=1}^{\\infty}\\frac{\\gamma_{\\lambda c}\\gamma_{ \\lambda c^{\\prime}}}{E_{\\lambda}-E} \\tag{19}\\]
where \\(\\gamma_{\\lambda c}=\\sqrt{\\frac{\\hbar^{2}}{2m}}\\int\\limits_{\\rm all\\ u}du\\ \\chi_{ \\lambda}(a,u)\\Omega_{c}(u)\\).
### The S matrix
Now shifting attention to the outside (\(x>a\)), we see that we can compute both \(\nabla_{\hat{n}}\psi(a,u)\) and \(\psi(a,u)\) on the surface \(x=a\) using the asymptotic form of Eq. (10), which automatically gives these expanded in the \(\Omega_{c}(u)\) basis. Writing the matrix Eq. (11) is now simple. It is best to do it all in matrix notation, and thus be able to treat all possible independent asymptotic boundary conditions simultaneously.
Let \\(e^{ikx}\\), \\(\\sqrt{k}\\) and \\(1/\\sqrt{k}\\) be diagonal matrices with diagonal elements \\(e^{ik_{c}x}\\), \\(\\sqrt{k_{c}}\\) and \\(1/\\sqrt{k_{c}}\\). Then Eq. (11) reads
\\[\\frac{e^{-ika}}{\\sqrt{k}}-\\frac{e^{ika}}{\\sqrt{k}}{\\bf S}=i{\\bf R}k\\left(\\frac {-e^{-ika}}{\\sqrt{k}}-\\frac{e^{ika}}{\\sqrt{k}}{\\bf S}\\right)\\;. \\tag{20}\\]
Each column \\(c=1,\\ldots,n\\) of the matrix equation above is just Eq. (11) for the solution corresponding to an incoming wave only in channel \\(c\\) (For \\(c>n\\) the wavefunctions blow up as \\(x\\rightarrow\\infty\\)). Remembering that non-diagonal matrices don't commute, we solve for \\({\\bf S}\\) to get \\[{\\bf S}=e^{-ika}\\sqrt{k}\\frac{1}{1-i{\\bf R}k}(1+i{\\bf R}k)\\frac{1}{\\sqrt{k}}e^{-ika} \\tag{21}\\]
or, with some simple matrix manipulation,
\\[{\\bf S}=e^{-ika}\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}(1+i\\sqrt{k}{\\bf R}\\sqrt{k}) e^{-ika}\\;. \\tag{22}\\]
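Eq. (22) can be checked directly: for a real symmetric \({\bf R}\) and all channels open, the central factor is a Cayley transform and the resulting \({\bf S}\) is unitary. The small numerical sketch below uses arbitrary test data, not a physical model.

```python
# Sketch: build S from R via Eq. (22) (all channels open) and check unitarity.
import numpy as np

def S_from_R(R, k, a=1.0):
    rk = np.sqrt(k)
    E = np.diag(np.exp(-1j * k * a))
    X = 1j * (rk[:, None] * R * rk[None, :])        # i sqrt(k) R sqrt(k)
    I = np.eye(len(k))
    return E @ np.linalg.solve(I - X, I + X) @ E    # (1 - X)^(-1)(1 + X)

R = np.array([[0.3, 0.1], [0.1, -0.2]])             # symmetric real test R
k = np.array([1.0, 2.5])                            # open-channel momenta
S = S_from_R(R, k)
print(np.allclose(S @ S.conj().T, np.eye(2)))       # True: S is unitary
```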
## V S Matrix Near a Resonance
As discussed in the introduction, the resonances are a key to the sticking issue. Sticking is essentially a long lived Feshbach resonance in which energy has been supplied to surface and bulk degrees of freedom, temporarily dropping the scattering particle into a bound state of the attractive potential. Thus we must study resonances in various circumstances in the low incident translational energy regime. We derive the approximation for \({\bf S}(E)\) near \(E=E_{0}\), a resonant energy of the compound system. \(E_{0}\) is the total energy of the joined (resonant) system. Within the R-matrix approach, the \(\chi_{\lambda}(x,u)\) of section IV are bound, compound states with Neumann boundary conditions at \(x=a\). \(R\)-matrix theory properly couples these bound states to the continuum, but some of the eigenstates are nonetheless weakly coupled to the continuum, as evidenced by small values of the \(\gamma_{\lambda c}\)'s of section IV; these are the measure of the strength of the continuum couplings. While every one of the \(R\)-matrix bound states will result in a pole \(E_{\lambda}\) in the \(R\) matrix expansion, only the weakly coupled ones are the true long lived Feshbach resonances of physical interest. It is also helpful to know that the values of these 'truly' resonant poles at \(E_{\lambda}\) are the most stable to changes in the position \(x=a\) of the box. This in fact provides one unambiguous way to identify them. Our purpose here is to derive the resonant approximation to the \({\bf S}\) matrix in the vicinity of one of these Feshbach resonances. We do so using the form of the \({\bf R}\)-matrix in Eq. (19). Note that the energy density \(\rho(E)=1/D(E)\) of these Feshbach resonances will be large because of the large number of degrees of freedom of the target. \(D(E)\) is the level spacing of the quasibound, resonant states.
### Isolated Resonance
As mentioned, the point of view we will take is to identify a resonant energy with a particular pole \\(E_{\\lambda}\\) in the \\({\\bf R}\\) matrix expansion of Eq. (19). Those \\(E_{\\lambda}\\) corresponding to resonances are a subsequence of the \\(E_{\\lambda}\\) appearing in the expansion in Eq. (19). For \\(E\\) near a well isolated resonance at \\(E_{\\lambda}\\) we separate the sum-over-poles expansion of the R-matrix into a single matrix term having elements \\(\\frac{\\gamma_{\\lambda c}\\gamma_{\\lambda c^{\\prime}}}{E_{\\lambda}-E}\\), plus a sum over all the remaining terms, call it \\(N\\). If the energy interval between \\(E_{\\lambda}\\) and all the other poles is large compared to the open-open residue at \\(E_{\\lambda}\\), then we may expect the \\(n\\times n\\) open-open block of \\(N\\) to have all its elements small. Then rewriting the inverse in Eq. (22)
\\[\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}\\equiv\\frac{1}{1-i\\left(M+\\frac{V}{E_{\\lambda}-E}\\right)} \\tag{23}\\]

where \\(M\\equiv\\sqrt{k}N\\sqrt{k}\\) and \\(V_{cc^{\\prime}}\\equiv(\\sqrt{k_{c}}\\gamma_{\\lambda c})(\\sqrt{k_{c^{\\prime}}}\\gamma_{\\lambda c^{\\prime}})\\), and setting \\(M=0\\) allows us to simplify the central term in Eq. (22) exactly. (We will return to the case \\(M\\neq 0\\).)
\\[\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}(1+i\\sqrt{k}{\\bf R}\\sqrt{k}) \\tag{24}\\] \\[= 1+\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}2i\\sqrt{k}{\\bf R}\\sqrt{k}\\] (25) \\[= 1+\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}2i\\frac{V}{E_{\\lambda}-E}\\hskip 14.226378pt({\\rm with}\\ M=0)\\] (26) \\[= 1+\\frac{1}{E_{\\lambda}-E-iV}2iV\\] (27) \\[= 1+\\frac{1}{E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})}2iV \\tag{28}\\]
where we used
\\[V^{2} = \\left((\\gamma_{\\lambda 1}^{2}k_{1}+\\cdots+\\gamma_{\\lambda n}^{2}k_{n })+(\\gamma_{\\lambda(n+1)}^{2}\\kappa_{n+1}+\\cdots)\\right)V \\tag{29}\\] \\[\\equiv \\left(\\left(\\frac{\\Gamma_{\\lambda 1}}{2}+\\cdots+\\frac{\\Gamma_{ \\lambda n}}{2}\\right)+i(\\gamma_{\\lambda(n+1)}^{2}\\kappa_{n+1}+\\cdots)\\right)V\\] (30) \\[\\equiv \\left(\\frac{\\Gamma_{\\lambda}}{2}+i\\Delta E_{\\lambda}\\right)V \\tag{31}\\]
to get the identities
\\[[E_{\\lambda}-E-iV]V = [E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})]V \\tag{32}\\] \\[\\Rightarrow\\frac{1}{E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})}V = \\frac{1}{E_{\\lambda}-E-iV}V \\tag{33}\\]
Also define \\((\\Gamma_{\\lambda c}/2)^{1/2}\\equiv\\gamma_{\\lambda c}\\sqrt{k_{c}},\\;c=1,2,\\cdots,n\\). This defines the sign of the square-root on the l.h.s. to be the sign of \\(\\gamma_{\\lambda c}\\) and allows the convenience of expressing things in terms of the \\(\\Gamma_{\\lambda c}\\)'s and their square-roots, and not having to use the \\(\\gamma_{\\lambda c}\\)'s themselves. Thus we arrive at
\\[S_{cc^{\\prime}}=e^{-ik_{c}a}\\left(\\delta_{cc^{\\prime}}+\\frac{i\\Gamma_{\\lambda c }^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{ \\lambda}/2}\\right)e^{-ik_{c^{\\prime}}a} \\tag{34}\\]
where \\(E_{\\lambda}^{(r)}\\equiv E_{\\lambda}+\\Delta E_{\\lambda}\\), for the \\(n\\times n\\) open-open unitary block of \\(S\\) in the neighbourhood of a single isolated resonance after neglecting the contribution of the background matrix \\(M\\). For us the essential point is that
\\[\\Gamma_{\\lambda c}=2\\,k_{c}(E)\\gamma_{\\lambda c}^{2}, \\tag{35}\\]
i.e., that the partial widths \\(\\Gamma_{\\lambda c}\\) depend on the energy \\(E\\) through the kinematic factor \\(k_{c}(E)\\). Mostly this energy dependence is small and irrelevant except where the \\(k_{c}\\)'s, and hence the \\(\\Gamma_{\\lambda c}\\)'s, are varying near 0. These are the partial widths of the open channels near threshold. Hence \\(|S_{ce}|^{2}\\) (\\(c\\neq e\\)), an inelastic probability, behaves like \\(k_{e}\\sim\\sqrt{\\epsilon}\\) when the entrance channel is at threshold. Including the background term (\\(M\\neq 0\\)) does not change this. To see this we may perform the inverse in Eq. (22) to first order in \\(M\\) and then get an additional contribution of the terms
\\[e^{-ika}\\left(\\frac{2i}{1-\\frac{iV}{E_{\\lambda}-E}}M+\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}\\,iM\\,\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}\\,\\frac{2iV}{E_{\\lambda}-E}\\right)e^{-ika} \\tag{36}\\]
to the \\(S\\)-matrix. Now, both \\(M\\) and \\(V\\) have a factor of \\(\\sqrt{k_{c}}\\) multiplying their \\(c\\)th columns (and rows) from their definitions, and so a matrix element \\(b_{cc^{\\prime}}\\) of the matrix in parentheses in Eq. (36) will have a \\(\\sqrt{k_{c}}\\) and \\(\\sqrt{k_{c^{\\prime}}}\\) dependence. An inelastic element of \\(S\\) (\\(c\\neq c^{\\prime}\\)) would now take the form

\\[S_{cc^{\\prime}}=e^{-ik_{c}a}\\left(b_{cc^{\\prime}}+\\frac{i\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\right)e^{-ik_{c^{\\prime}}a}. \\tag{37}\\]
As mentioned, our interest is in the case when the entrance channel is at threshold, so that this dependence is \\(\\sqrt{k_{e}}\\), making the inelastic probability \\(|S_{ce}|^{2}\\) still behave as \\(k_{e}\\sim\\sqrt{\\epsilon}\\).
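Both properties just derived can be checked with a small numerical sketch (all widths are invented and the closed-channel shift \\(\\Delta E_{\\lambda}\\) is set to zero): the open-open block of Eq. (34) is exactly unitary when \\(\\Gamma_{\\lambda}\\) is the sum of the open-channel partial widths, and, by Eq. (35), the inelastic probability \\(|S_{ce}|^{2}\\) scales as \\(k_{e}\\) as the entrance channel approaches threshold.

```python
# Sketch (invented widths; the closed-channel shift Delta E_lam is set to 0):
# the open-open block of Eq. (34), evaluated at the resonance peak E = E_r.
import numpy as np

rng = np.random.default_rng(1)
n, a = 5, 1.0
gamma = rng.standard_normal(n)                       # reduced-width amplitudes

def S_resonant(k):
    G_half = np.sqrt(2.0 * k) * gamma                # Gamma_lam_c^{1/2}, Eq. (35)
    Gam = np.sum(G_half**2)                          # total width = sum of partials
    core = np.eye(n) + 1j * np.outer(G_half, G_half) / (-1j * Gam / 2)
    phase = np.exp(-1j * k * a)
    return phase[:, None] * core * phase[None, :]

for k_e in [1e-2, 1e-4, 1e-6]:                       # entrance channel near threshold
    k = np.concatenate(([k_e], np.full(n - 1, 1.0)))
    S = S_resonant(k)
    print(np.allclose(S @ S.conj().T, np.eye(n)),    # True: unitary open-open block
          abs(S[1, 0])**2 / k_e)                     # ~constant: |S_ce|^2 scales as k_e
```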
### Overlapping Resonances
Here we require the form of the \\({\\bf S}\\) matrix near an energy \\(E\\) where many of the quasi-bound states may be simultaneously excited, i.e. the resonances overlap. Again, neglecting background for the moment, the \\({\\bf S}\\) matrix is simply taken to be a sum over the various resonances.
\\[{\\bf S}=1-\\sum_{\\lambda}\\frac{iA_{\\lambda}}{E-E_{\\lambda}^{(r)}+i\\Gamma_{ \\lambda}/2} \\tag{38}\\]
where \\(A_{\\lambda}\\) is an \\(n\\times n\\) rank-1 matrix whose \\(cc^{\\prime}\\)th component is \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\). There is no entirely direct justification of this form, but one can see that it gets much correct.
The \\(A_{\\lambda}\\) are symmetric, hence \\({\\bf S}\\) is symmetric. Obviously it has the poles in the right places allowing the existence of decaying states with a purely outgoing wave at the resonant energies. A crucial additional assumption that also makes \\({\\bf S}\\) approximately unitary is that the signs of the \\(\\Gamma_{\\lambda c}^{1/2}\\) are random and uncorrelated both in the index \\(\\lambda\\) as well as \\(c\\), regardless of how close the energy intervals involved may be. One simple consequence is that we approximately have that
\\[A_{\\lambda}A_{\\lambda^{\\prime}}=\\delta_{\\lambda\\lambda^{\\prime}}\\Gamma_{ \\lambda}A_{\\lambda} \\tag{39}\\]
in the sense that the l.h.s. is negligible for \\(\\lambda\\neq\\lambda^{\\prime}\\) in comparison to the value for \\(\\lambda=\\lambda^{\\prime}\\). With Eq. (39) it is easy to verify the approximate unitarity of \\({\\bf S}\\).
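The random-sign argument can be made concrete with a short Monte Carlo sketch (the spectrum, widths, and channel count are artificial): with partial widths of typical size \\(D\\), random uncorrelated signs on their square-roots, and \\(\\Gamma\\simeq nD\\), the sum in Eq. (38) yields off-diagonal elements of order \\(1/\\sqrt{n}\\) and an \\({\\bf S}\\) that is unitary to good approximation.

```python
# Sketch (artificial spectrum): random signs of Gamma_lam_c^{1/2} make the
# overlapping-resonance sum of Eq. (38) approximately unitary, with inelastic
# elements of size ~ 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(2)
n, n_res, D = 50, 200, 1.0                 # open channels, resonances, level spacing
E_r = D * np.arange(n_res)                 # resonance energies
Gam = n * D                                # Gamma ~ n D, the Bohr-Wheeler estimate
# partial widths of typical size D, with independent random signs on their roots
G_half = rng.choice([-1.0, 1.0], (n_res, n)) * np.sqrt(rng.exponential(D, (n_res, n)))

E = E_r[n_res // 2] + 0.3 * D              # an energy deep in the overlapping regime
S = np.eye(n, dtype=complex)
for lam in range(n_res):
    A_lam = np.outer(G_half[lam], G_half[lam])       # rank-1 A_lambda
    S -= 1j * A_lam / (E - E_r[lam] + 1j * Gam / 2)  # Eq. (38)

print(abs(S[0, 1]) * np.sqrt(n))           # O(1): off-diagonal elements ~ 1/sqrt(n)
print(np.abs(S @ S.conj().T - np.eye(n)).max())      # modest, shrinking as n grows
```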
We investigate now the onset of the overlapping regime as \\(E\\) increases. \\(D(E)\\), the level spacing of the resonant \\(E_{\\lambda}^{(r)}\\), is a rapidly decreasing function of its argument. On the other hand, \\(\\Gamma_{\\lambda}=\\Gamma_{\\lambda 1}+\\Gamma_{\\lambda 2}+\\cdots+\\Gamma_{\\lambda n}\\), and since more channels are open at higher energy, \\(\\Gamma_{\\lambda}\\) is increasing with the energy of the resonance. The widths must therefore eventually overlap, and \\(\\Gamma_{\\lambda}\\gg D\\left(E_{\\lambda}^{(r)}\\right)\\) for the larger members of the sequence of \\(E_{\\lambda}^{(r)}\\)'s. In this regard there is a useful estimate due to Bohr and Wheeler [15], that for \\(n\\) large
\\[\\frac{\\Gamma_{\\lambda}}{D(E_{\\lambda}^{(r)})}\\simeq n. \\tag{40}\\]
Appendix A derives this using a phase space argument. Here we point out that this is entirely consistent with the assumption of the random signs, indeed requiring it to be true. Take for example a typical inelastic amplitude
\\[{\\bf S}_{cc^{\\prime}}=-i\\sum_{\\lambda}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\ \\ \\ \\ \\ (c\\neq c^{\\prime}) \\tag{41}\\]
First let us note that the \\(\\Gamma_{\\lambda}\\), being the sum of many random variables (the partial widths \\(\\Gamma_{\\lambda c}\\)), do not fluctuate much. Let \\(\\Gamma\\) denote their typical value over the \\(n\\) overlapping resonances. Also since \\(\\Gamma=nD\\) it follows that the typical size of a partial width \\(\\Gamma_{\\lambda c}\\) is \\(D\\). Therefore the typical size of the product \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\) is \\(D\\), but these random variables fluctuate randomly over the index \\(\\lambda\\), and moreover the sign is random. Thus for energies in the overlapping domain \\(S_{cc^{\\prime}}\\) is a sum of \\(n\\) complex numbers each of typical size \\(D/\\Gamma=1/n\\), but random in sign. This makes for a sum of order \\(1/\\sqrt{n}\\). Clearly this is as required to make the \\(n\\times n\\) matrix \\({\\bf S}\\) unitary. Note that the above argument fails (as it should) if \\(c=c^{\\prime}\\), because then the signs of \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c}^{1/2}=\\Gamma_{\\lambda c}>0\\) are of course not random.
Unlike the case of the isolated resonance, the S-matrix elements here are smoothly varying in \\(E\\). Addition of a background term \\(B_{cc^{\\prime}}\\)
\\[{\\bf S}_{cc^{\\prime}}=B_{cc^{\\prime}}-i\\sum_{\\lambda}\\frac{\\Gamma_{\\lambda c}^ {1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}. \\tag{42}\\]
just shifts this smooth variation by a constant. If \\(B_{cc^{\\prime}}\\) is also thought of as arising from a sum over the individual backgrounds then, for the same reasons as discussed at the end of the preceding section, \\(|B_{ce}|^{2}\\sim k_{e}\\sim\\sqrt{\\epsilon}\\) for an entrance channel \\(e\\) near threshold. For simplicity we will continue to take \\(B_{cc^{\\prime}}\\) to be 0 and look at the case with background in the appendix.
## VI Q-Matrix and sticking
From the viewpoint of scattering theory, the sticking of the incident particle to the target is just a long-lived resonance. It is natural then to investigate the time-delay for the collision. Smith [14] introduced the collision lifetime or \\(Q\\)-matrix
\\[{\\bf Q}\\equiv i\\hbar{\\bf S}\\frac{\\partial{\\bf S}^{\\dagger}}{\\partial E} \\tag{43}\\]
which encapsulates such information. We review some of the relevant properties of \\({\\bf Q}\\). The r.h.s. of Eq. (43) involves the 'open-open' upper left block of \\({\\bf S}\\) so that \\({\\bf Q}\\) is also an \\(n\\times n\\) energy-dependent matrix, having dimensions of time. For 1-dimensional elastic potential scattering \\({\\bf S}=e^{i\\phi(E)}\\) and \\({\\bf Q}\\) reduces to the familiar time delay \\(\\hbar\\frac{\\partial\\phi(E)}{\\partial E}\\). If \\(\\vec{v}\\) is a vector whose entries are the coefficients of the incoming wave in each channel then \\(\\vec{v}^{\\rm tr}{\\bf Q}(E)\\vec{v}\\) is the average delay time experienced by such an incoming wave. Because physically the particle is incident on only one channel, \\(\\vec{v}\\) consists of all 0's except for a 1 in the \\(e\\)th slot so that the relevant quantity is just the matrix element \\({\\bf Q}_{ee}(E)\\). Smith shows that this delay time is the surplus probability of being in a neighborhood of the target (measured relative to the probability if no target were present) divided by the flux arriving in channel \\(e\\). This matches our intuition that when the delay time is long, there is a higher probability that the particle will be found near the target.
Furthermore, as a Hermitian matrix, \\({\\bf Q}(E)\\) can be resolved into its eigenstates \\(\\vec{v}^{(1)}\\cdots\\vec{v}^{(n)}\\) with eigenvalues \\(q_{1}\\cdots q_{n}\\). The components of \\(\\vec{v}^{(1)}\\) are the incoming coefficients of a quasi-bound state with lifetime \\(q_{1}\\), and so on. Then
\\[\\vec{v}^{\\rm tr}{\\bf Q}(E)\\vec{v}=\\sum_{j=1}^{n}q_{j}|\\vec{v}^{(j)}\\cdot\\vec{v} |^{2}. \\tag{44}\\]
As can be seen from this expression, the average time delay results, in general, from the excitation of multiple quasi-stuck states each with its lifetime \\(q_{j}\\) and probability of formation \\(|\\vec{v}^{(j)}\\cdot\\vec{v}|^{2}\\). However, we will find that using our resonant approximation to the \\({\\bf S}\\) matrix near a resonant energy \\(E_{\\lambda}^{(r)}\\) the time delay will consist of only one term from the sum on the rhs of Eq. (44), all the other eigenvalues being identically 0.
Using Eq. (43),
\\[{\\bf Q}(E)=i\\hbar\\Biggl{(}\\sum_{\\lambda^{\\prime}}\\frac{-iA_{\\lambda^{\\prime}} }{\\left[E-E_{\\lambda^{\\prime}}^{(r)}-i\\Gamma_{\\lambda^{\\prime}}/2\\right]^{2}}- \\sum_{\\lambda\\lambda^{\\prime}}\\frac{A_{\\lambda}A_{\\lambda^{\\prime}}}{\\left[E-E_ {\\lambda}^{(r)}+i\\Gamma_{\\lambda}/2\\right]\\left[E-E_{\\lambda^{\\prime}}^{(r)}-i \\Gamma_{\\lambda^{\\prime}}/2\\right]^{2}}\\Biggr{)} \\tag{45}\\]
which using Eq. (39) simplifies to
\\[=\\sum_{\\lambda}\\frac{\\hbar}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2} }A_{\\lambda}\\, \\tag{46}\\]
a remarkably simple answer. We need \\(Q_{ee}(E)\\), where \\(e\\) is the entrance channel.
\\[Q_{ee}(E) = \\sum_{\\lambda}\\frac{\\hbar\\Gamma_{\\lambda e}}{(E-E_{\\lambda}^{(r)} )^{2}+(\\Gamma_{\\lambda}/2)^{2}} \\tag{47}\\] \\[= \\sum_{\\lambda}\\left(\\frac{\\hbar\\Gamma_{\\lambda}}{(E-E_{\\lambda}^ {(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}}\\times\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{ \\lambda}}\\right) \\tag{48}\\]
where the second equation has the interpretation (for each term) as the lifetime of the mode, multiplied by the probability of its formation. Note how for each resonance \\(E_{\\lambda}^{(r)}\\) there is only one term, corresponding to the decomposition of Eq. (44). The actual measured lifetime is \\(Q_{ee}(E)\\) averaged over the energy spectrum \\(|g(E)|^{2}\\) of the collision process.
### Energy averaging over spectrum
With the target in state \\(\\Omega_{e}(u)\\), where \\(c=e\\) is the entrance channel, the energy of the target is fixed, and the time-dependent solution will look like \\[\\psi(x,u,t)=\\int dE\\left(g(E)\\sum_{c=1}^{\\infty}\\left(\\frac{e^{-ik_{c}(E)x}}{\\sqrt{k_{c}(E)}}\\delta_{ce}-\\frac{e^{ik_{c}(E)x}}{\\sqrt{k_{c}(E)}}S(E)_{ce}\\right)\\Omega_{c}(u)\\right). \\tag{49}\\]
Recall, \\(E\\) is the total energy of the system. We are interested in the threshold situation where the incident kinetic energy of the incoming particle \\(\\epsilon\\to 0\\). This can be arranged if \\(g(E)\\) is peaked at \\(E_{0}\\) with a spread \\(\\Delta E\\) such that i) \\(E_{0}\\) is barely above \\(E_{e}^{\\rm target}\\) and ii) \\(\\Delta E=\\Delta\\epsilon\\) is some small fraction of \\(\\epsilon\\), the mean energy of the incoming particle. The second condition ensures that we may speak unambiguously of the incoming particle's mean energy. So,
\\[\\langle Q_{ee}(E)\\rangle \\equiv \\int dE|g(E)|^{2}Q_{ee}(E) \\tag{50}\\] \\[\\simeq \\frac{1}{\\Delta E}\\int dEQ_{ee}(E) \\tag{51}\\]
\\(\\langle\\rangle\\) denotes the average over the \\(\\Delta E\\) interval. Now \\(Q_{ee}(E)\\) is just a sum of Lorentzians centred at the \\(E_{\\lambda}^{(r)}\\)'s with width \\(\\Gamma_{\\lambda}\\) and Eq. (51) is just a measure of their mean value over the \\(\\Delta E\\) interval.
So long as the \\(\\Delta E\\) interval around which we are averaging, is broad enough to straddle many of these Lorentzians, the mean height is just
\\[\\frac{1}{\\Delta E}\\times\\rho(E)\\Delta E\\times\\frac{\\hbar\\pi\\Gamma_{\\lambda e} }{\\Gamma_{\\lambda}} \\tag{52}\\]
where the second factor is the number of Lorentzians in the \\(\\Delta\\)E interval and the third factor is the area under the '\\(\\lambda\\)th' Lorentzian. This is true regardless of whether or not they are overlapping. It will be convenient to write \\(\\Gamma_{\\lambda}\\) as
\\[\\Gamma_{\\lambda}=n\\,\\times 2\\bar{k}_{\\lambda}\\,{\\rm var}(\\gamma_{\\lambda}) \\tag{53}\\]
where \\({\\rm var}(\\gamma_{\\lambda})\\) is the variance of the set of \\(\\gamma_{\\lambda c}\\)'s over the \\(n\\) open channels and \\(\\bar{k}_{\\lambda}\\) is a mean or effective wavenumber \\(k_{c}\\) over the open channels, which for a particular realization \\(\\lambda\\) we take to be defined by Eq. (53) itself. Let \\(\\langle\\ \\rangle\\) denote the average over the occurrences of the quantity in the \\(\\Delta E\\) interval. \\(\\Gamma\\equiv\\langle\\Gamma_{\\lambda}\\rangle\\), \\(\\bar{k}\\equiv\\langle\\bar{k}_{\\lambda}\\rangle\\). Then Eq. (52) simplifies to
\\[\\langle Q_{ee}(E)\\rangle \\simeq \\hbar\\frac{1}{D}\\frac{k_{e}\\langle\\gamma_{\\lambda e}^{2}\\rangle} {nk\\langle{\\rm var}(\\gamma_{\\lambda})\\rangle} \\tag{54}\\] \\[\\simeq \\frac{\\hbar}{\\Gamma}\\frac{k_{e}}{\\bar{k}} \\tag{55}\\]
which tends to \\(0\\) as \\(k_{e}\\sim\\sqrt{\\epsilon}\\). The form of Eq. (55) and all the steps leading up to it remain valid whether the Lorentzians are overlapping or not, as long as the \\(\\Delta E=\\Delta\\epsilon\\) interval which we are averaging over includes many of them.
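The scaling in Eq. (55) can be seen numerically. In the sketch below (arbitrary units; the resonance spectrum is synthetic and \\({\\rm var}(\\gamma_{\\lambda})\\) is fixed at 1), \\(Q_{ee}(E)\\) from Eq. (48) is averaged over a window straddling many resonances; since every term carries a factor \\(\\Gamma_{\\lambda e}=2k_{e}\\gamma_{\\lambda e}^{2}\\), the mean time delay is strictly proportional to \\(k_{e}\\sim\\sqrt{\\epsilon}\\).

```python
# Sketch (arbitrary units, synthetic spectrum): average the Lorentzian sum for
# Q_ee(E), Eq. (48), over a window containing many resonances; the mean time
# delay is proportional to k_e ~ sqrt(eps), vanishing at threshold (Eq. 55).
import numpy as np

rng = np.random.default_rng(3)
hbar, D, n = 1.0, 1.0, 40
E_r = np.cumsum(rng.exponential(D, 2000))        # resonance energies, spacing ~ D
gam2 = rng.exponential(1.0, E_r.size)            # gamma_lam_e^2 values, var ~ 1

def mean_Qee(k_e, kbar=1.0):
    Gam = n * 2 * kbar * 1.0                     # total width, Eq. (53)
    Gam_e = 2 * k_e * gam2                       # entrance partial widths, Eq. (35)
    E = np.linspace(E_r[500], E_r[1500], 20_000) # window straddling ~1000 resonances
    Q = sum(hbar * Ge / ((E - Er)**2 + (Gam / 2)**2) for Er, Ge in zip(E_r, Gam_e))
    return Q.mean()

for eps in [1e-2, 1e-4, 1e-6]:
    k_e = np.sqrt(eps)
    print(mean_Qee(k_e) / k_e)                   # constant ratio: <Q_ee> ∝ sqrt(eps)
```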
### On an isolated resonance
If the target is cold enough that the resonances are isolated, then as the incident particle's energy \\(\\epsilon\\to 0\\), adhering to the condition \\(\\Delta\\epsilon<\\epsilon\\) will eventually result in \\(\\Delta\\epsilon\\) becoming narrower than the resonance widths. It becomes possible then for \\(\\Delta\\epsilon\\) to be centered right around a single isolated resonance at \\(E_{\\lambda}^{(r)}\\). In this case \\(\\langle Q_{ee}(E)\\rangle\\) is found simply by putting \\(E=E_{\\lambda}^{(r)}\\), because the spectrum \\(|g(E)|^{2}\\) is well approximated by \\(\\delta(E-E_{\\lambda}^{(r)})\\). So
\\[\\langle Q_{ee}(E)\\rangle=\\frac{\\hbar\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}^{2}}=\\frac{\\hbar}{\\Gamma_{\\lambda}}\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}}=\\frac{\\hbar}{\\Gamma_{\\lambda}}\\frac{k_{e}}{n\\bar{k}}. \\tag{56}\\]
Even in this case there is the \\(\\sqrt{\\epsilon}\\) behavior as \\(\\epsilon\\to 0\\) and there is no sticking.
In the extreme case that there are no other open channels at all (\\(n=1\\)), \\(\\langle Q_{ee}(E)\\rangle\\simeq\\frac{\\hbar\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}^ {2}}=\\frac{\\hbar}{\\Gamma_{\\lambda e}}\\) because \\(\\Gamma_{\\lambda}=\\Gamma_{\\lambda e}\\). In fact, \\(e=1\\), and \\(\\langle Q_{ee}(E)\\rangle\\) diverges, implying in this case that it is possible to have the particle stick. This is an exception to all the cases above but is experimentally not so relevant because we may always expect to find some exothermic channels open for a target with many degrees of freedom.
## VII Inelastic cross sections and sticking
Another physically motivated measure of the sticking probability may be obtained by studying the total inelastic cross-section of the collision. The idea is that any long lived \"sticking\" is overwhelmingly likely to result in an inelastic collision process; i.e. that the scattering particle will leave in a different channel than it entered with. Using the original Wigner approach it is possible to show that for our case where we have only one scattering degree of freedom, the inelastic probability for an exothermic and endothermic collision vanishes like \\(k_{e}\\). The only possible exception to this is a measure zero chance of a resonance exactly at the threshold energy, \\(E_{e}^{\\rm target}\\). In the event that there is a resonance \\(E_{\\lambda}^{(r)}\\) close to but above this threshold energy, it is only necessary that \\(E\\) is below \\(E_{\\lambda}^{(r)}\\) (by an energy of at least \\(\\Delta E\\), the spread in energy) in order to observe the usual Wigner threshold behavior:
\\[P_{\\rm inelastic}\\to 0\\ \\ {\\rm like}\\ \\ k_{e}\\propto\\sqrt{\\epsilon} \\tag{57}\\]
for the inelastic probability. However our problem is unusual in the sense that because of the large number of degrees of freedom of the target, we will always find resonances between \\(E_{e}^{\\rm target}\\) and \\(E\\) no matter how small \\(E-E_{e}^{\\rm target}=\\epsilon\\) is. Thus the Wigner regime is not accessible. Still the surprise is that a simple computation reveals the same behavior holds for large \\(n\\):
\\[P_{\\rm inelastic}(E) = \\sum_{c\\neq e}P_{c\\gets e}(E) \\tag{58}\\] \\[= \\sum_{c\\neq e}|S_{ce}(E)|^{2}\\] (59) \\[= \\sum_{c\\neq e}\\sum_{\\lambda}\\sum_{\\lambda^{\\prime}}\\frac{{\\Gamma_{\\lambda c}}^{1/2}{\\Gamma_{\\lambda e}}^{1/2}}{E-E_{\\lambda}^{(r)}-i\\Gamma_{\\lambda}/2}\\frac{{\\Gamma_{\\lambda^{\\prime}c}}^{1/2}{\\Gamma_{\\lambda^{\\prime}e}}^{1/2}}{E-E_{\\lambda^{\\prime}}^{(r)}+i\\Gamma_{\\lambda^{\\prime}}/2}\\] (60) \\[\\Rightarrow P_{\\rm inelastic}(E) = \\sum_{\\lambda}\\frac{\\Gamma_{\\lambda}}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}}\\ \\Gamma_{\\lambda e} \\tag{61}\\]
where we used the random sign property of the \\(\\Gamma_{\\lambda c}^{1/2}\\)'s and the understanding that \\(\\sum\\limits_{c\\neq e}\\Gamma_{\\lambda c}\\simeq\\sum\\limits_{\\rm all\\ c}\\Gamma_{\\lambda c}=\\Gamma_{\\lambda}\\). Since the sum \\(\\sum\\limits_{c\\neq e}\\) is over the \\(n\\gg 1\\) open channels, omission of a single term can hardly matter. Apart from the factor \\(\\hbar/\\Gamma_{\\lambda}\\), the r.h.s. of the above equation is identical to the expression for \\(Q_{ee}(E)\\) in Eq. (48). Averaging \\(P_{\\rm inelastic}(E)\\) over many resonances \\(E_{\\lambda}^{(r)}\\) (overlapping or not) we may use the same algebraic simplifications as before to show
\\[\\langle P_{\\rm inelastic}\\rangle=\\frac{k_{e}}{\\bar{k}} \\tag{62}\\]
As \\(k_{e}\\) tends to 0, this gives the \\(\\sqrt{\\epsilon}\\) Wigner behavior showing that there is no sticking.
The above argument fails when there is only one open channel. There are no inelastic channels to speak of. In this case, if the energy \\(E\\) coincides with a resonant energy \\(E_{\\lambda}^{(r)}\\) we will have the exceptional case of sticking, as discussed at the end of the previous section. But as pointed out there, this is primarily of theoretical interest only.
## VIII Channel Decoherence
The only case in which we stick is when we are sitting right on top of a resonance, with the incoming energy so well resolved that we are completely within the resonance width, AND there are no exothermic channels open. Having no such channels open amounts to an infinitesimally low energy for a large target. Otherwise, the sticking probability tends to 0 as \\(\\sqrt{\\epsilon}\\) in every case.
### Time dependent picture
From the time independent point of view, the physical reason for the absence of low energy sticking is contained in the factor \\(\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}}\\) of Eq. (48). This is the formation probability for the compound state. We will explain physically why it is small for \\(n\\gg 1\\). The resonance state is a many-body entangled state. If we imagine the decay of this compound state (already prepared by some other means, say), each open channel carries away some fraction of the outgoing flux, with no preference for any one particular channel. Running this whole process in reverse, it becomes evident that the optimum way to _form_ the compound state is to have each channel carry an incoming flux with exactly the right amplitude and phase. This, however, corresponds to an entangled initial state. With all the incoming flux instead constrained to be in only one channel, it becomes clear that we are not exciting the resonance in the optimal way and the buildup of amplitude inside is not so large; i.e., the compound state has a small probability of forming.
The time dependent view is even more revealing. Imagine a wave packet incident on the system. For a single open channel Feshbach resonance, the build-up of amplitude in the interior region can be decomposed as follows. As the leading edge of the wavepacket approaches the region of attraction, most is turned away due to the quantum reflection phenomenon. (It is a useful model to think of the quantum reflection as due to a barrier located some distance away from the interaction region.) The wavefunction in the interaction region constructively interferes with new amplitude entering the region. At the same time, the amplitude leaving the region is out of phase with the reflected wave, cancelling it and assisting more amplitude to enter.
Now suppose many channels are open. All the flux entering the interior must of course return, but it does so fragmented into all the other open channels. Only the fraction that makes it back into the entrance channel has the opportunity to interfere (constructively) with the rest of the entering wavepacket. The constructive interference is no longer efficient and is in fact almost negligible for \\(n\\gg 1\\), thereby ruining the delicate process that was responsible for the buildup of the wave function inside. The orthogonality of the other channels prevents interference in the scattering dimension. If we trace over the target coordinates, leaving only the scattering coordinate, most of the coherence and the constructive interference is lost, and no resonant buildup occurs. Therefore, one way to understand the non-sticking is to say that decoherence is to blame.
### Fabry-Perot and Measurement Analogy
Suppose we have a resonant quantum mechanical Fabry-Perot cavity, where the particle has a high probability of being found in between the two reflecting barriers. Now, during the time it takes for the probability to build up in the interior, suppose we continually measure the position of the particle inside. In doing so we decohere the wave function and in fact never find it there at all. Alternatively, imagine simply tilting one barrier (mirror) to make it non-parallel to the first and redirecting the flux into an orthogonal direction, again spoiling the resonance. Measurement entangles other (orthogonal) degrees of freedom with the one of interest, resulting in flux being effectively re-directed into orthogonal states. Thus the states of the target (if potentially excitable) are in effect continually monitoring (measuring) to see if the incoming particle has made it in inside, ironically then preventing it from ever doing so. The buildup process of constructive interference in the interaction region, described in the preceding paragraph, is slower than linear in \\(t\\). Therefore, the constant measurement of the particle's presence (and resultant prevention of sticking) is an example of the Zeno \"paradox\" in measurement theory.
## IX Conclusion
We have presented a general approach to the low energy sticking problem, in the form of \\(R\\)-matrix theory. This theory is well suited for the task, since it highlights the essential features of multichannel scattering at low incident translational energy. We did not need to make harmonic or other approximate assumptions about the solid target, which is characterized by its long range interaction with the incoming particle and its density of states. "Warm" surfaces are included in the formalism, and do not change the non-sticking conclusion.
Several supporting arguments for the non-sticking conclusion were given. Perhaps most valuable is the physical decoherence picture associated with the conclusion that there is no sticking in the zero translational energy limit.
Reviewing the observations leading up to the non-sticking conclusion, we start with the near 100% sticking in the zero translational energy limit classically (sticking probability 1). We then invoke the phenomenon of quantum reflection (Fig. 1), which keeps the incident particle far from the surface (sticking probability 0). Third, we note that quantum reflection can be overcome by resonances (Fig. 2), and since resonances are ubiquitous in a many body target, being the Feshbach states by which a particle could stick to the surface, perhaps sticking approaches 1 after all. Fourth, we suggest that decoherence (from the perspective of the incoming channel, with elastic scattering defined as coherent) ruins the resonance effect, reinstating quantum reflection as the determining effect. Finally, then, there is no sticking, and the short answer as to why is: quantum reflection and many channel decoherence. The ultrashort explanation is simply quantum reflection, but this is dangerous and non-rigorous, as we have tried to show.
All this does not tell us much about how sticking turns on as incident translational energy is raised. This is the subject of the following paper, where a WKB analysis proves very useful. Quantum reflection is a physical phenomenon linked directly to the failure of the WKB approximation.
###### Acknowledgements.
This work was supported by the National Science Foundation through a grant for the Institute for Theoretical Atomic and Molecular Physics at Harvard University and the Smithsonian Astrophysical Observatory: National Science Foundation Award Number CHE-0073544.
## Appendix A \\(\\Gamma\\simeq nD\\)
With the large number of degrees of freedom involved and assuming thorough phase space mixing associated with the resonance we may reasonably describe the compound state wavefunction by a classical ensemble of points \\((x,p_{x},u,p_{u})\\) in the combined phase space of the joint system given by the normalized distribution
\\[\\frac{1}{\\rho_{C}(E)}\\delta(E-H(x,p_{x},u,p_{u})). \\tag{10}\\]
It is understood in the above that the system is restricted to be in the region \\(x<a\\). This makes all accessible states of energy \\(E\\) with \\(x<a\\) equally likely. Then the rate of escape \\(\\Gamma/\\hbar\\) through the hypersurface \\(x=a\\) of the members of this ensemble is
\\[\\frac{\\Gamma}{\\hbar}=\\frac{1}{\\rho_{C}(E)}\\int\\limits_{x=a}dudp_{u}\\int \\limits_{p_{x}\\in[0,\\infty]}dp_{x}\\frac{p_{x}}{m}\\delta(E-H(x,p_{x},u,p_{u})). \\tag{11}\\]
\\(p_{x}/m\\) is just the velocity in phase space of a point at \\(x=a\\) in the \\(\\hat{x}\\) direction. At \\(x=a\\) we have supposed no interaction. Hence the Hamiltonian separates in Eq. (11). Therefore
\\[\\frac{\\Gamma}{\\hbar} = \\frac{1}{\\rho_{C}(E)}\\int dudp_{u}\\int\\limits_{0}^{\\infty}d\\left(\\frac{p_{x}^{2}}{2m}\\right)\\delta\\left(E-\\left(\\frac{p_{x}^{2}}{2m}+H^{\\rm target}(u,p_{u})\\right)\\right) \\tag{12}\\] \\[= \\frac{1}{\\rho_{C}}\\int\\limits_{H^{\\rm target}(u,p_{u})<E}dudp_{u}\\] (13) \\[= \\frac{1}{\\rho_{C}}\\Omega_{C}\\simeq\\frac{1}{2\\pi\\hbar\\rho_{Q}}\\Omega_{Q}=\\frac{1}{2\\pi\\hbar}nD. \\tag{14}\\]

Therefore \\(\\frac{\\Gamma}{D}\\simeq n\\). \\(\\rho_{Q}\\) (\\(\\rho_{C}\\)) is the quantum (classical) density of states (phase space volume) of the joint system at energy \\(E\\). \\(\\Omega_{Q}\\) (\\(\\Omega_{C}\\)) is the quantum (classical) total number of states (total phase space volume) of only the target below energy \\(E\\). We have used the correspondence between the classical and quantum densities of states. \\(1/\\rho_{Q}\\) is identified with \\(D\\), and the number of states of the target having energy less than \\(E\\) is just \\(n\\), the number of open channels.
## Appendix B Inelastic probability with background
We show here that the inelastic probabilities remain essentially unaffected in magnitude with the presence of a background term in the S-matrix. In the isolated case the addition of \\(b_{cc^{\\prime}}\\) to an inelastic element \\(S_{cc^{\\prime}}\\) simply changes the Lorentzian profile of \\(|S_{cc^{\\prime}}|^{2}\\). In the more important overlapping case, the energy variation of \\(S_{cc^{\\prime}}\\) is smooth in any case without background and
\\[|{\\bf S}_{cc^{\\prime}}|^{2} = \\left|B_{cc^{\\prime}}-i\\sum\\limits_{\\lambda}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\right|^{2} \\tag{10}\\] \\[= |B_{cc^{\\prime}}|^{2}+\\sum\\limits_{\\lambda}\\frac{\\Gamma_{\\lambda c}\\Gamma_{\\lambda c^{\\prime}}}{\\left(E_{\\lambda}^{(r)}-E\\right)^{2}+{\\Gamma_{\\lambda}}^{2}/4} \\tag{11}\\]
where we have used the random sign property of the products \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\) to neglect the second, cross-term in comparison to the last one, where again the same property is used to simplify the double sum to a single one. Summing over all the inelastic channels then leads to the same result as Eq. (61) with an added term of \\(\\sum\\limits_{c\\neq e}|B_{ce}|^{2}\\), which itself is proportional to \\(k_{e}\\) as discussed at the end of Section V.2.
## References
* [1] J. E. Lennard-Jones _et al._, _Proc. R. Soc. London, Ser. A_ **156**, 6 (1936); **156**, 36 (1936).
* [2] T. W. Hijmans, J. T. M. Walraven, and G. V. Shlyapnikov, _Phys. Rev. B_ **45**, 2561 (1992).
* [3] W. Brenig, _Z. Phys. B_ **36**, 227 (1980).
* [4] D. P. Clougherty and W. Kohn, _Phys. Rev. B_ **46**, 4921 (1992).
* [5] E. R. Bittner, _J. Chem. Phys._ **100**, 5314 (1993).
* [6] P. S. Julienne and F. H. Mies, _J. Opt. Soc. Am. B_ **6**, 2257 (1989).
* [7] P. S. Julienne, A. M. Smith, and K. Burnett, _Adv. At. Mol. Opt. Phys._ **30**, 141 (1992).
* [8] L. D. Landau and E. M. Lifshitz, _Quantum Mechanics (Non-relativistic Theory)_ (Pergamon Press, Oxford, UK, 1981).
* [9] C. J. Joachain, _Quantum Collision Theory_ (North-Holland, Amsterdam, 1975).
* [10] G. F. Gribakin and V. V. Flambaum, _Phys. Rev. A_ **48**, 546 (1993).
* [11] R. Cote, E. J. Heller, and A. Dalgarno, "Quantum suppression of cold atom collisions," _Phys. Rev. A_ **53**, 234-241 (1996).
* [12] I. A. Yu, J. Doyle, J. C. Sandberg, C. L. Cesar, D. Kleppner, and T. J. Greytak, _Phys. Rev. Lett._ **71**, 1589 (1993).
* [13] J. Doyle, J. C. Sandberg, I. A. Yu, C. L. Cesar, D. Kleppner, and T. J. Greytak, _Phys. Rev. Lett._ **67**, 603 (1991); C. Carraro and M. W. Cole, _Phys. Rev. B_ **45**, 12931 (1992); T. W. Hijmans, J. T. M. Walraven, and G. V. Shlyapnikov, _Phys. Rev. B_ **45**, 2561 (1992).
* [14] F. T. Smith, _Phys. Rev._ **118**, 349 (1960).
* [15] N. Bohr and J. Wheeler, _Phys. Rev._ **56**, 416-450 (1939); see p. 426, Sec. III.
# High-resolution Ecosystem Mapping in Repetitive Environments Using Dual Camera SLAM
Brian M. Hopkinson
Department of Marine Sciences, University of Georgia
Athens, Georgia 30602, USA
Email: [email protected]
Suchendra M. Bhandarkar
Department of Computer Science, University of Georgia
Athens, Georgia 30602, USA
Email: [email protected]
## I Introduction
The spatial arrangement of organisms within an ecosystem reflects fundamental underlying ecological processes such as competition, resource availability, trophic relationships, and symbioses [1, 2]. Consequently, the ability to map the abundance and distribution of organisms within an ecosystem is critical for advancing ecology. Traditionally, mapping organismal distributions entailed time-consuming, manual field work, limiting the scale and frequency at which maps could be generated. In recent decades, satellite remote sensing has offered unprecedented insight into the spatial arrangement and coverage of various ecosystems, but because of the coarse resolution of satellite imaging (\\(\\sim\\) 1-30 m pixel size) it is typically only useful at the ecosystem level (e.g. distribution of forest vs. grassland) and cannot assess the distribution of individual species [3]. Recent advances in computer vision algorithms, most notably in Structure-from-Motion (SfM) [4, 5, 6], have begun to see their application in ecology for construction of much higher resolution (mm to cm) 3D optical maps of ecosystems. When combined with machine learning tools for automated classification, these maps are capable of delineating the distribution of individual species across landscape scales [7, 8].
SfM has been the tool of choice for map generation from images for ecologists and geographers due to its ability to use structured or unstructured image collections and the availability of high-quality commercial and open-source implementations [9, 10]. SfM relies on _globally_ distinct visual features in images to register overlapping images for map generation. This requirement is typically met when images are acquired at high elevations via unmanned aerial vehicles (UAVs or drones). However, when ecosystem images are acquired closer to the scene, for example to resolve small plants or animals, they often become repetitive causing conventional SfM approaches to fail. In contrast, Simultaneous Localization and Mapping (SLAM) approaches developed in the robotics community, process images sequentially in the order they were acquired using _locally_ distinct visual features to map the environment and localize the position and orientation of the camera with respect to the map [11, 12]. SLAM approaches, though promising for mapping repetitive scenes [13], work best with high-frame rate (and consequently lower resolution), wide-angle cameras whose images are of limited use for identification and localization of organisms [14].
In this paper, we describe a dual-camera SLAM approach to map visually repetitive environments such as grassland and shrubland ecosystems (Fig. 1). A high-frame rate, wide-angle camera is used for conventional visual SLAM-based localization whereas the other high-resolution, medium- to narrow-angle (video or still image) camera is used to acquire high-quality ecosystem images suitable for \"documentation\", i.e., identification and localization of organisms to the species or genus level. The documentation camera does not need to be tightly integrated at the hardware level with the SLAM camera, allowing the use of extremely high-quality and low-cost commercial off-the-shelf (COTS) cameras. The trajectory of the localization camera is then used to guide detailed map generation from the documentation camera images using the proposed dual-camera SLAM approach.
The primary contributions of this work are: (a) a novel approach to ecosystem map generation that allows flexible use of high-resolution, medium- to narrow-angle COTS cameras to resolve smaller organisms by decoupling localization and documentation; (b) development of a multistage alignmentprocess for the documentation camera images that uses the localization camera to guide image pose determination and map point positioning; and (c) experimental demonstration of the proposed system's ability to map visually repetitive environments.
## II Related Work
### _Ecosystem Mapping_
Although mapping the distribution of ecosystems (e.g. forest, coral, and grassland extent) from satellite imagery has a long history [3, 15], we confine our overview to methods that exploit higher-resolution imagery (mm-cm pixel size) enabling more fine-grained taxonomic resolution (species, genera) since they are more closely related to our work. The most common ecosystem mapping workflow is comprised of image acquisition from a structured aerial survey using a UAV at \\(\\approx\\)10-250 m elevation followed by image positioning and map generation via SfM. This approach provides ground resolution of \\(\\approx\\)1 cm, suitable for classifying and delineating moderate- to large-sized organisms. The combination of UAVs, COTS cameras, and SfM has proved to be both, highly accurate and cost-effective avoiding the need for high-cost precise positioning sensors (e.g. RTK-GPS, IMUs) or high-cost imaging sensors (e.g. LiDAR, hyperspectral).
Hayes et al. [16] use SfM to construct orthomosaics of seabird colonies from UAV images acquired at 60-90 m altitude followed by CNN-based object detectors to count individual birds for population tracking. Baena et al. [17] map the distribution of the keystone Algarrobo tree in Pacific Equatorial dry forests using SfM procedures on large-scale 260 m altitude UAV imagery. However, in low-visibility underwater environments, images are typically acquired closer to the scene (several meters) either manually or using underwater vehicles and then processed via SfM [18] or customized approaches [19].
### _Structure from Motion (SfM)_
SfM approaches to scene reconstruction (i.e., mapping) and image pose determination are currently the primary tool for image-based ecosystem mapping. SfM attempts to map a scene from unordered images from uncalibrated and possibly multiple cameras, imposing minimal constraints on image acquisition [6, 9]. The relative pose between images is computed by extracting features and associated descriptors (e.g. SIFT) from the unordered image collection and matching these features between images, using geometric verification to remove outliers [20]. Matched feature points are triangulated producing a sparse representation of the scene. The image poses, scene points, and camera calibration parameters are optimized via a bundle adjustment procedure [21]. This sparse scene representation can further be made dense using multi-view stereo methods [22], and/or converted to a triangular mesh representation.
SfM is extremely flexible and relatively easy for non-specialists to use since it imposes minimal constraints on the image acquisition process, but the allowance for unordered image sets makes SfM computationally expensive with a non-linear time complexity [6, 23]. Furthermore, because images are unordered they must be visually similar only to their true neighbors; otherwise matches between spatially disparate locations may be incorrectly accepted. Consequently, SfM is often confounded in scenes with repetitive features when this requirement is violated resulting in _visual aliasing_[24].
### _Simultaneous Localization and Mapping (SLAM)_
SLAM techniques typically provide a similar final mapping solution to SfM, i.e., a sparse or dense representation of the scene and image poses [12]. However, SLAM assumes that the images are acquired sequentially, reflecting its origins in robotics. For ecosystem mapping, the sequential image acquisition constraint is generally not onerous as images are typically acquired from a single camera as it is moved over the underlying ecosystem. Since images are processed sequentially, the features need be only locally distinct, thus making SLAM better suited to handle repetitive environments. However, SLAM works best with a wide-angle, high frame rate camera whose limited image resolution is typically impractical for organismal identification [14]. This motivates our incorporation of an additional, higher resolution camera for ecosystem documentation. Although several multi-camera SLAM systems have been developed for robotics, they assume that precise synchronization information in the form of timing and orientation are available for each camera [25, 26, 27], thereby preventing use of most inexpensive, high quality COTS cameras, which do not expose synchronization signals. By relaxing the precise synchronization requirement, we allow use of COTS cameras, but do not incorporate documentation camera images into the estimation of the system pose.

Fig. 1: **Overview of Proposed Approach:** The imaging system consists of two cameras: a forward facing stereo-camera for localization and a downward facing, high-resolution camera for documentation. The dual-camera SLAM system produces trajectories (magenta line), landmark maps (black and red dots), and image poses (blue frames) for each camera. The image poses and landmark map from the documentation camera can be used to produce a map of the targeted, visually-repetitive environment, as shown here for a salt marsh.
## III Proposed System
The proposed dual-camera SLAM-based ecosystem mapping approach uses SLAM to determine the trajectory of the localization camera, which is then used to guide map generation from the documentation camera images (Fig. 2). The system assumes the relative orientation of the two cameras is constant and approximately known and that the image streams are roughly (\\(<\\)0.5 s) synchronized. As SLAM is applied to the localization camera images, the documentation camera images are processed concurrently via monocular SLAM using the localization trajectory to guide generation of an initial ecosystem map and approximate image poses. However, the initial ecosystem map is generally fragmentary as tracking is frequently lost due to rapid scene movement resulting from the narrow field of view (FOV) and limited number of trackable features in the documentation camera images. After completion of the SLAM processing of the localization camera image sequence, the fragmentary maps from the documentation camera are scaled and transformed to approximately align with the localization camera trajectory based on the acquisition time of documentation camera images and the pose of the documentation camera relative to the localization camera. Finally, the documentation camera poses and associated map are optimized based on constraints derived from the localization camera trajectory and landmark-to-camera correspondences in the fragmentary maps using a factor graph framework.
### _SLAM System Core_
The proposed system employs a modified version of ORB-SLAM2 [28], a keyframe-based SLAM approach, as its base since it is a comprehensive SLAM approach capable of using monocular, stereo, and RGB-D cameras and produces accurate maps through multiple rounds of optimization (i.e., bundle adjustment). ORB-SLAM2 performs the following main operations: _tracking_, which localizes each frame relative to the existing map and determines when new keyframes should be added, _mapping_, which maintains the current map and updates it via insertion of new keyframes, triangulation of new map points, and optimization of the map structure via bundle adjustment, and _loop closing_, which identifies revisited locations (i.e., trajectory loops) and revises the map accordingly.
We made several notable modifications to the ORB-SLAM2 system. First, the system was extended to handle multiple cameras simultaneously, with each camera maintaining a separate master map. The map data structure, which holds keyframes and map points, was converted into a recursive tree structure to handle sub-maps that are generated when tracking is lost. Sub-maps can be kept private or optionally registered with their parent to make keyframes and map points accessible to the parent. The _relocalization_ state, which ORB-SLAM2 enters immediately upon loss of tracking, was eliminated; instead a new sub-map is spawned and SLAM initialization started upon loss of tracking. For the localization camera, the new sub-map is registered with the parent assuming the camera maintained a constant velocity between the time when tracking was lost and when a new map was successfully initialized. This time interval is typically very short (\\(<\\)0.1 s), but if initialization is not successful after a set number of frames the sub-map is not registered. The per-frame camera trajectories are explicitly recorded as \\(SE(3)\\) transformations relative to the reference keyframes, whose positions are continuously updated via optimization. The availability of these trajectories is critical for positioning of the documentation camera images based on the pose of the localization camera. Our SLAM code is available online at [https://github.com/bmhopkinson/hyslam](https://github.com/bmhopkinson/hyslam).
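A minimal sketch of this recursive map tree (in Python for brevity; the released implementation is C++, and the names here are hypothetical): each loss of tracking spawns a child sub-map, and registering a child makes its keyframes and map points visible to the parent.

```python
# Sketch in Python (the released implementation is C++; names are hypothetical):
# the recursive map tree. Losing tracking spawns a child sub-map; registering a
# child exposes its keyframes and map points to the parent.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Map:
    keyframes: list = field(default_factory=list)
    mappoints: list = field(default_factory=list)
    children: list[Map] = field(default_factory=list)
    registered: bool = False                 # child data visible to parent when True

    def spawn_submap(self) -> Map:
        """Called when tracking is lost and SLAM re-initializes."""
        child = Map()
        self.children.append(child)
        return child

    def all_keyframes(self) -> list:
        """This map's keyframes plus those of all registered descendants."""
        kfs = list(self.keyframes)
        for child in self.children:
            if child.registered:
                kfs.extend(child.all_keyframes())
        return kfs
```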
### _Generation of the Fragmentary Map_
The first step in generating the ecosystem map and associated documentation image poses is the application of the modified monocular ORB-SLAM2 system to the sequentially acquired documentation images. A new map is initialized by tracking ORB feature points through multiple frames until there is sufficient parallax to accurately triangulate map points corresponding to the tracked features. In our system, the resulting map of arbitrary scale is brought into a consistent scale (approximately) with the localization camera map by estimating the absolute motion of the documentation camera over the initialization period using the motion of the localization camera. The pose of the documentation camera can be estimated at any time \\(t\\) as:
\\[\\mathbf{X}_{d}(t)=\\mathbf{X}_{l}(t)\\mathbf{T}_{dl} \\tag{1}\\]
where \\(\\mathbf{X}_{d}(t)\\) is the pose of the documentation camera at time \\(t\\), \\(\\mathbf{X}_{l}(t)\\) is the pose of the localization camera at time \\(t\\), and \\(\\mathbf{T}_{dl}\\) is the rigid-body \\(SE(3)\\) transformation between the localization and documentation cameras. All poses are expressed in the camera-to-world transformation convention. The motion of the documentation camera over the initialization period can then be determined as:
\\[\\mathbf{V}_{d0}=\\mathbf{X}_{d0}^{-1}\\mathbf{X}_{d1} \\tag{2}\\]
Equation (2) is necessary because although both cameras are rigidly attached to a frame, the motion experienced by each camera may be different. For example, pure rotation about an axis of the localization camera will induce a rotation and translation of the documentation camera when the documentation camera is spatially offset relative to the axis of rotation. The new map is scaled using the estimated motion (\\(\\textbf{V}_{\\mathrm{d0}}\\)), which works well in most cases but can occasionally be inaccurate as a result of small absolute distances traveled during monocular SLAM initialization. The fragmentary maps are later rescaled using the full distance travelled during their generation providing a consistent, absolute scale among the fragmentary maps.
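A small numerical sketch of Eqs. (1)-(2) (synthetic poses; the camera offset in \\(\\mathbf{T}_{dl}\\) is made up) using \\(4\\times 4\\) homogeneous matrices in the camera-to-world convention illustrates the point: a pure rotation of the localization camera induces a nonzero translation of the spatially offset documentation camera.

```python
# Sketch (synthetic poses): Eqs. (1)-(2) with 4x4 homogeneous SE(3) matrices in
# the camera-to-world convention. A pure rotation of the localization camera
# induces a translation of the spatially offset documentation camera.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

T_dl = np.eye(4)
T_dl[:3, 3] = [0.0, -0.3, 0.1]                       # documentation cam offset (made up)

X_l0 = np.eye(4)                                     # localization poses at t0, t1
X_l1 = rot_z(np.deg2rad(10.0))                       # pure rotation, no translation

X_d0 = X_l0 @ T_dl                                   # Eq. (1)
X_d1 = X_l1 @ T_dl
V_d0 = np.linalg.inv(X_d0) @ X_d1                    # Eq. (2): documentation cam motion

print(np.linalg.norm(V_d0[:3, 3]))                   # > 0: rotation induced translation
```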
Once the map is initialized, the documentation images are processed using standard monocular SLAM [29, 28], which provides a convenient and robust way to determine approximate relative poses between documentation images and identify features that are consistently matched and geometrically verified. Tracking is often lost due to the narrow FOV and rapid relative motion of the downward facing documentation camera, at which point a new fragmentary map is initialized.
### _Alignment of Fragmentary Maps_
The relevant outputs of the initial processing steps consist of the full localization camera trajectory and multiple fragmentary maps from the documentation camera as depicted in the Intermediate Map in Fig. 2. The fragmentary maps are approximately scaled to the localization trajectory but their orientations and positions may diverge substantially from their true values. Although a full non-linear optimization procedure is ultimately used to provide the best estimate for the documentation camera map, proper initialization is necessary for convergence. Since the documentation camera sub-maps often deviate substantially from their correct configurations, a two-step procedure is used to approximately align the fragmentary maps with the localization camera trajectory. First, the camera centers for the documentation images in each fragmentary map are brought into alignment with their expected positions based on the localization camera trajectory using a \\(Sim(3)\\) transformation estimated using Horn's method [30]. Since the path traveled within any fragmentary map is often approximately linear, there is a rotational ambiguity in the documentation camera poses when aligned based solely on the camera centers. To align the documentation image orientations an optimal \\(SO(3)\\) transform is determined, again using Horn's method [30], from the camera poses augmented with points representing unit positions along the pose axes. The \\(SO(3)\\) transform is applied to the documentation camera poses, resulting in fragmentary maps in a coherent world coordinate system defined by the localization camera. The two-step procedure ensures that the arbitrary-length vectors taken to represent positions along the camera frame axes used to resolve the rotation ambiguity do not influence the scale estimated in the first step.
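A sketch of the first alignment step (synthetic data; we use the SVD formulation of the Horn/Umeyama closed-form solution rather than Horn's original quaternion formulation). The second, \\(SO(3)\\) step is analogous but operates on the camera centers augmented with unit offsets along the pose axes, with the scale fixed at 1.

```python
# Sketch (synthetic data): closed-form Sim(3) fit of fragmentary-map camera
# centers P to their expected positions Q from the localization trajectory,
# via the SVD form of the Horn/Umeyama solution.
import numpy as np

def align_sim3(P, Q):
    """Return s, R, t minimizing sum_i ||s R p_i + t - q_i||^2 for (N,3) arrays."""
    mu_p, mu_q = P.mean(0), Q.mean(0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)              # cross-covariance of the sets
    d = np.sign(np.linalg.det(U @ Vt))               # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * [1.0, 1.0, d]).sum() / (Pc ** 2).sum()  # optimal isotropic scale
    t = mu_q - s * R @ mu_p
    return s, R, t

# toy check: recover a known similarity transform exactly
rng = np.random.default_rng(4)
P = rng.standard_normal((20, 3))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
Q = 2.5 * P @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = align_sim3(P, Q)
print(np.isclose(s, 2.5), np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))
```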
### _Linking Fragmentary Maps_
To provide inter-map constraints, landmarks viewed in multiple fragmentary maps are identified and redundant landmarks removed. For each fragmentary map, landmarks from other sub-maps potentially visible in the fragmentary map's keyframes are collected. Correspondences between these landmarks and keypoints in the keyframes are determined and validated using geometric and feature-based criteria. When sufficient correspondences are established to keypoints associated with a landmark in the current fragmentary map, the corresponding landmarks are merged.
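A sketch of the kind of geometric plus feature-based test involved (the thresholds and helper signatures are invented for illustration): a candidate landmark from another sub-map is projected into a keyframe, and a correspondence is accepted only if the projection lands near a keypoint whose ORB descriptor is close in Hamming distance.

```python
# Sketch (thresholds and signatures invented): the geometric + feature test used
# to accept a landmark-to-keypoint correspondence when linking sub-maps.
import numpy as np

def project(K, X_cw, p_world):
    """Pinhole projection of a world point into a keyframe (X_cw: world-to-camera)."""
    p_cam = X_cw[:3, :3] @ p_world + X_cw[:3, 3]
    if p_cam[2] <= 0:
        return None                                  # behind the camera
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

def is_match(uv_pred, uv_kp, desc_a, desc_b, px_tol=4.0, ham_tol=50):
    """desc_a, desc_b: 32-byte ORB descriptors as np.uint8 arrays."""
    if uv_pred is None or np.linalg.norm(uv_pred - uv_kp) > px_tol:
        return False                                 # geometric verification
    hamming = np.unpackbits(desc_a ^ desc_b).sum()   # feature-based verification
    return hamming <= ham_tol
```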
### _Global Optimization of the Ecosystem Map_
The previously described procedures yield a set of fragmentary ecosystem maps from the documentation camera images approximately aligned based on the localization camera trajectory and linked via mutually visible landmarks. This ecosystem map, comprising keyframes and landmarks, is refined using global, non-linear optimization on the \\(SE(3)\\) manifold, resulting in a consistent, unified ecosystem map, depicted as the Finalized Map in Fig. 2. The objective (error) function incorporates costs assigned to all landmark-to-feature-point associations and constraints on the keyframe poses based on the localization trajectory (Fig. 3). The constraints and estimated variables (i.e., keyframe poses and landmark positions) are structured as a locally-connected factor graph to facilitate global optimization [31]. The optimization procedure largely follows previous work [28, 32], the novelty being the constraints imposed by the localization trajectory.

Fig. 2: **Dual Camera SLAM Procedure:** Images from the localization and documentation cameras are processed in parallel starting with feature extraction, followed by initial pose estimation through landmark tracking, and finally optimization of the pose and generation of new landmarks in the mapping phase. Processing of the entire video streams results in an intermediate map consisting of a unified map for the localization camera and fragmentary maps from the documentation camera. The localization camera's trajectory is used to align the fragmentary maps, which are then linked through commonly viewed landmarks. Finally, all poses and landmarks are globally optimized to produce a finalized map from the documentation camera.
As depicted in Fig. 3, the constraints on the documentation camera keyframe poses derived from the localization trajectory involve two variables, the documentation image acquisition time \\(t_{i}\\) and the transformation \\(\\mathbf{T}_{dl}\\) between the localization and documentation cameras, both of which are, in turn, constrained by their prior estimates. The documentation and localization cameras are not required to be precisely synchronized in time, i.e., the acquisition time for each documentation camera image, in terms of the localization trajectory, is only approximately known. Specifically, the constraint on the documentation camera keyframe pose is expressed as a ternary edge in the factor graph wherein \\(t_{i}\\) implies a specific localization camera pose that can be obtained from the trajectory. The localization camera pose can then be translated into an implied documentation camera pose via \\(\\mathbf{T}_{dl}\\) and equation (1). Since the imaging system upon which the cameras are mounted is assumed to be rigid and stable throughout the data collection process, a single value of \\(\\mathbf{T}_{dl}\\) is estimated for the entire data set.
\\[\\operatorname*{arg\\,min}_{\\mathbf{X},\\mathbf{L},\\mathbf{T}_{dl},t }\\sum_{i,j}\\rho_{h}(r_{i,j}(\\mathbf{X}_{i},L_{j}))+\\sum_{i}\\rho(p_{i}(\\mathbf{ X}_{i},t_{i},\\mathbf{T}_{dl}))\\\\ +\\sum_{i}\\rho(t_{i}-t_{i-obs})+\\rho(\\mathbf{T}_{dl}^{-1}\\mathbf{ T}_{dl-obs}) \\tag{3}\\]
where \\(\\mathbf{X}_{i}\\) denotes the camera pose, \\(L_{j}\\) the landmark position, \\(t_{i}\\) the documentation image acquisition time, \\(\\mathbf{T}_{dl}\\) the transformation between the localization and documentation cameras, \\(\\rho\\) the squared-error function, \\(\\rho_{h}\\) the robust Huber error function, \\(r_{i,j}\\) the reprojection error for landmark \\(j\\) observed in keyframe \\(i\\) and \\(p_{i}\\) the pose error between the estimated pose \\(\\mathbf{X}_{di}\\) of documentation camera \\(i\\) and its pose implied by the localization camera pose \\(\\mathbf{X}_{li}\\) at the time the documentation image was taken via transformation \\(\\mathbf{T}_{dl}\\). Specifically:
\\[p_{i}=\\mathbf{X}_{di}^{-1}\\mathbf{X}_{li}\\mathbf{T}_{dl} \\tag{4}\\]
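To make the structure of the trajectory constraint concrete, the following minimal Python sketch evaluates the pose error of equation (4) for a single keyframe. It is an illustrative sketch only: the 4x4 homogeneous pose matrices, the linear/SLERP interpolation of the localization trajectory at time \(t_{i}\), and all function names are assumptions, not the actual implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interp_pose(traj_times, traj_poses, t):
    """Interpolate the localization trajectory (numpy array of times and a
    list of 4x4 SE(3) poses) at time t; translation is interpolated
    linearly, rotation via SLERP. Assumes t lies within the trajectory."""
    i = int(np.clip(np.searchsorted(traj_times, t) - 1, 0, len(traj_times) - 2))
    w = (t - traj_times[i]) / (traj_times[i + 1] - traj_times[i])
    T = np.eye(4)
    T[:3, 3] = (1 - w) * traj_poses[i][:3, 3] + w * traj_poses[i + 1][:3, 3]
    slerp = Slerp(traj_times[i:i + 2],
                  Rotation.from_matrix(np.stack([traj_poses[i][:3, :3],
                                                 traj_poses[i + 1][:3, :3]])))
    T[:3, :3] = slerp(t).as_matrix()
    return T

def pose_residual(X_d, traj_times, traj_poses, t_i, T_dl):
    """Equation (4): p_i = X_d^{-1} X_l(t_i) T_dl, reduced to a 6-vector
    (rotation vector, translation) that is zero when the constraint holds."""
    P = np.linalg.inv(X_d) @ interp_pose(traj_times, traj_poses, t_i) @ T_dl
    return np.concatenate([Rotation.from_matrix(P[:3, :3]).as_rotvec(),
                           P[:3, 3]])
```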
## IV Experimental Evaluation
### _Data Collection_
A dual-camera rig was constructed consisting of a Stereolabs ZED-mini interfaced to a Jetson Xavier computer as the localization camera and a Panasonic GH5s configured with a 14 mm prime lens as the documentation camera. The cameras were secured to a rigid frame so that their relative orientation was constant. The Panasonic GH5s was oriented downward to provide ecosystem images of the highest quality. The ZED-mini was either facing directly forward or angled slightly downward (\(\approx 25^{\circ}\), measured for each deployment). Aligning the localization camera with the direction of travel allows for persistence of features in the FOV and observation of features at a wide range of distances, thereby improving tracking. However, it was found to be advantageous to angle the localization camera slightly downward to observe more proximal features, thereby improving motion estimation and avoiding tracking loss. The ZED-mini stereo video was recorded at 60 frames per second (fps) with \(1280\times 720\) resolution and the Panasonic GH5s video at 60 fps with \(4096\times 2160\) resolution.
Data were collected in two visually repetitive environments: a lawn on the University campus and a salt marsh on Sapelo Island, GA, USA. Patches of roughly 10 m \(\times\) 10 m were imaged by traversing the area in a lawn-mower (boustrophedon) pattern. At four sites on Sapelo Island, nine AprilTag markers [33] were placed in the imaged patch to serve as ground control points for accuracy assessment. The ground-truth positions of the AprilTags were determined to \(<\)2 cm accuracy using an RTK-GPS system (Trimble R12 GNSS receiver with a Trimble TSC7 controller).
### _Comparison with SfM_
The dual-camera SLAM approach was compared with two state-of-the-art SfM platforms: COLMAP [9] and Agisoft Metashape [34]. Six datasets (four from salt marshes on Sapelo Island, two from the campus lawn) were processed to generate maps using the dual-camera SLAM program and the two SfM programs. For this comparison, video frames were extracted at 4 fps from the documentation camera videos (resulting in 80-90% overlap between frames) and processed using the default SfM program settings, slightly modified based on preliminary trials to improve reconstruction quality. First, the reconstructions were visually assessed to determine whether the reconstruction was roughly consistent with the planar geometry of the patches and whether the inferred image locations relative to the reconstruction approximately matched the camera trajectory. Second, the completeness of the reconstructions was assessed using the fraction of aligned images as a metric for completeness of the SfM programs and the fraction of visible mesh elements (out of those determined to be potentially visible) as a completeness metric for the dual-camera SLAM system. For the SfM approaches, all sub-maps were considered, offering a charitable representation of their performance. On these six datasets, the dual-camera SLAM approach was able to successfully reconstruct repetitive environments in cases where traditional SfM systems either failed entirely or were unable to fully reconstruct the imaged location (Table I). COLMAP was better able to generate reconstructions than Metashape, but the reconstructions were typically broken into multiple (up to 22) fragmentary maps. In contrast, the dual-camera SLAM approach is able to produce a single, unified map. As examples, we show texture-mapped reconstructions of a salt marsh grassland (Fig. 2) and a campus lawn (Fig. 4A). The SfM approaches often incorrectly merged subsections due to visual aliasing (Fig. 4B).

Fig. 4: **Sample Reconstructions: A: Accurately reconstructed campus lawn using dual camera SLAM. B: SfM failure due to visual aliasing (blue squares represent aligned images). The three lines of images (highlighted in red) should be parallel but instead converge on a single point in the reconstruction.**

Fig. 3: **Documentation Camera Factor Graph: Circles represent model parameters estimated via global optimization and squares represent error terms, constraining the parameters.**
### _Accuracy of Dual Camera SLAM_
In all the test sites, the dual-camera SLAM system produced reconstructions that appeared reasonable and covered the entire imaged patch (Table I). For a more quantitative assessment of the reconstruction quality, distances between ground control points (AprilTags) in the reconstructions were compared with the true distances determined using RTK-GPS positions. At four locations on Sapelo Island, nine AprilTags were placed spanning the imaged patch: four at the corners forming a square defining the edges of the patch, four forming a nested square, and one at the center of the patch. Determination of AprilTag locations in the reconstructions was done as a post-processing step. After running the dual-camera SLAM program, the reconstructed landmark map from the documentation camera was exported as a point cloud and the documentation camera images and their associated poses were saved. A triangular surface mesh was fit to the point cloud. AprilTags were detected in the documentation camera images and their 3D locations determined via backprojection onto the mesh using the camera poses and inverse camera model. Since most AprilTags were viewed in multiple images, the backprojected 3D positions of all views were averaged to produce a single location for each tag. Euclidean distances between all reconstructed tag pairs and RTK-GPS positions were computed and compared (Fig. 5). The estimated inter-tag distances were generally in good agreement with the true distances (Fig. 5), though slightly overestimated (average 3%), most likely due to localization camera calibration errors. Nonetheless, the reconstructions were deemed sufficiently accurate for most ecological applications.
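The distance-based accuracy check described above reduces to a few lines of arithmetic. The sketch below, with hypothetical arrays tags_slam and tags_rtk holding the nine reconstructed and RTK-GPS tag positions, computes all pairwise distances and the through-origin regression slope reported in Fig. 5; it assumes both point sets are expressed in metres.

```python
import numpy as np
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distances between all pairs of 3D tag positions."""
    return np.array([np.linalg.norm(points[a] - points[b])
                     for a, b in combinations(range(len(points)), 2)])

def assess_accuracy(tags_slam, tags_rtk):
    """tags_slam / tags_rtk: hypothetical (9, 3) arrays of AprilTag
    positions from the reconstruction and from RTK-GPS, in metres."""
    est = pairwise_distances(tags_slam)
    true = pairwise_distances(tags_rtk)
    # Linear regression forced through the origin: slope = sum(xy)/sum(x^2).
    slope = np.sum(true * est) / np.sum(true ** 2)
    # One common R^2 definition; conventions for through-origin fits vary.
    resid = est - slope * true
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((est - est.mean()) ** 2)
    return slope, r2
```

A slope slightly above 1 corresponds to the small average overestimation of inter-tag distances noted above.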
## V Conclusions and Future Work
We proposed a dual-camera SLAM approach to map repetitive environments for use in environmental monitoring applications. While the proposed approach does entail a more complicated image acquisition setup, it offers much more reliable mapping of repetitive environments, typical of many ecosystems. Furthermore, decoupling the localization and documentation cameras allows use of cameras ideally suited to each task and flexible swapping of either camera as required for the task at hand. Future improvements include incorporation of additional constraints such as IMU or GPS data [35] into the factor graph for more accurate mapping and development of fully coupled optimization strategies for the localization and documentation camera reconstructions [26].
## Acknowledgements
This work was supported by the US National Science Foundation (DBI 2016741). We thank the University of Georgia Marine Institute on Sapelo Island for facilitating field work.
Fig. 5: Accuracy assessment from inter-tag distances measured using RTK-GPS (true distances) and in dual camera SLAM reconstructions (estimated distances) for four imaged patches (P1-P4). The solid line is a linear regression fit to the data forced through the origin (equation and R\\({}^{2}\\) listed on plot) and the dotted line is the 1:1 line.
Structure from Motion (SfM) techniques are being increasingly used to create 3D maps from images in many domains including environmental monitoring. However, SfM techniques are often confounded in visually repetitive environments as they rely primarily on globally distinct image features. Simultaneous Localization and Mapping (SLAM) techniques offer a potential solution in visually repetitive environments since they use local feature matching, but SLAM approaches work best with wide-angle cameras that are often unsuitable for documenting the environmental system of interest. We resolve this issue by proposing a dual-camera SLAM approach that uses a forward facing wide-angle camera for localization and a downward facing narrower angle, high-resolution camera for documentation. Video frames acquired by the forward facing camera are processed using a standard SLAM approach, providing a trajectory of the imaging system through the environment which is then used to guide the registration of the documentation camera images. Fragmentary maps, initially produced from the documentation camera images via monocular SLAM, are subsequently scaled and aligned with the localization camera trajectory and finally subjected to a global optimization procedure to produce a unified, refined map. An experimental comparison with several state-of-the-art SfM approaches shows the dual-camera SLAM approach to perform better in repetitive environmental systems based on select samples of ground control point markers.
arxiv-format/1807_01569v1.md | # The SEN1-2 Dataset for Deep Learning in SAR-Optical Data Fusion
M. Schmitt\\({}^{1}\\)
L. H. Hughes\\({}^{1}\\)
X. X. Zhu\\({}^{1,2}\\)
\\({}^{1}\\) Signal Processing in Earth Observation, Technical University of Munich (TUM), Munich, Germany - (m.schmitt,lloyd.hughes)@tum.de \\({}^{2}\\) Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, Germany - [email protected]
## 1 Introduction
Deep learning has had an enormous impact on the field of remote sensing in the past few years (Zhang et al., 2016, Zhu et al., 2017). This is mainly due to the fact that deep neural networks can model highly non-linear relationships between remote sensing observations and the eventually desired geographical parameters, which could not be represented by physically-interpretable models before. One of the most promising directions of deep learning in remote sensing certainly is its pairing with data fusion (Schmitt and Zhu, 2016), which holds especially for a combined exploitation of _synthetic aperture radar (SAR)_ and optical data as these data modalities are completely different from each other both in terms of geometric and radiometric appearance. While SAR images are based on range measurements and observe physical properties of the target scene, optical images are based on angular measurements and collect information about the chemical characteristics of the observed environment.
In order to foster the development of deep learning approaches for SAR-optical data fusion, it is of utmost importance to have access to big datasets of perfectly aligned images or image patches. However, gathering such a big amount of aligned multi-sensor image data is a non-trivial task that requires quite some engineering efforts. Furthermore, remote sensing imagery is generally rather expensive in contrast to conventional photographs used in typical computer vision applications. These high costs are mainly caused by the financial efforts associated with putting remote sensing satellite missions into space. This changed dramatically in 2014, when the SAR satellite Sentinel-1A, the first of the Sentinel missions, was launched into orbit by the European Space Agency (ESA) in the frame of the Copernicus program, which is aimed at providing an on-going supply of diverse Earth observation satellite data to the end user free-of-charge (European Space Agency, 2015).
Exploiting this novel availability of _big remote sensing data_, we publish the so-called _SEN1-2_ dataset with this paper. It is comprised of \\(282,384\\) SAR-optical patch-pairs acquired by Sentinel-1 and Sentinel-2. The patches are collected from locations spread across the land masses of the Earth and over all four seasons. The generation of the dataset, its characteristics and features, as well as some pilot applications are described in this paper.
## 2 Sentinel-1/2 Remote Sensing Data
The Sentinel satellites are part of the Copernicus space program of ESA, which aims to replace past remote sensing missions in order to ensure data continuity for applications in the areas of atmosphere, ocean and land monitoring. For this purpose, six different satellite missions focusing on different Earth observation aspects are put into operation. Among those missions, we focus on Sentinel-1 and Sentinel-2, as they provide the most conventional remote sensing imagery acquired by SAR and optical sensors, respectively.
### Sentinel-1
The Sentinel-1 mission (Torres et al., 2012) consists of two polar-orbiting satellites, equipped with C-band SAR sensors, which enable them to acquire imagery regardless of the weather.
Sentinel-1 works in a pre-programmed operation mode to avoid conflicts and to produce a consistent long-term data archive built for applications based on long time series. Depending on which of its four exclusive SAR imaging modes is used, resolutions down to \\(5\\) m with a wide coverage of up to \\(400\\) km can be achieved. Furthermore, Sentinel-1 provides dual polarization capabilities and very short revisit times of about \\(1\\) week at the equator. Since highly precise spacecraft positions and attitudes are combined with the high accuracy of the range-based SAR imaging principle, Sentinel-1 images come with high out-of-the-box geolocation accuracy (Schubert et al., 2015).
For the Sentinel-1 images in our dataset, so-called ground-range-detected (GRD) products acquired in the most frequently available interferometric wide swath (IW) mode were used. These images contain the \(\sigma^{0}\) backscatter coefficient in dB scale for every pixel at a pixel spacing of 5 m in azimuth and 20 m in range. For the sake of simplicity, we restricted ourselves to vertically polarized (VV) data, ignoring potentially available other polarizations. Finally, for precise ortho-rectification, restituted orbit information was combined with the 30 m-SRTM-DEM or the ASTER DEM for high latitude regions where SRTM is not available.
Since we want to leave any further pre-processing to the end user so that it can be adapted to fit the desired task, we have not carried out any speckle filtering.
### Sentinel-2
The Sentinel-2 mission (Drusch et al., 2012) comprises twin polar-orbiting satellites in the same orbit, phased at 180\({}^{\circ}\) to each other. The mission is meant to provide continuity for multi-spectral image data of the SPOT and LANDSAT kind, which have provided information about the land surfaces of our Earth for many decades. With its wide swath width of up to 290 km and its short revisit time of 10 days at the equator (with one satellite), and 5 days (with 2 satellites), respectively, under cloud-free conditions it is specifically well-suited to vegetation monitoring within the growing season.
For the Sentinel-2 part of our dataset, we have only used the red, green, and blue channels (i.e. bands 4, 3, and 2) in order to generate realistically looking RGB images. Since Sentinel-2 data are not provided in the form of satellite images, but as precisely geo-referenced _granules_, no further processing was required. Instead, the data had to be selected based on the amount of cloud coverage. For the initial selection, a database query for granules with less than or equal to 1% of cloud coverage was used.
## 3 The Dataset
In order to generate a multi-sensor SAR-optical patch-pair dataset, a relatively large amount of remote sensing data with very good spatial alignment needs to be acquired. In order to do this in a mostly automatic manner, we have utilized the cloud-based remote sensing platform Google Earth Engine (Gorelick et al., 2017). The individual steps of the dataset generation procedure are described in the following.
### Data Preparation in Google Earth Engine
The major strengths of Google Earth Engine are two-fold from the point of view of our dataset generation endeavour: On the one hand, it provides an extensive data catalogue containing several petabytes of remote sensing imagery - including all available Sentinel data - and other freely available geodata. On the other hand, it provides a powerful programming interface that allows to carry out data preparation and analysis tasks on Google's computing centers. Thus, we have used it to select, prepare and download the Sentinel-1 and Sentinel-2 imagery from which we have later extracted our patch-pairs. The workflow of the GEE-based image download and patch preparation is sketched in Fig. 1. In detail, it comprises the following steps:
#### 3.1.1 Random ROI Sampling
In order to generate a dataset that represents the versatility of our Earth as well as possible, we wanted to sample the scenes used as basis for dataset production over the whole globe. For this task, we use Google Earth Engine's ee.FeatureCollection.randomPoints() function to randomly sample points from a uniform spatial distribution. Since many remote sensing investigations focus on urban areas and since urban areas contain more complex visual patterns than rural areas, we introduce a certain artificial bias towards urban areas by sampling 100 points over all land masses of the Earth and another 50 points only over urban areas. The shape files for both land masses and urban areas were provided by the public domain geo-data service www.naturalearthdata.com at a scale of 1:50m. If two points are located in close proximity to each other, we removed one of them to ensure non-overlapping scenes.
This sampling process is carried out for four different seed values (1158, 1868, 1970, 2017). The result of the random ROI sampling is illustrated in Fig. 2a.
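A minimal sketch of this sampling step using the Earth Engine Python API is given below. The ee.FeatureCollection.randomPoints() call is the function named above; the asset ids for the Natural Earth land and urban polygons are placeholders and would need to point to the actually uploaded shape files.

```python
import ee
ee.Initialize()

# Placeholder asset ids: Natural Earth 1:50m land and urban-area polygons
# uploaded to Earth Engine (the real ids are an assumption).
land = ee.FeatureCollection('users/example/ne_50m_land')
urban = ee.FeatureCollection('users/example/ne_50m_urban_areas')

def sample_rois(seed):
    """100 random points over all land masses plus 50 over urban areas,
    mirroring the biased sampling described above."""
    pts_land = ee.FeatureCollection.randomPoints(land.geometry(), 100, seed)
    pts_urban = ee.FeatureCollection.randomPoints(urban.geometry(), 50, seed)
    return pts_land.merge(pts_urban)

# One point set per seed, matching the four dataset sub-groups.
rois = {seed: sample_rois(seed) for seed in (1158, 1868, 1970, 2017)}
```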
#### 3.1.2 Data Selection
In the second step, we use GEE's tools to filter image collections to select the Sentinel-1/Sentinel-2 image data for our scenes. Since we want to use only recent data acquired in 2017, this first means that we structure the year into the four meteorological seasons: winter (1 December 2016 to 28 February 2017), spring (1 March 2017 to 30 May 2017), summer (1 June 2017 to 31 August 2017), and fall (1 September 2017 to 30 November 2017). Each season is then associated to one of the four sets of random ROIs, thus providing us with the top-level dataset structure (cf. Fig. 3): We structure the final dataset into four distinct sub-groups ROIs1158_spring, ROIs1868_summer, ROIs1970_fall, and ROIs2017_winter.
Then, for each ROI, we filter for Sentinel-2 images with a maximum cloud coverage of 1% and for Sentinel-1 images acquired in IW mode with VV polarization. If no cloud-free Sentinel-2 image or no VV-IW Sentinel-1 image is available within the corresponding season, the ROI is discarded. Thus, the number of ROIs is significantly reduced from about \\(600\\) to about \\(429\\). For example, all ROIs that were located in Antarctica are rendered obsolete, since the geographical coverage of Sentinel-2 is restricted to \\(56^{\\circ}\\) South to \\(83^{\\circ}\\) North.
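The following sketch illustrates this per-ROI filtering with the Earth Engine Python API, using the public COPERNICUS/S2 (Level-1C) and COPERNICUS/S1_GRD collections and their standard metadata properties; the thresholds mirror the description above, while the function wrapper itself is only illustrative.

```python
def select_images(roi, start, end):
    """Season-constrained image selection for one ROI; returns (s1, s2)
    collections, or None if the ROI must be discarded."""
    s2 = (ee.ImageCollection('COPERNICUS/S2')
          .filterBounds(roi).filterDate(start, end)
          .filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', 1)))
    s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(roi).filterDate(start, end)
          .filter(ee.Filter.eq('instrumentMode', 'IW'))
          .filter(ee.Filter.listContains('transmitterReceiverPolarisation',
                                         'VV')))
    if s1.size().getInfo() == 0 or s2.size().getInfo() == 0:
        return None  # no suitable imagery within the season: drop the ROI
    return s1, s2
```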
#### 3.1.3 Image Mosaicking
Continuing with the selected image data, we use the Google Earth Engine in-built functions ee.ImageCollection.mosaic() and ee.Image.clip() to create one single image for each ROI, clipped to the respective ROI extent. The ee.ImageCollection.mosaic() function simply composites overlapping images according to their order in the collection in a _last-on-top_ sense. As mentioned in Section 2.2, we select only bands 4, 3, and 2 for Sentinel-2 in order to create RGB images.
#### 3.1.4 Image Export
Finally, we export the images created in the previous steps as GeoTiffs using the GEE function Export.image.toDrive and a scale of 10m. The downloaded GeoTiffs are then pre-processed for further use by cutting the gray values to the \\(\\pm 2.5\\sigma\\) range, scaling them to the interval \\([0;1]\\) and performing a contrast-stretch. These corrections are applied to all bands individually.
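A compact sketch of the mosaicking, clipping, export, and per-band post-processing steps is given below. The export call and band names follow the standard Earth Engine API and Sentinel products; the normalize() helper applies the \(\pm 2.5\sigma\) clipping and rescaling described above and is an illustrative reconstruction, not the exact code used.

```python
import numpy as np

def export_roi(s1, s2, roi, name):
    """Mosaic each collection, clip to the ROI, and export as GeoTIFF."""
    s2_rgb = s2.mosaic().select(['B4', 'B3', 'B2']).clip(roi)
    s1_vv = s1.mosaic().select('VV').clip(roi)
    for img, tag in ((s2_rgb, 's2'), (s1_vv, 's1')):
        ee.batch.Export.image.toDrive(image=img,
                                      description=f'{name}_{tag}',
                                      region=roi, scale=10).start()

def normalize(band):
    """Per-band post-processing of a downloaded GeoTIFF band: clip the
    gray values to the +/-2.5 sigma range and rescale to [0, 1]
    (a simple contrast stretch)."""
    mu, sigma = band.mean(), band.std()
    clipped = np.clip(band, mu - 2.5 * sigma, mu + 2.5 * sigma)
    return (clipped - clipped.min()) / (clipped.max() - clipped.min())
```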
#### 3.1.5 First Manual Inspection
We then visually inspect all downloaded scenes for severe problems. These can mostly belong to one of the following categories:

* Large _no-data_ areas. Unfortunately, the ee.ImageCollection.mosaic() function does not return any error message if it does not find a suitable image to fill the whole ROI with data. This mostly happens to Sentinel-2, when no sufficiently cloud-free granule is available for a given time period.
* Strong cloud coverage. The cloud-coverage metadata information that comes with every Sentinel-2 granule is only a global parameter. Thus, it can happen that the whole granule only contains a few clouds, but the part covering our ROI is where all the clouds reside.
* Severely distorted colors. Sometimes, we observed very unnatural colors for Sentinel-2 images. Since we want to create a dataset that contains naturally looking RGB images for Sentinel-2, we also removed some Sentinel-2 images with all too strange colors.
After this first manual inspection, only \(258\) scenes/ROIs remain (cf. Fig. 2b).
#### 3.1.6 Tiling
Since our goal is a dataset of patch-pairs that can be used to train machine learning models aiming at various data fusion tasks, we eventually seek to generate patches of \\(256\\times 256\\) pixels. Using a stride of 128, we reduce the overlap between neighboring patches to only \\(50\\%\\) while maximising the number of independent patches we can get out of the available scenes. We end up with \\(298,790\\) Sentinel-1/Sentinel-2 patch-pairs after this step.
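The tiling step amounts to a simple strided crop. A minimal sketch, with hypothetical s1_scene/s2_scene arrays on identical grids, is:

```python
import numpy as np

def tile(image, size=256, stride=128):
    """Cut an (H, W, C) scene into size x size patches with 50% overlap."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, stride)
            for c in range(0, w - size + 1, stride)]

# Applying tile() to co-registered Sentinel-1/-2 scenes with identical
# pixel grids yields spatially aligned patch-pairs:
# pairs = list(zip(tile(s1_scene), tile(s2_scene)))
```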
#### 3.1.7 Second Manual Inspection
In order to remove sub-optimal patches that, e.g., still contain small clouds or visible mosaicking seamlines, we have again inspected all patches visually. In this step, \\(16,406\\) patch-pairs are manually removed, leaving the final amount of \\(282,384\\) quality-controlled patch-pairs. Some examples are shown in Fig. 4.
### Dataset Availability
The _SEN1-2_ dataset is shared under the open access license CC-BY and available for download at a persistent link provided by the library of the Technical University of Munich: [https://mediatum.ub.tum.de/1436631](https://mediatum.ub.tum.de/1436631). This paper must be cited when the dataset is used for research purposes.
## 4 Example Applications
In this section, we present some example applications, for which the dataset has been used already. These should serve as inspiration for future use cases and ignite further research on SAR-optical deep learning-based data fusion.
### Colorizing Sentinel-1 Images
The interpretation of SAR images is still a highly non-trivial task, even for well-trained experts. One reason for this is the missing color information, which supports any human image understanding endeavour. One promising field of application for the _SEN1-2_ dataset thus is to learn to colorize gray-scale SAR images with color information derived from corresponding optical images, as we have proposed earlier (Schmitt et al., 2018). In this approach, we make use of SAR-optical image fusion to create artificial color SAR images as training examples, and of the combination of variational autoencoder and mixture density network proposed by (Deshpande et al., 2017) to learn a conditional color distribution, from which different colorization samples can be drawn. Some first results from training on \(252,\!384\) _SEN1-2_ patch pairs are displayed in Fig. 5.

Figure 1: Flowchart of the semi-automatic, Google Earth Engine-based patch extraction procedure.

Figure 2: Distribution of the ROIs sampled uniformly over the land masses of the Earth: (a) Original ROIs, (b) final set of scenes after removal of cloud- and/or artifact-affected ROIs.
### SAR-optical Image Matching
Tasks such as image co-registration, 3D stereo reconstruction, or change detection rely on being able to accurately determine similarity (i.e. matching) between corresponding parts in different images. While well-established methods and similarity measures exist to achieve this for mono-modal imagery, the matching of multi-modal data remains challenging to this day. The _SEN1-2_ dataset can assist in creating solutions in the field of multi-modal image matching by providing the large quantities of data required to exploit modern _deep matching_ approaches, such as proposed by [10] or [12]: Using a pseudo-siamese convolutional neural network architecture, corresponding SAR-optical image patches of a _SEN1-2_ test subset can be identified with an accuracy of \\(93\\%\\). The confusion matrix for the model of [12] trained on \\(300\\),\\(000\\) corresponding and non-corresponding patch pairs created from a _SEN1-2_ training subset can be seen in Tab. 1. Furthermore, some exemplary matches achieved on the test subset are shown in Fig. 6.
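As a quick consistency check, the overall accuracy quoted above can be recovered from the per-class rates in Tab. 1; with balanced test classes it is simply the mean of the diagonal entries. A short sketch:

```python
import numpy as np

# Rows: true class (non-match, match); columns: predicted class.
# Per-class rates taken from Tab. 1.
conf = np.array([[0.9384, 0.0616],
                 [0.0602, 0.9398]])

# With balanced classes, overall accuracy equals the mean per-class recall.
accuracy = np.mean(np.diag(conf))
print(f'{accuracy:.1%}')  # -> 93.9%
```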
### Generating Artificial Optical Images from SAR Inputs
Another possible field of application of the _SEN1-2_ dataset is to train generative models that allow the prediction of artificial SAR images from optical input data [13, 10] or artificial optical imagery from SAR inputs [14, 15, 16]. Some preliminary examples based on the well-known generative adversarial network (GAN) pix2pix [17], trained on \(108,\!221\) _SEN1-2_ patch pairs, are shown in Fig. 7.
\begin{table}
\begin{tabular}{|l|l|l|} \hline \(\hat{y}/y\) & non-match & match \\ \hline non-match & 93.84\% & 6.16\% \\ \hline match & 6.02\% & 93.98\% \\ \hline \end{tabular}
\end{table}
Table 1: Confusion matrix for pseudo-siamese patch matching trained on _SEN1-2_.
Figure 4: Some exemplary patch-pairs from the _SEN1-2_ dataset. Top row: Sentinel-1 SAR image patches, bottom row: Sentinel-2 RGB image patches.
Figure 5: Some results for colorized SAR image patches. In each row, from left to right: original Sentinel-1 SAR image patch, corresponding Sentinel-2 RGB image patch, artificial color SAR patch based on color-space-based SAR-optical image fusion, artificial color SAR image predicted by a deep generative model.
Figure 3: Structure of the final dataset.
\\(108,221\\)_SEN1-2_ patch pairs are shown in Fig. 7.
## 5 Strengths and Limitations of the Dataset
To our knowledge, _SEN1-2_ is the first dataset providing a really large amount (\(>100,\!000\)) of co-registered SAR and optical image patches. The only other existing dataset in this domain is the so-called SARptical dataset published by (Wang and Zhu, 2018). In contrast to the _SEN1-2_ dataset, it provides very-high-resolution image patches from TerraSAR-X and aerial photogrammetry, but is restricted to a mere \(10,\!000\) patches extracted from a single scene, which is possibly not sufficient for many deep learning applications - especially since many patches show an overlap of more than \(50\%\). With its \(282,\!384\) patch-pairs spread over the whole globe and all meteorological seasons, _SEN1-2_ will thus be a valuable data source for many researchers in the field of SAR-optical data fusion and remote sensing-oriented machine learning. A particular advantage is that the dataset can easily be split into various deterministic subsets (e.g. according to scene or according to season), so that truly independent training and testing datasets can be created, supporting unbiased evaluations with regard to unseen data.
However, also _SEN1-2_ does not come without limitations: For example, we restricted ourselves to RGB images for the Sentinel-2 data, which is possibly insufficient for researchers working on the exploitation of the full radiometric bandwidth of multi-spectral satellite imagery. Furthermore, at the time we carried out the dataset preparation, GEE stocked only Level-1C data for Sentinel-2, which basically means that the pixel values represent top-of-atmosphere (TOA) reflectances instead of atmospherically corrected bottom-of-atmosphere (BOA) information. We are planning to extend the dataset for a future version 2 release accordingly.
## 6 Summary and Conclusion
With this paper, we have described and released the _SEN1-2_ dataset, which contains \\(282,\\!384\\) pairs of SAR and optical image patches extracted from versatile Sentinel-1 and Sentinel-2 scenes. We assume this dataset will foster the development of machine learning, and in particular, deep learning approaches in the field of satellite remote sensing and SAR-optical data fusion. For the future, we plan on releasing a refined, second version of the dataset, which contains not only RGB Sentinel-2 images, but full multi-spectral Sentinel-2 images including atmospheric correction. In addition, we might add coarse land use/land cover (LULC) class information to each patch-pair in order to foster also developments in the field of LULC classification.
## Acknowledgements
This work is jointly supported by the Helmholtz Association under the framework of the Young Investigators Group SiPEO (VH-NG-1018, www.sipeo.bgu.tum.de), the German Research Foundation (DFG) as grant SCHM 3322/1-1, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement ERC-2016-StG-714087, Acronym: _So2Sat_).
## References
* Deshpande, A., Lu, J., Yeh, M.-C., Chong, M. J. and Forsyth, D., 2017. Learning diverse image colorization. In: _Proc. CVPR_, Honolulu, HI, USA, pp. 6837-6845.
* Drusch, M., Del Bello, U., Carlier, S., Colin, O., Fernandez, V., Gascon, F., Hoersch, B., Isola, C., Laberinti, P., Martimort, P. et al., 2012. Sentinel-2: ESA's optical high-resolution mission for GMES operational services. _Remote Sensing of Environment_ 120, pp. 25-36.
* European Space Agency, 2015.
* Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D. and Moore, R., 2017. Google Earth Engine: Planetary-scale geospatial analysis for everyone. _Remote Sensing of Environment_ 202, pp. 18-27.
* Isola, P., Zhu, J.-Y., Zhou, T. and Efros, A. A., 2017. Image-to-image translation with conditional adversarial networks. In: _Proc. CVPR_, Honolulu, HI, USA, pp. 1125-1134.
* Ley, A., d'Hondt, O., Valade, S., Hansch, R. and Hellwich, O., 2018. Exploiting GAN-based SAR to optical image transcoding for improved classification via deep learning. In: _Proc. EUSAR_, Aachen, Germany, pp. 396-401.
* Marmanis, D., Yao, W., Adam, F., Datcu, M., Reinartz, P., Schindler, K., Wegner, J. D. and Stilla, U., 2017. Artificial generation of big data for improving image classification: a generative adversarial network approach on SAR data. In: _Proc. BiDS_, Toulouse, France, pp. 293-296.
* Merkle, N., Auer, S., Muller, R. and Reinartz, P., 2018. Exploring the potential of conditional adversarial networks for optical and SAR image matching. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_. in press.
* Merkle, N., Luo, W., Auer, S., Muller, R. and Urtasun, R., 2017. Exploiting deep matching and SAR data for the geo-localization accuracy improvement of optical satellite images. _Remote Sensing_ 9(9), pp. 586-603.
* Schmitt, M. and Zhu, X. X., 2016. Data fusion and remote sensing: an ever-growing relationship. _IEEE Geosci. Remote Sens. Mag._ 4(4), pp. 6-23.
* Schmitt, M., Hughes, L. H., Korner, M. and Zhu, X. X., 2018. Colorizing Sentinel-1 SAR images using a variational autoencoder conditioned on Sentinel-2 imagery. In: _Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci._, Vol. XLII-2, pp. 1045-1051.
* Schubert, A., Small, D., Miranda, N., Geudtner, D. and Meier, E., 2015. Sentinel-1A product geolocation accuracy: Commissioning phase results. _Remote Sensing_ 7(7), pp. 9431-9449.
* Torres, R., Snoeij, P., Geudtner, D., Bibby, D., Davidson, M., Attema, E., Potin, P., Rommen, B., Floury, N., Brown, M. et al., 2012. GMES Sentinel-1 mission. _Remote Sensing of Environment_ 120, pp. 9-24.
* Wang, P. and Patel, V. M., 2018. Generating high quality visible images from SAR images using CNNs. _arXiv:1802.10036_.
* Wang, Y. and Zhu, X. X., 2018. The SARptical dataset for joint analysis of SAR and optical image in dense urban area. _arXiv:1801.07532_.
* Zhang, L., Zhang, L. and Du, B., 2016. Deep learning for remote sensing data. _IEEE Geoscience and Remote Sensing Magazine_ 4(2), pp. 22-40.
* Zhu, X. X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F. and Fraundorfer, F., 2017. Deep learning in remote sensing: A comprehensive review and list of resources. _IEEE Geoscience and Remote Sensing Magazine_ 5(4), pp. 8-36.

_This is a pre-print of a paper accepted for publication in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Please refer to the original (open access) publication from October 2018._
While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example for that is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the _SEN1-2_ dataset to foster deep learning research in SAR-optical data fusion. _SEN1-2_ comprises \\(282,384\\) pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and creation of artificial optical images from SAR input data. Since _SEN1-2_ is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion.
Synthetic aperture radar (SAR), optical remote sensing, Sentinel-1, Sentinel-2, deep learning, data fusion
arxiv-format/2404_17434v1.md | # Exploring Wireless Channels in Rural Areas:
A Comprehensive Measurement Study
Tianyi Zhang, Guoying Zu, Taimoor Ul Islam, Evan Gossling, Sarath Babu, Daji Qiao, Hongwei Zhang
Department of Electrical and Computer Engineering, Iowa State University, U.S.A.
{tianyiz, gyzu, tislam, evang, sarath4, daji, hongwei}@iastate.edu
## I Introduction
Numerous cutting-edge technologies are being integrated into agricultural practices, revolutionizing precision and automation in the field. These advancements heavily rely on the use of cameras, sensors, and unmanned vehicles to optimize agricultural processes. As demand for the use of such devices in agriculture continues to grow, it has become imperative to ensure reliable wireless communication over a wireless network that possesses characteristics such as large-scale coverage, affordability, high capacity, and low latency. A reliable wireless connection depends not only on advanced hardware and algorithms but also on a thorough understanding of wireless channel behaviors. Some studies, such as the one by Wang et al. [1], aim to measure and model the behavior of 5G wireless channels. However, agricultural areas are located in rural regions, where the wireless channel characteristics may significantly differ from urban and suburban areas.
In recent years, wireless sensor networks have gained significant attention in agriculture, leading to numerous studies focusing on measuring wireless channels in various agricultural scenarios. For instance, Jawad et al. derived empirical path loss models in farm fields using drones [2]. In a similar way, measurements were conducted in apple orchards in [3] and [4]. Raheemah et al. generated empirical path loss models in [5] for greenhouses. Measurements from cornfields were taken by Pan et al. in [6], while Zhu et al. conducted measurements in a pig breeding farm [7]. However, these studies primarily focused on Zigbee at 2.4 GHz and their path loss models focus on the impact of distance on the wireless channels.
As discussed in [8] and [9], and further verified in our previous work [10], weather conditions have a significant impact on wireless channels. In our previous study, we measured the wireless channels of TV white space (TVWS) in a crop farm and found that channel quality varies considerably between morning and mid-day. However, due to the hardware limitations, we could not collect accurate weather information to establish the quantitative relationship between channel quality and weather conditions. Now, with the ARA wireless living lab [11], we have the ability to collect and analyze wireless channel information with the help of accurate and comprehensive weather data.
ARA [11], as part of the NSF Platforms for Advanced Wireless Research (PAWR) program, is an at-scale platform for advanced wireless research deployed across the Iowa State University (ISU) campus, City of Ames, Iowa, USA, surrounding research and producer farms, and rural communities in central Iowa, spanning a rural area with a diameter of over 60 km. ARA serves as a wireless living lab for smart and connected rural communities, facilitating the research and development of rural-focused wireless technologies that provide affordable, high-capacity connectivity to rural communities and industries such as agriculture.
Leveraging the ARA wireless living lab, we conducted a measurement study between March and June of 2023 to analyze the TVWS and mid-band wireless channels in various crop and livestock farms. Key contributions of this measurement study are summarized below:
* We performed a comprehensive analysis of multi-band wireless channels using wireless channel measurement data collected by ARA base stations (BSs) and user equipment (UEs), and weather data collected by the weather station and the disdrometer. The weather dataset includes information such as rain rate, raindrop size, humidity, and temperature.
* We gathered and analyzed path loss information from different types of building blockages in various crop and livestock farms, which could be valuable for both radio deployments and algorithm designs in the rural settings.
* We will make publicly available a dataset that contains time-stamped wireless channel measurements and weather information, including the channel matrix of a MIMO system, which could be beneficial to data-driven wireless communications research.
The rest of this paper is organized as follows: Section II presents an overview of the entire system. Section III discusses the methodology employed in this measurement study. We present and analyze the measurement results in Section IV and section V summarizes the key findings.
## II System overview
ARA has four outdoor base stations (BSs) and 13 user equipment (UE) nodes at fixed locations in Phase-1. In this paper, our research focuses specifically on the BS on the rooftop of ISU Wilson Residence Hall, which is surrounded by essential application facilities of ARA. To the south are the ISU dairy farm, sheep teaching farm, and Curtiss crop farm. On the west side, a few City of Ames facilities and several farms are located, while agricultural vehicles operate on the east side of the building.
### _Base Station_
The rooftop base station at Wilson Hall, located approximately 120 ft above the ground, is equipped with a comprehensive array of wireless equipment. This includes 1\\(\\times\\)Skylark TVWS BS, 3\\(\\times\\)Ericsson mid-band BSs, 3\\(\\times\\)Ericsson mmWave BSs, 3\\(\\times\\)NI N320 Software-defined radios (SDRs), 1\\(\\times\\)Keysight RF Sensor, 1\\(\\times\\)weather station, and 1\\(\\times\\)disdrometer. The BS consists of three sectors with azimuths of 60, 180, and 300 degrees, and each sector covers 120 degrees. Fig. 1 shows the antenna layout of the northwest sector (an azimuth of 60 degrees).
The Skylark BS is a commercial-off-the-shelf (COTS) platform designed to operate in the TVWS bands. It comprises one central unit (CU), one distributed unit (DU), and three radio units (RUs). Each RU is equipped with 14 antennas. Skylark supports many-antennas multiple-input multiple-output (mMIMO) technology and is poised to support Open-RAN in the future. The Skylark deployment as part of the ARA wireless living lab, named _AraMIMO_[12], provides a set of APIs, enabling control, configuration and measurement, thereby facilitating comprehensive research on whole-stack mMIMO systems.
In the mid-band frequency range, ARA is equipped with three Ericsson AIR6419 BSs operating in the range of 3450-3550 MHz. These BSs support mMIMO as well as CSI-RS and SRS beam forming. ARA has also deployed three NI USRP N320 SDRs operating in the mid-band. SDRs are programmable transceivers that offer a flexible, reconfigurable, and programmable framework for various wireless technologies, eliminating the need for hardware updates. In ARA, SDRs are integrated with power amplifiers (PA) and low-noise amplifiers (LNA) to enhance signal strength in outdoor environments. These amplifiers operate between 3400-3800 MHz. The SDRs can be controlled using USRP Hardware Driver (UHD) [13] or GNURadio [14], enabling functions such as spectrum sensing, signal generation and analysis, as well as running the full-stack LTE and 5G using open-source software such as srsRAN [15] and OpenAirInterface [16].
### _User Equipment_
ARA is equipped with two types of UEs: fixed-location UE and portable UE. Fixed-location UEs are strategically placed in crop and livestock farms to facilitate agricultural and livestock sciences research, while portable UEs are designed to be mounted on various vehicles used for agriculture, school and public transport, and fire and safety services. In this measurement study, portable UEs are used to assess how wireless links are affected by their surrounding environments, e.g., blockage characteristics of various farm buildings.
As depicted in Fig. 2, each UE is housed within a box that consists of a Skylark customer premises equipment (CPE) operating in the TVWS band, a Quectel UE to communicate with Ericsson BS in both mid-band and mmWave band, and an NI B210 SDR operating in the mid-band. In addition, each UE box is equipped with a Cradlepoint IBR600C router, enabling management and provisioning of the UE devices through the ARA portal [17].
### _Weather Station and Disdrometer_

ARA features weather stations and high-precision disdrometers at BS sites (Fig. 3) to collect weather information. These devices enable continuous collection of weather data such as temperature, humidity, rain rate, and raindrop size. By correlating weather data with channel measurement results, we could gain a comprehensive understanding of how the environmental variables impact the wireless channel condition. This knowledge serves as a crucial foundation for uninterrupted ultra-reliable low-latency communication (URLLC) under all-day, all-weather conditions.
## III Methodology
The measurement study took place between March and June of 2023 and is divided into two parts: fixed-location UE measurements and portable UE measurements, each serving different purposes.
### _Fixed-location UE measurements_
The primary goal of the fixed-location UE measurements was to collect data and study the impact of weather conditions. Automated scripts were developed and run on both BS and UE host computers for data collection. These scripts allow for the customization of measurement parameters such as starting time, duration, center frequency, and bandwidth. In this paper, we focus our study on Skylark (TVWS band) and Ericsson/Quectel (mid-band). At the UE side, the scripts were designed to collect only the received signal strength, while on the BS side, the scripts collected throughput, latency, and signal-to-noise ratio (SNR). Furthermore, the scripts at the BS side initiate the scripts at the UE side to ensure synchronized data collection at both ends.
In the experiments, we mainly used a fixed-location UE deployed at the Curtiss Farm field and the BS on the rooftop of Wilson Hall. The geographical locations of these nodes are shown in Fig. 4, with a line-of-sight (LOS) path of 0.94 miles between them, enabling a reliable connection throughout the entire duration of the experiments. This allows for long-term automated measurements without interruption. Data from the TVWS bands are collected at 2-second intervals, while for the mid-band, the interval is set to 8 seconds.
Previous studies have demonstrated that wireless communication, particularly in higher frequency bands, is susceptible to precipitation [9]. In this work, we collect various wireless channel parameters such as path loss, SNR, throughput, and latency, under different weather conditions and across different frequency bands. Measurements were taken for several hours to account for varying levels of precipitation. Weather information reported by the weather station and disdrometer was also incorporated to facilitate the study of the impact of different levels of precipitation.
Furthermore, to study the impact of humidity on wireless link quality, as acknowledged in previous studies [10, 18], we also collected data during various time intervals on sunny days and simultaneously recorded humidity data using the weather station.
### _Portable UE measurements_
We used the portable UEs to compare the wireless link performance at different locations. Even though there exist numerous mature channel models describing the change in channel behavior with distance, this paper specifically focuses on studying the blockages caused by buildings in farms, which have not been extensively investigated so far. Farm environments consist of different types of buildings with unique structures designed for specific purposes, such as crop storage, agricultural machinery storage, and sheep/cattle breeding. Such specialized structures cannot be easily modeled using existing models. Therefore, the objective of this section is to fill the gap in knowledge regarding blockages caused by farm buildings.
The portable UE measurements were carried out at three farm sites: Curtiss Farm, Sheep Farm, and Dairy Farm, as shown in Fig. 5. Curtiss Farm is a crop farm cultivating corn and soybeans. Our measurements at Curtiss Farm were divided into three groups, as shown in Fig. 6. Group A involves measuring a barn used for agricultural machinery storage, with a large gate facing north. We conducted measurements at five locations, with an additional reference location, under two conditions: (1) with the gate opened and (2) with the gate closed. The five locations consist of one situated to the north of the barn, one positioned to the south of the barn, and three locations inside the barn itself, namely inside-north, inside-middle, and inside-south. The reference location is in the nearby field at around the same distance toward the Wilson Hall BS, but with a clear LOS path. This is to facilitate the comparison of the building's impact on wireless links. Groups B and C focused on measuring the blockage caused by trees and a metal crop storage barn, respectively. Measurements were taken both to the north and the south of the trees/barn.
Similar to the Group A measurements at Curtiss Farm, we placed a portable UE to the north and south of the Sheep Farm, as well as at three locations inside the building. The fixed-location UE deployed on the rooftop near the north end of the building is used as the reference node.

The measurements at Dairy Farm were also divided into three groups, as depicted in Fig. 7. Group A involves measurements at the lactation barn. Measurements were taken to the north, to the south, and inside the barn. Group B includes two hoop houses, one facing the west and another facing the south. We performed the measurements on both hoop houses due to their different orientations. Group C is dedicated to the measurement of the blockage resulting from a large hay pile. Although hay piles are not as tall as buildings, we include this measurement to account for their potential blockage effects, as hay piles are a common source of blockage in farms.
## IV Measurement results
In this section, we present a subset of our analysis highlighting the practical value of weather information and the distinctive data regarding the impact of farm buildings. Additional data will be made accessible to the public via the ARA data warehouse.
### _Impact of Rain_
As mentioned in Section III-A, we use a fixed-location UE at Curtiss Farm to evaluate the impact of different weather conditions, including rain. Among the observed weather events, we focus on analyzing the effects of a single rain event in this paper to isolate the influence of other uncontrollable variables. To gain a better understanding of the impact of rain, we present Figs. 8 and 9, which organize the data by dividing the rain rate into five distinct levels. Higher rain rates result in increased path loss. However, even in the presence of the highest rain rate, the received signal strength only experiences a drop of 1.49 dB in the mid-band and 1.09 dB in the TVWS band when compared to no rain.
The ITU-R (International Telecommunication Union - Recommendations) provides a rain attenuation model, P.838-3, in [19], which predicts the attenuation caused by rain based on the rain rate within the frequency range of 1-1,000 GHz. The specific attenuation, denoted as \\(\\gamma_{R}\\), is determined using a power-law relationship with the rain rate \\(R\\):
\\[\\gamma_{R}=kR^{\\alpha}, \\tag{1}\\]
where coefficients \(k\) and \(\alpha\) are functions of frequency and can be calculated using Eqns. (2) and (3) in [19]. According to this model, the worst-case attenuation caused by rain is estimated to be 0.00226 dB, which is significantly lower than what we have observed from Figs. 8 and 9. One plausible explanation for this substantial attenuation discrepancy is the influence of surface water on the antenna. To gain a deeper understanding of the underlying causes behind this inconsistency, further investigation is warranted. Moreover, there exists potential for the development of a new model designed to account for these variations.
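For reference, equation (1) can be evaluated in a few lines; note that the coefficient values below are placeholders, since the actual \(k\) and \(\alpha\) for a given frequency and polarization must be computed from the tables and fitting formulas of Rec. ITU-R P.838-3.

```python
def rain_attenuation_db(rain_rate_mm_h, k, alpha, path_km):
    """Specific attenuation gamma_R = k * R^alpha in dB/km (Eq. (1)),
    scaled by the path length; k and alpha per Rec. ITU-R P.838-3."""
    return k * rain_rate_mm_h ** alpha * path_km

# Illustrative only: k and alpha for ~3.5 GHz must come from P.838-3;
# the values below are placeholders, not the recommendation's numbers.
loss_db = rain_attenuation_db(rain_rate_mm_h=20.0, k=1e-4, alpha=1.0,
                              path_km=1.5)
```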
In addition to the rain rate, we also examine the impact of the raindrop size. Fig. 10 shows limited differentiation in both mean and confidence interval for different raindrop diameters, compared to the previous figures. This lack of distinction could primarily be attributed to the mixing of raindrops with varying diameters in the data. For instance, based on the raw data obtained from the disdrometer, small raindrops (with diameter \\(\\leq\\) 1 mm) are present across all levels of rain rate. Hence, it is not recommended that the raindrop diameter be used as a reliable indicator of the intensity of rain attenuation.
### _Impact of Humidity_
In our previous study in [10], we observed a significant impact of time on path loss, with notable variations between morning and mid-day. While we hypothesized that humidity may be a contributing factor, there does not exist sufficient data to validate this hypothesis. In this study, we collected wireless channel information alongside humidity measurements to investigate further. The results, depicted in Figs. 11 and 12, illustrate the influence of humidity on the TVWS and mid-band channels throughout a clear day. Evidently, humidity exhibits an inverse correlation with received signal strength.
To gain a more profound insight into the impact of humidity, we conducted a thorough analysis, calculating the correlation coefficient. This coefficient serves as a valuable tool for assessing the statistical relationship between these two variables, yielding values that span from -1 to 1. A coefficient of 1 signifies a perfect positive correlation, indicating a direct and proportional relationship between the variables, while -1 denotes a perfect negative or inverse correlation. A correlation coefficient of 0, on the other hand, implies the absence of any linear relationship.
For these humidity measurements, the correlation coefficient shows a strong negative correlation (-0.94) between the RSRP and humidity in the mid-band, while the correlation (-0.55) is relatively weaker in TVWS. These observations align with the general expectations, as higher frequency bands tend to experience more signal absorption by water vapor when the frequency is less than 20 GHz.
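The correlation coefficients above can be reproduced directly from the time-aligned measurement series; a minimal sketch, with hypothetical rsrp and humidity arrays taken from the logs, is:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two measurement series."""
    return np.corrcoef(x, y)[0, 1]

# rsrp and humidity are hypothetical, time-aligned arrays; values near -1
# reproduce the inverse relationship reported above.
# r = pearson(rsrp, humidity)
```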
### _Impact of Temperature_
Studies such as [8, 9] consider temperature to be as important as humidity in affecting wireless channels. In our investigation, we also gathered temperature data to assess its impact. To illustrate our findings, we present the results in Figs. 13 and 14. Both figures demonstrate a positive correlation between temperature and received signal strength. Through correlation coefficient calculations, we found that the TVWS band exhibits a stronger correlation (0.91) with temperature. In contrast, the correlation between temperature and the mid-band is relatively weaker at 0.38. We have not yet found a reasonable explanation for this discrepancy. Considering the insights gained from the humidity analysis in Section IV-B, further investigation is needed to determine whether temperature has a more pronounced impact than humidity in lower frequency bands, and the opposite trend in higher frequency bands.

Fig. 7: Three groups of measurements at Dairy Farm.
### _Impact of Farm Buildings_
As discussed in Section III-B, all measurement locations are to the south of the Wilson Hall BS. Hence, the signal measurements taken on the north side of the farm buildings were not blocked by any obstacles. In contrast, the signals on the south side of the buildings had to pass through the buildings, resulting in a disparity between the measured signal strengths. Such a discrepancy can be considered as an additional impact caused by the buildings. Moreover, measurements were taken inside the buildings to analyze the path loss due to the walls.
The blockage path loss results are listed in TABLE I, while TABLE II presents the results for the agricultural machinery storage building to demonstrate the impact of open and closed gates. To simplify the presentation of the results, we consider the measurements taken from the north side of the buildings as the baseline. The tables only show the additional path loss due to the buildings compared to the baseline. Moreover, Figs. 16 to 18 include snapshots of the buildings showing their shapes and structures.
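The additional path loss reported in the tables is simply the difference of mean received signal strengths between the unobstructed baseline and an obstructed location; a minimal sketch with hypothetical RSRP arrays is:

```python
import numpy as np

def excess_loss_db(rsrp_north, rsrp_other):
    """Additional path loss attributed to a building: mean RSRP at the
    unobstructed north-side baseline minus mean RSRP at the obstructed
    location; positive values indicate blockage loss."""
    return float(np.mean(rsrp_north) - np.mean(rsrp_other))

# rsrp_north / rsrp_south: hypothetical arrays of RSRP samples (dBm)
# collected by the portable UE at the two sides of a farm building.
# loss_south = excess_loss_db(rsrp_north, rsrp_south)
```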
From the findings in TABLE I, it is observed that the hay piles cause significant signal blockage, which is somewhat unexpected. The orientation of the hoop house does not have a substantial impact. However, the orientation of the livestock barn does matter. For effective air circulation, the animal barns are open (and fenced) in the north-south direction or the east-west direction. The lactation barn, being north-south open, exhibits much less blockage compared to the sheep barn, which is east-west open.
### _Dataset for Future Research_
ARA, as a large-scale multi-cell multi-band wireless experimental infrastructure, not only serves as a testbed for rural wireless and applications but also has the potential to play a unique role in providing valuable datasets to support various types of research. For example, one such research area could be AI-related research for channel modeling and channel occupancy prediction, where the channel information collected by the MIMO system and RF sensors, along with weather information collected by the weather station and disdrometer, can make significant contributions.
Currently, ARA is in the process of building a data warehouse to store the aforementioned data, and we plan to share it with the public through the ARA portal [17] in the near future. The dataset generated from this measurement study will be included in the data warehouse. The dataset consists of three parts: (1) Skylark TVWS measurements, (2) Ericsson Midband measurements, and (3) weather information. All measurements are timestamped, enabling users to seamlessly integrate and analyze the data. Additionally, the data warehouse will also include measurement data that were collected during this measurement study but not yet analyzed and reported in this paper, such as data collected by the Skylark BS with multiple Skylark CPEs under different weather conditions.
## V Conclusion
This work investigates the impact of weather conditions and agricultural buildings on TVWS and mid-band wireless channels in rural areas. Our study involved collecting wireless channel data during rainfall and analyzing the impact of rain rate and raindrop size. Our findings revealed that the rain rate has a more significant effect on signal attenuation than the raindrop size. Additionally, we discovered strong correlations between humidity, temperature, and path loss, suggesting a need for further exploration of the relationship between these three factors. Another notable contribution of this paper is the inclusion of the path loss data resulting from agricultural buildings, which is an area of research that has only received limited attention so far. Furthermore, all data collected during this measurement study, including weather data, will be publicly accessible through the ARA portal [17]. We anticipate that these datasets of real-world measurement results will prove valuable for estimation, modeling, and algorithm design pertaining to rural wireless channels.
## Acknowledgment
This work is supported in part by the NIFA award 2021-67021-33775, and NSF awards 2130889, 2112606, 2212573, 2229654, 2232461.
## References
* [1] C.-X. Wang, J. Bian, J. Sun, W. Zhang, and M. Zhang, \"A survey of 5G channel measurements and models,\" _IEEE Communications Surveys & Tutorials_, vol. 20, no. 4, pp. 3142-3168, Aug. 2018.
* [2] H. M. Jawad, A. M. Jawad, R. Nordin, S. K. Gharghan, N. F. Abdullah, M. Ismail, and M. J. Abu-AlShaeer, \"Accurate empirical path-loss model based on particle swarm optimization for wireless sensor networks in smart agriculture,\" _IEEE Sensors Journal_, vol. 20, no. 1, pp. 552-561, Sep. 2019.
* [3] P. Abouzar, D. G. Michelson, and M. Hamdi, \"RSSI-based distributed self-localization for wireless sensor networks used in precision agriculture,\" _IEEE Transactions on Wireless Communications_, vol. 15, no. 10, pp. 6638-6650, Jul. 2016.
* [4] X. Guo, C. Zhao, X. Yang, M. Li, C. Sun, L. Qu, and Y. Wang, \"Propagation characteristics of 2.4 GHz wireless channel at different heights in apple orchard,\" _Transactions of the Chinese Society of Agricultural Engineering_, vol. 28, no. 12, pp. 195-200, Jun. 2012.
* [5] A. Raheemah, N. Sabri, M. Salim, P. Ehkan, and R. B. Ahmad, \"New empirical path loss model for wireless sensor networks in mango greenhouses,\" _Computers and Electronics in Agriculture_, vol. 127, pp. 553-560, Sep. 2016.
* [6] H. Pan, Y. Shi, X. Wang, and T. Li, \"Modeling wireless sensor networks radio frequency signal loss in corn environment,\" _Multimedia Tools and Applications_, vol. 76, pp. 19 479-19 490, Oct. 2017.
* [7] H. Zhu, S. Li, L. Zheng, and L. Yang, \"Modeling and validation on path loss of WSN in pig breeding farm,\" _Transactions of the Chinese Society of Agricultural Engineering_, vol. 33, no. 2, pp. 205-212, Jan. 2017.
* [8] J. Luomala and I. Hakala, "Effects of temperature and humidity on radio signal strength in outdoor wireless sensor networks," in _2015 Federated Conference on Computer Science and Information Systems (FedCSIS)_. IEEE, Sep. 2015, pp. 1247-1255.
* [9] D. Czerwinski, S. Przylucki, P. Wojcicki, and J. Sitkiewicz, \"Path loss model for a wireless sensor network in different weather conditions,\" in _Computer Networks: 24th International Conference, CN 2017, Ladek Zdny, Poland, June 20-23, 2017, Proceedings_. Springer, May 2017, pp. 106-117.
* [10] M. Sander-Frigau, T. Zhang, C.-Y. Lim, H. Zhang, A. E. Kamal, A. K. Somani, S. Hey, and P. Schnable, \"A measurement study of TVWS wireless channels in crop farms,\" in _2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS)_. IEEE, Dec. 2021, pp. 344-354.
* [11] H. Zhang, Y. Guan, A. Kamal, D. Qiao, M. Zheng, A. Arora, O. Boyraz, B. Cox, T. Daniels, M. Darr _et al._, \"ARA: A wireless living lab vision for smart and connected rural communities,\" in _Proceedings of the 15th ACM Workshop on Wireless Network Testbeds, Experimental evaluation & CHaracterization_, Jan. 2022, pp. 9-16.
* [12] T. U. Islam, T. Zhang, J. O. Bouteng, E. Gossling, G. Zu, S. Babu, H. Zhang, and D. Qiao, \"AraMIMO: Programmable TVWS mMIMO Living Lab for Rural Wireless,\" in _Proceedings of the 17th ACM Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization_, ser. WINTECH '23, 2023, p. 9-16.
* [13] "UHD - Ettus Knowledge Base," [https://kb.ettus.com/UHD](https://kb.ettus.com/UHD), (Accessed on 06/30/2023).
* [14] "GNU Radio - The free & open source radio ecosystem," [https://www.gnuradio.org](https://www.gnuradio.org), (Accessed on 06/30/2023).
* [15] "srsRAN - Open Source RAN," [https://www.srsran.com/](https://www.srsran.com/), (Accessed on 06/30/2023).
* [16] "OpenAirInterface - 5G software alliance for democratising wireless innovation," [https://openairinterface.org/](https://openairinterface.org/), (Accessed on 06/30/2023).
* [17] \"ARA Portal,\" [https://portai.arawireless.org](https://portai.arawireless.org), (Accessed on 06/30/2023).
* [18] J. Luo, X. Xu, and Q. Zhang, \"Understanding link feature of wireless sensor networks in outdoor space: A measurement study,\" in _2011 IEEE Global Telecommunications Conference-GLOBECOM 2011_. IEEE, Jan. 2011, pp. 1-5.
* [19] ITU-R, "Recommendation ITU-R P.838-3: Specific attenuation model for rain for use in prediction methods," International Telecommunication Union, 2005.
Isaac Backus,\\({}^{1}\\) Thomas Quinn,\\({}^{1}\\)
\\({}^{1}\\)University of Washington, Seattle, WA 98195, USA
E-mail: [email protected]; [email protected]
Accepted XXX. Received YYY; in original form ZZZ
## 1 Introduction
The importance of gravitational instabilities (GI) in the evolution of protoplanetary disks (PPDs) and in planet formation remains hotly debated (Cameron, 1978; Boss, 1997; Durisen et al., 2007; Boley & Durisen, 2010; Paardekooper, 2012). In recent years, the core accretion (CA) plus gas capture model of giant planet formation has received much attention (Pollack et al., 1996; Helled et al., 2014), but GI is still seen as a candidate for the direct formation of giant planets, especially at large orbital radii (Boley, 2009). While CA gives a more natural explanation of terrestrial planet formation and small bodies, GI may be important for the formation of these objects via solid enhancement within spiral arms or fragments (Haghighipour & Boss, 2003). GI may also play an important role during the embedded phase of star formation (Vorobyov, 2011).
Understanding the role of GI in planet formation will require continued observation of PPDs (Andrews & Williams, 2005; Isella et al., 2009; Mann et al., 2015) and further theoretical work. Of primary importance are (i) disk cooling times (Gammie, 2001; Rafikov, 2007; Meru & Bate, 2011, 2012), which must be sufficiently short to allow density perturbations to grow against pressure support, and (ii) the Toomre \\(Q\\) parameter (Toomre, 1964):
\\[Q\\equiv\\frac{c_{s}\\kappa}{\\pi G\\Sigma} \\tag{1}\\]
where \\(c_{s}=\\sqrt{\
u k_{B}T/m}\\) is the gas sound speed, \\(\\kappa\\) is the epicyclic frequency (\\(\\kappa=\\Omega\\) for a massless disk), and \\(\\Sigma\\) is the disk surface density. As \\(Q\\) decreases toward unity, PPDs become increasingly unstable, and if \\(Q\\) becomes sufficiently small, disks will undergo fragmentation.
The parameters required for fragmentation, such as disk mass (\\(M_{d}\\)), disk radius (\\(R_{d}\\)), and disk temperature (\\(T\\)), are constrained by the critical \\(Q\\) required for fragmentation. Some previous studies have found values of \\(Q_{crit}=1.3-1.5\\)(Boss, 1998, 2002; Mayer et al., 2004), although it has been noted that \\(Q\\) can drop below unity and the disk may still tend to a self-regulating state (Boley, 2009).
Determining parameters required for fragmentation is complicated by issues of resolution. The constant (\\(\\beta\\)) cooling simulations of Meru & Bate (2011) demonstrated non-convergence of SPH simulations. Further work (Meru & Bate, 2012; Rice et al., 2012) suggested artificial viscosity is to blame. Work is underway to investigate this problem; however, resolution dependent effects are still poorly understood in SPH simulations of PPDs (Rice et al., 2014).
Previous work has tended to focus on PPDs around solar mass stars. Motivated by the large population of low mass stars, we study GI around M-dwarfs with mass \(M_{*}=M_{\sun}/3\). Around 10% of known exoplanets are around M-dwarfs (Han et al., 2014). Due to selection effects of current surveys such as Kepler (Borucki et al., 2010), this is expected to be a large underestimate of the actual population. Recent discoveries show that disks around M and brown dwarfs are different from those around solar analogs: the mass distribution falls off more slowly with radius, and is denser at the midplane. These differences change disk chemistry and the condensation sequence. M-dwarf disks are also less massive and survive longer (Apai & Pascucci, 2009, 2010). Core accretion timescales, which scale as the orbital period, are long around M-dwarfs. Because the stars are much lower in luminosity, their disks are substantially cooler. Additionally, planets orbiting nearby M-dwarfs are likely to be the first smaller planets spectroscopically characterized (Seager et al., 2015).
The simulations of Boss (2006a) indicate that GI is able to form gas giants around M-dwarfs. Boss (2006b) and Boss (2008) even argue that super earths around M-dwarfs can be explained as gas giants, formed via GI, and stripped of their gaseous envelopes by photoevaporation.
In this paper we explore the conditions required for disk fragmentation under GI around M-dwarfs. Previous studies have found a range of values of the \\(Q_{crit}\\) required for disk fragmentation (Boss, 1998, 2002; Mayer & Gawryszczak, 2008; Boley, 2009). Discrepancies may be due to different equations of state (EOS), cooling algorithms, numerical issues such as artificial viscosity prescriptions, Eulerian vs. Lagrangian codes, and initial conditions (ICs). ICs close to equilibrium are non-trivial to produce and so we explore the dependence on ICs of simulations of gravitationally unstable disks.
We focus on probing disk fragmentation around M-dwarfs, which remains poorly studied, and the importance of ICs. These warrant a simple, well-understood isothermal EOS. We therefore probe the Toomre \(Q\) required for fragmentation while leaving the question of the cooling required for fragmentation for future work.
We begin in §2 by presenting our method for generating equilibrium initial conditions for smoothed-particle hydrodynamic (SPH) simulations of PPDs, with particular care taken in calculating density and velocity profiles. §3 describes the suite of simulations presented here and discusses the theoretical and observational motivations behind our disk profiles. §4 presents our analysis of disk fragmentation around M-dwarfs and discusses the importance of ICs in simulations of unstable disks. §5 presents our method for finding and tracking gravitationally bound clumps and discusses clump formation in our simulations. We present our discussion in §6. We consider the effects of thermodynamics and ICs on fragmentation in PPD simulations and argue that GI should play an important role in PPDs around M-dwarfs and that we expect disk fragmentation at large radii to occur around many M-dwarfs.
## 2 Initial conditions
A major emphasis of this work was to ensure that ICs were as close to equilibrium as possible. Axisymmetric disks very near equilibrium may not realistically model actual PPDs, but we wish to make as few assumptions as possible about disks and have attempted to minimize numerical artifacts. One worry is that disks too far from equilibrium may artificially enter the non-linear regime, possibly initiating fragmentation in an otherwise stable disk. This possibility is explored in §4.1. In this section, we present our method for generating ICs.1
Footnote 1: Our code for generating ICs is freely available on github at [https://github.com/ibackus/diskpy](https://github.com/ibackus/diskpy) as a part of our PPD python package _diskpy_
Many methods have been used in previous work for generating initial conditions. As discussed by Mayer & Gawryszczak (2008), apparently contradictory results in fragmentation studies may be due to differences in ICs used. Much published research does not detail IC generation in sufficient detail to be reproducible, but we can sketch out a few different approaches used. Boss (1998) developed ICs by defining the midplane density \\(\\rho(R,z=0)\\), analytically estimating \\(\\rho(z)\\), using an approximate circular velocity, and iteratively adjusting the temperature profile to create a steady state solution.
As with us, other authors (Mayer et al., 2004; Rogers & Wadsley, 2011) were able to define the surface density and temperature profiles. They then estimated vertical hydrostatic equilibrium to calculate density. Mayer et al. (2004) estimated the gas velocity required for circular orbits (\(v_{circ}\)) from gravitational forces and adjusted for hydrodynamic forces. Additionally, as with others (e.g. Mayer & Gawryszczak (2008)), they also approached low \(Q\) values by slowly growing the disk mass. Similarly, Boley (2009) used low mass, high-\(Q\) models in his grid code simulations and accreted mass from the \(z\) boundaries gradually to grow simulations towards instability.
Pickett et al. (2003) placed great care in developing equilibrium ICs. In contrast to our ICs, they also modeled the central, accreting star. They generated a stable disk (\\(Q=1.8\\)) using a field equilibrium code (Hachisu, 1986). They specified the specific angular momentum \\(j(R)\\) of the gas (which forces velocity to be solely a function of radius) then iteratively used a self consistent field method to solve the Poisson gravity equation and balance the hydrodynamic forces to approach equilibrium. A shooting method for \\(j(R)\\) was used to reach a desired \\(\\Sigma(R)\\). For low-\\(Q\\) simulations, they cooled the disk until it reached \\(Q_{min}=0.9\\).
For our simulations, we desired to scan parameter space by defining surface density (\\(\\Sigma\\)) and temperature (\\(T\\)) radial profiles, along with star mass. From these, the gas density (\\(\\rho\\)) can be estimated to ensure vertical hydrostatic equilibrium in the disk. SPH particles are then semi-randomly seeded and their equilibrium circular velocities are estimated using the NBody/SPH simulation code ChaNGa to calculate the forces. Our method allows us to directly and quickly generate equilibrium ICs for low \\(Q\\) values and arbitrary \\(\\Sigma\\) and \\(T\\) profiles.
### Estimating \\(\\rho(R,z)\\)
To estimate \\(\\rho(R,z)\\), we first define \\(M_{*}\\), \\(\\Sigma(R)\\), and \\(T(R)\\). Hydrostatic equilibrium is solved by adjusting the vertical density profile to maintain vertical hydrostatic equilibrium and adjusting the gas orbital velocity to ensure radial equilibrium.
To be in equilibrium along the vertical direction, the vertical component of gravity from the star and the disk's self-gravity should balance the vertical pressure gradient in the gas. For the disk self-gravity term, we assume the thin disk approximation where \\(\\rho\\) is assumed to be only a function of \\(z\\). \\(T\\) is set to be independent of \\(z\\), which is reasonable for the locally isothermal equation of state used in these simulations. All quantities are axisymmetric and symmetric about the midplane \\(z=0\\). Under these assumptions, the vertical hydrostatic equilibrium condition can be written as:
\[\frac{k_{B}T}{m}\frac{d\rho}{dz}+\frac{GM_{*}z\rho}{(z^{2}+R^{2})^{3/2}}+4\pi G\rho\int_{0}^{z}\rho(z^{\prime})dz^{\prime}=0 \tag{2}\]
where \\(m\\) is the mean molecular weight of the gas, \\(M\\), is the star's mass and \\(R\\) is the cylindrical radius. The first term is the pressure gradient and would be altered for a non-isothermal EOS. The boundary conditions are:
\\[\\int_{0}^{\\infty}\\rho(z^{\\prime})dz^{\\prime}=\\Sigma/2 \\tag{3a}\\] \\[\\frac{d\\rho}{dz}\\Big{|}_{z=0}=0 \\tag{3b}\\]
To apply these boundary conditions, eq.(2) is transformed to be a differential equation for \(I(z)\equiv\int_{z}^{\infty}\rho(z^{\prime})dz^{\prime}\), which gives an equation of the form:
\\[\\frac{d^{2}I}{dz^{2}}+\\frac{dI}{dz}\\left[\\frac{c_{1}z}{(z^{2}+R^{2})^{3/2}}+c _{2}\\left(\\frac{\\Sigma}{2}-I\\right)\\right]=0 \\tag{4}\\]
with the constants set by eq.(2). Then eq.(4) is solved numerically by a root finding algorithm. The density is calculated by applying \\(\\rho=-\\frac{dI}{dz}\\). To ensure a robust solution, the solution for \\(\\rho\\) is then iteratively entered into a root finding algorithm for eq.(2) and rescaled to ensure the boundary condition in eq.(3a) is met. 2
Footnote 2: The current publicly available version of our code _diskpy_ uses an altered version of this algorithm to solve eq. 2 which is significantly faster and has been tested to provide the same results as the algorithm described here.
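For concreteness, the following sketch solves the boundary value problem of eq. (4) with SciPy's collocation solver. It is a simplified stand-in for the _diskpy_ routine: the grid extent, the initial guess, and the omission of the rescaling iteration described above are all assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import solve_bvp

k_B = 1.380649e-23    # Boltzmann constant [J/K]
G = 6.674e-11         # gravitational constant [m^3 kg^-1 s^-2]
m_p = 1.6726e-27      # proton mass [kg]

def solve_vertical_structure(R, Sigma, T, M_star, mu=2.0, n_z=200):
    """Solve eq. (4) for I(z) = integral_z^inf rho dz' at one cylindrical
    radius R (SI units) and return (z, rho)."""
    m = mu * m_p
    c1 = G * M_star * m / (k_B * T)         # stellar-gravity coefficient
    c2 = 4.0 * np.pi * G * m / (k_B * T)    # self-gravity coefficient

    # Rough isothermal scale height (stellar gravity only) sets the grid
    H = np.sqrt(k_B * T * R**3 / (m * G * M_star))
    z = np.linspace(0.0, 10.0 * H, n_z)

    def rhs(z, y):
        I, Ip = y
        # eq. (4): I'' = -I' [c1 z/(z^2+R^2)^{3/2} + c2 (Sigma/2 - I)]
        return np.vstack([Ip,
                          -Ip * (c1 * z / (z**2 + R**2)**1.5
                                 + c2 * (Sigma / 2.0 - I))])

    def bc(ya, yb):
        # I(0) = Sigma/2 (eq. 3a); I -> 0 far above the midplane
        return np.array([ya[0] - Sigma / 2.0, yb[0]])

    # Exponential initial guess with rho(0) = Sigma/(2H) > 0
    I0 = 0.5 * Sigma * np.exp(-z / H)
    sol = solve_bvp(rhs, bc, z, np.vstack([I0, np.gradient(I0, z)]))
    rho = -sol.sol(z)[1]     # rho = -dI/dz
    return z, rho
```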
### Generating particle positions
The solution to \\(\\rho\\) is then used to semi-randomly seed SPH particles. To mitigate Poisson noise, we used the method of Cartwright et al. (2009a) which places the particles along a spiral in the \\(x-y\\) plane to keep them more evenly spaced. The spiral is made in such a way that \\(\\Sigma(R)\\) is reproduced. See Cartwright et al. (2009a) for a more detailed explanation. The particle \\(z\\) values are then randomly assigned according to \\(\\rho(z)\\). Note that it is important to make the spirals trailing (rather than leading) to avoid unwinding effects and swing amplification.
### Circular velocity calculation
A major concern with avoiding artifacts in the ICs is calculating the particle velocities required for circular orbits (\(v_{\mathit{circ}}\)). For typical disks, \(v_{\mathit{circ}}\) is one to two orders of magnitude larger than \(c_{s}\). Thus, particle velocities which deviate from \(v_{\mathit{circ}}\) on the percent level will deposit a significant amount of power into the disk. If this dominates the available thermal energy (which tends to stabilize the disk), the disk may enter a non-linear regime and fragment for unrealistically high \(Q\). We include a demonstration of this in §4.1.
To calculate \\(v_{\\mathit{circ}}\\), we employ our simulation code ChaNGa to calculate the radial gravitational and hydrodynamical forces on all particles. Following Mayer et al. (2004), we set the gravitational softening length to \\(\\epsilon_{s}=0.5\\,\\langle h\\rangle\\), where \\(\\langle h\\rangle\\) is the SPH smoothing length, calculated over the 32 nearest neighbors, and averaged over all particles in the simulation. The softening length for the star is set as the distance to the nearest gas particle. ChaNGa is then used to estimate the gravitational and SPH forces separately.
To deal with SPH noise, the radial forces must be averaged over many particles. The gravitational and SPH forces are averaged separately because of a different dependence on position. In both cases, 50 radial bins are used. To fit the \\(R\\) and \\(z\\) dependence of the gravitational forces, particles are binned radially, and for each bin a line is fit to the radial force per mass due to gravity, as a function of \\(\\cos\\theta\\):
\\[a_{\\mathit{g}}(\\cos\\theta)=m_{\\mathrm{i}}\\cos\\theta+b_{\\mathit{i}} \\tag{5}\\]
where \\(\\theta\\) is the angle above the plane of the disk. Here \\(\\cos\\theta\\) is chosen as the independent variable because the radial gravitational force from the central star is proportional to \\(\\cos\\theta\\). The fit parameters (\\(m_{\\mathit{i}},b_{\\mathit{i}}\\)) are then linearly interpolated as a function of \\(R\\) to calculate \\(a_{\\mathit{g}}(R,z)\\).
The radial hydrodynamic forces display very little \(z\) dependence, as is expected for a temperature profile independent of \(z\). The SPH forces are averaged over logarithmically spaced radial bins and then interpolated with a linear spline. From the total force per mass (\(a=a_{\mathrm{grav}}+a_{\mathrm{SPH}}\)) we can calculate \(v_{\mathit{circ}}\) as:
\\[v_{\\mathit{circ}}=\\sqrt{aR} \\tag{6}\\]
## 3 Set of runs
### ChaNGa
All the simulations in this paper were performed with ChaNGa3, a highly parallel N-body/SPH code originally written for cosmology simulations (Menon et al., 2015). ChaNGa is written in the Charm++ parallel programming language which allows the overlap of computation and communication, and enables scaling to over half a million processor cores (Menon et al., 2015). The physics modules of ChaNGa are taken from gasoline (Wadsley et al., 2004) where gravity is calculated using a Barnes-Hut (Barnes & Hut, 1986) tree with hexadecapole expansions of the moments, and hydrodynamics is performed with SPH. All the simulations performed here used a gravitational force accuracy (node opening) criterion of \(\theta_{\mathit{BH}}=0.7\). Timesteps are set by a Courant condition of \(\eta_{C}=0.3\) and an acceleration criterion of \(\Delta t_{i}=\eta\sqrt{\epsilon_{i}/a_{i}}\) where \(\epsilon_{i}\) and \(a_{i}\) are respectively the softening and acceleration of a particle, and \(\eta=0.2\). To stabilize the SPH we use the Monaghan (1992) artificial viscosity with coefficients \(\alpha=1\) and \(\beta=2\), and we use the Balsara (1995) switch to suppress viscosity in non-shocking shearing environments. We have tested ChaNGa with the PPD Wengen code test 4 and it performs well (see Appendix A). The simulations presented here each took about 3k core-hours running on eight 12-core nodes on the University of Washington's supercomputer Hyak.
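For reference, the acceleration-based criterion above amounts to the following one-liner. This is a sketch of the formula, not ChaNGa's internal implementation.

```python
import numpy as np

def accel_timestep(eps, accel, eta=0.2):
    """Per-particle acceleration timestep: dt_i = eta * sqrt(eps_i / a_i)."""
    return eta * np.sqrt(eps / accel)
```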
### Disk Profiles
Figures 1 & 2 show radial profiles for example simulations. Pictured are \\(\\Sigma(R)\\) (top panel), \\(T(R)\\) (middle panel), and \\(Q(R)\\) (bottom panel). The exact disk structure of PPDs, especially young ones, is poorly constrained. Therefore we adopt simple, easy to interpret profiles, and consider a range of values of \\(M_{d}\\) and \\(T\\) in order to bracket plausible disk parameters. Below we describe our choice of disk temperature and surface density profiles, along with the theoretical and observational motivations for them.
#### 3.2.1 Temperature
For a blackbody disk with heating dominated by solar radiation, Chiang & Goldreich (1997) showed that the temperature profile should be a power law. The exponent is \\(q=3/7\\) for a fully flared disk and \\(q=3/4\\) for a flat disk. We therefore adopt a temperature profile of the form:
\\[T(R)=T_{0}\\left(\\frac{R}{R_{0}}\\right)^{-q} \\tag{7}\\]
where \\(R_{0}\\) was set to be 1 AU and \\(T_{0}\\) is the temperature at 1 AU. The values we adopt for \\(T_{0}\\) and \\(q\\) come from the observations of Andrews & Williams (2005). They observed dust SEDs of circumstellar disks in the Taurus-Auriga star forming region. Fit results for 44 mainly solar type stars yielded median inferred parameters of \\(T_{0}=148K\\) and \\(q=0.58\\) (see their Table 2). Averaging their results for M-Stars only, we adopt \\(q=0.59\\) for every simulation and \\(T_{0}=130K\\) as our central fiducial values. This power law of \\(q=0.59\\) lies between a fully flared and completely flat disk. Given the uncertainty of these values we also ran simulations with \\(T_{0}=65K\\) and \\(260K\\) to bracket plausible disk temperatures.
#### 3.2.2 Surface density
Two functional forms for \\(\\Sigma\\) were used: a power law (Fig. 1) and the similarity solution for a thin, viscous disk (Fig. 2).
Figure 1: Example radial profiles for a power-law surface density \(\Sigma\propto 1/R\), with \(R_{\rm in}=0.3\) AU and \(R_{d}=1\) AU. **Top:** Surface density profile including cutoffs (solid) and excluding cutoffs (dashed). **Middle:** Disk temperature \(T\propto R^{-0.59}\). **Bottom:** Toomre \(Q\), calculated including full disk self gravity, SPH forces, and calculation of \(\kappa\).
Figure 2: Example radial profiles for a viscous disk surface density \(\Sigma(r)=\Sigma_{0}r^{-\gamma}\exp(-r^{2-\gamma})\), where \(r\) is a dimensionless radius and \(\gamma=0.9\). The radius containing 95% of the mass is \(R_{d}=11\) AU. **Top:** Surface density profile including cutoffs (solid) and excluding cutoffs (dashed). **Middle:** Disk temperature \(T\propto R^{-0.59}\). **Bottom:** Toomre \(Q\).
The power law used was:
\\[\\Sigma(R)=\\Sigma_{0}\\left(\\frac{R}{R_{0}}\\right)^{-1} \\tag{8}\\]
Where the normalization \\(\\Sigma_{0}\\) is fixed by the desired disk mass. As shown in Fig. 1, interior and exterior cutoffs were applied. For \\(R>R_{d}\\), an exponential cutoff was applied by multiplying \\(\\Sigma(R)\\) in eq.(8) by:
\[\Sigma_{\mathrm{exterior}}(R)=\Sigma(R)\,\mathrm{e}^{-(R-R_{d})^{2}/L^{2}} \tag{9}\]
where the cutoff length was set to \(L=0.3R_{d}\). This form ensures that \(\Sigma\) and \(\frac{d\Sigma}{dR}\) are continuous at \(R=R_{d}\). The interior cutoff was applied by multiplying \(\Sigma\) by a smooth high order polynomial approximation to a step function, defined to be \([0,1]\) at \(R=[0,R_{\mathrm{cut}}]\) with the first 10 derivatives set to be 0 at \(R=[0,R_{\mathrm{cut}}]\). For these simulations \(R_{\mathrm{cut}}=0.5R_{d}\) and \(\Sigma\) differs significantly from a power law for \(R\leq 0.3R_{d}\) (see Fig.1). This radius was chosen such that: (a) \(Q\gg 1\) at \(R_{\mathrm{cut}}\) to ensure the disk is stable at \(R_{\mathrm{cut}}\), and (b) the cutoff is applied far enough from the most unstable disk region and removes little enough mass that fragmentation should not be strongly affected by the cutoff.
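The power-law profile with both cutoffs can be sketched as below. The interior cutoff used in the paper is a high-order polynomial step with 10 vanishing derivatives at each end; the simple smoothstep used here is a stand-in with the same qualitative behavior, not the exact form.

```python
import numpy as np

def sigma_powerlaw(R, Sigma0, R_d, R0=1.0, L=None):
    """Power-law surface density (eq. 8) with the exterior Gaussian
    cutoff of eq. (9) for R > R_d and an approximate interior cutoff
    rising from 0 at R=0 to 1 at R_cut = 0.5 R_d.  Expects R > 0."""
    L = 0.3 * R_d if L is None else L
    sigma = Sigma0 * (R0 / R)
    # Exterior cutoff: leaves Sigma and dSigma/dR continuous at R_d
    sigma = np.where(R > R_d, sigma * np.exp(-(R - R_d)**2 / L**2), sigma)
    # Interior cutoff: smoothstep stand-in for the paper's polynomial step
    x = np.clip(R / (0.5 * R_d), 0.0, 1.0)
    step = x**3 * (10.0 - 15.0 * x + 6.0 * x**2)
    return sigma * step
```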
The second functional form for \(\Sigma\) (see Fig.2) comes from the similarity solution to a thin, light, viscous disk orbiting a star, as found in Lynden-Bell & Pringle (1974). For a viscosity obeying a power law \(\nu\propto R^{\gamma}\), at a given time \(\Sigma\) can be written as:
\\[\\Sigma(r)=\\Sigma_{0}r^{-\\gamma}\\exp(-r^{2-\\gamma}) \\tag{10}\\]
where \\(r\\) is a dimensionless radius and the normalization \\(\\Sigma_{0}\\) is fixed by the disk mass. Note that the full similarity solution includes a time dependence which we fold into \\(\\Sigma_{0}\\) and \\(r\\). From fits to observations of 9 circumstellar disks, Andrews et al. (2009) found a median value of \\(\\gamma=0.9\\), which is the value we adopt here. For this profile, no exterior cutoff is required. The same interior cutoff as for the power law \\(\\Sigma\\) was applied at \\(r=0.1\\).
If we define \\(R_{d}\\) to be the radius containing 95% of the disk mass (ignoring the interior cutoff), then:
\\[r=\\frac{R}{R_{d}}\\ln(1/0.05)^{1/(2-\\gamma)} \\tag{11}\\]
As has been noted before (e.g. Isella et al. (2009)), there is no physical motivation for adopting a power law for \\(\\Sigma\\). It is also not clear how applicable the viscous profile is. These are just simple functional forms often adopted in previous work. We have examined the end state of high-\\(Q\\) runs and they are better approximated by the viscous profile, but the fit is not perfect.
### Run Parameters
For a given functional form of \(\Sigma(R)\) and \(T(R)\), three parameters must be set to make our ICs and thereby fix a value of \(Q_{\mathrm{min}}\): the temperature normalization (\(T_{0}\)), the disk mass (\(M_{d}\)), and the disk radius (\(R_{d}\)). The choices of \(T_{0}\) are discussed in §3.2.1.
#### 3.3.1 Disk mass
Under our scheme, setting \\(M_{d}\\) fixes the surface density normalization. We selected plausible values to explore, ranging from 0.01 to \\(0.08M_{\\sun}\\). Isella et al. (2009) reported observations of 11 disks around pre-main-sequence stars, including 7 around M-stars. From their Tables 1 & 2, we find a median value of \\(M_{d}/M_{*}=0.15\\), which for our simulations (\\(M_{*}=1/3M_{\\sun}\\)) gives a central value of \\(M_{d}=0.05M_{\\sun}\\). It should be stressed that these disk masses are inferred using an assumed gas to dust ratio of 100 and therefore may have large, uncharacterized uncertainties.
#### 3.3.2 Disk radius
Isella et al. (2009) argue that for a typical disk, \\(R_{d}\\) increases from around 20 AU to 100 AU over the course of \\(\\sim\\)5 Myr. This tends to stabilize disks by decreasing \\(\\Sigma\\) over time. Since we are interested in disks at their most unstable, we adopt 20 AU as our central fiducial value. To explore parameter space, we used values of \\(R_{d}\\) ranging from 1/3 AU to 30 AU.
#### 3.3.3 Numerical Parameters
For the analysis of fragmentation criteria, a set of 64 ICs were run with \(10^{6}\) particles. For typical disks, this yields particles with a mass of \(m_{particle}\approx 5\times 10^{-5}M_{\mathrm{Jup}}\). A locally isothermal EOS with a mean molecular weight of 2 was used for the gas. Following Mayer et al. (2004), we set the gravitational softening length to \(\epsilon_{s}=0.5\,\langle h\rangle\), where \(\langle h\rangle\) is the SPH smoothing length calculated over the 32 nearest neighbors and averaged over all particles. Typical values are around \(\langle h\rangle=5\times 10^{-3}R_{d}\).
The central star (mass \\(M_{\\sun}/3\\)) was set to be a sink particle: when a gas particle approaches the star within a distance of \\(R_{\\mathrm{sink}}\\), its mass and momentum are accreted onto the star. \\(R_{\\mathrm{sink}}\\) was set as the distance to the closest gas particle in the ICs. This yielded \\(R_{\\mathrm{sink}}=[0.08R_{d},0.02R_{d}]\\) for the power law and viscous \\(\\Sigma\\) profiles, respectively.
Making the central star a sink serves two purposes worth mentioning. A gas particle near the star gains a large velocity, experiences strong forces, and is often captured in a tight orbit around the star. This requires a very small time-step which can increase computation time by orders of magnitude. Secondly, the adaptive time-stepping used by ChaNGa can fail to conserve momentum when two interacting particles require time-steps which differ by more than a couple orders of magnitude. This momentum non-conservation is particularly problematic for tight, rapid orbits. The effect is sufficiently strong that very stable disks (\\(Q>2\\)) were seen to fragment when the central star was not treated as a sink.
We note that treating the star as a sink in this manner is unrealistic in that accretion happens at very large radii (of order an AU). When accretion happens, the star will jump to the center of mass of the star + particle system. In the limit of low accretion rates and in locations far from the star this will be a negligible effect, but in disks with high accretion a different scheme should be used. For our simulations there is very little accretion: less than \\(10^{-3}M_{d}\\) over the duration of the simulations.
As shown in figure 5, non-fragmenting simulations were run for \(\sim\)30 outer rotational periods (ORP), where we define 1 ORP as the orbital period at the most unstable disk radius. ORPs for our disks range from 0.3 yrs for our smallest disks to 300 yrs for the largest. Disks with parameters close to fragmentation were run longer (100-200 ORP) to ensure they reached a steady state which would not fragment. Since the main goal of this work is to investigate fragmentation of PPDs and since computation time increases drastically after clump formation, simulations which fragmented were run for around 1-2 ORP after fragmentation. Figure 5 shows the fragmentation timescales for these disks.
## 4 Fragmentation Analysis
The primary goal of this paper is to investigate under what conditions we can expect a PPD surrounding an M-dwarf to fragment. The parameters explored by our model are \\(T\\), \\(M_{d}\\), and \\(\\Sigma\\). Although all simulations were run with a star of \\(M_{*}=M_{\\sun}/3\\), we also extend our analysis to stars of similar mass. As expected, we find sufficiently heavy or cold disks will fragment under GI.
Gravitational instability in PPDs is typically parameterized by the Toomre \(Q\) parameter (eq. 1). For our isothermal simulations, \(c_{s}=\sqrt{k_{B}T/m}\). Two dimensional disks will be unstable for \(Q<1\), but the instability and fragmentation criteria for 3D disks remain uncertain. Previous studies have found that 3D disks will fragment for \(Q\) significantly greater than 1 (Boss, 1998, 2002; Mayer et al., 2004; Durisen et al., 2007); however, we do not find this to be the case for our simulations. Figure 3 shows the fragmentation boundary for our simulations. Disks with \(Q\lesssim 0.9\) fragment. Here, we define fragmentation to be when the first gravitationally bound clump is found (see §5).
We also considered the effects of swing amplification. If the parameter:
\\[X_{m}=\\frac{\\kappa^{2}R}{2\\pi G\\Sigma}\\frac{1}{m} \\tag{12}\\]
is near or below unity, small leading disturbances will amplify upon becoming trailing disturbances and can drive disk dynamics, especially spiral arm growth (Binney & Tremaine, 1987). Lower order modes are expected to dominate. For our disks, our lowest value is \\(X_{1}=5.7\\), and most disks have \\(X_{1}>10\\), so swing amplification should not be significant in these simulations.
Another proposed instability in gaseous disks is provided by the SLING mechanism (Adams et al., 1989; Shu et al., 1990), which is driven by \\(m=1\\) mode growth and is sensitive to the outer edge of the disk. For our simulations, we see little \\(m=1\\) power. Additionally, we tested the importance of the outer edge by applying a step-function cutoff to \\(\\Sigma\\) on the disk outer edge for marginally stable disks and the overall behavior was unaltered. It therefore seems unlikely that the SLING mechanism plays a major role in these simulations.
It should be noted that for our simulations we calculate \(Q\) from the ICs. Since the equilibrium orbital velocity is known (see §2.3), we can directly calculate \(\kappa^{2}=\frac{2\Omega}{R}\frac{d}{dR}(R^{2}\Omega)\). This fully includes the effects of a 3D disk with self-gravity and pressure gradients. \(\Sigma\) is calculated by binning SPH particles radially, summing their masses, and dividing by the annulus area. A common approximation for light disks is to ignore disk self gravity and pressure gradients and use \(Q\approx c_{s}\Omega/\pi G\Sigma\), which for disks of \(M_{d}/M_{*}\approx 0.1\) underestimates \(Q\) at the 10% level. Different estimates of \(Q\) may account for some of the discrepancy in the literature regarding the critical \(Q\) required for fragmentation.
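The following sketch mirrors this procedure on binned profiles: \(\kappa^{2}\) by finite differences of the equilibrium rotation curve, and \(\Sigma\) from particle masses summed in annuli. The bin handling and function names are assumptions of the sketch.

```python
import numpy as np

G = 6.674e-11        # [m^3 kg^-1 s^-2]
k_B = 1.380649e-23   # [J/K]

def kappa_squared(R, v_circ):
    """kappa^2 = (2 Omega / R) d(R^2 Omega)/dR from a binned equilibrium
    rotation curve, via finite differences."""
    Omega = v_circ / R
    return 2.0 * Omega / R * np.gradient(R**2 * Omega, R)

def binned_sigma(R_part, masses, R_edges):
    """Sigma from summing particle masses in annuli and dividing by area."""
    M, _ = np.histogram(R_part, bins=R_edges, weights=masses)
    area = np.pi * (R_edges[1:]**2 - R_edges[:-1]**2)
    return M / area

def toomre_Q_profile(R, v_circ, Sigma, T, m):
    """Q(R), with disk self-gravity and pressure entering through the
    equilibrium rotation curve used for kappa."""
    c_s = np.sqrt(k_B * T / m)
    return c_s * np.sqrt(kappa_squared(R, v_circ)) / (np.pi * G * Sigma)
```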
As can be seen in figure 3, there is some overlap in \\(Q\\) for the fragmenting and non-fragmenting populations. Since the Toomre stability criterion strictly applies to a 2D disk, this is unsurprising. Following the work of Romeo et al. (2010), to higher order a disk scale height correction enters into the dispersion relation for axisymmetric perturbations. Taller disks should be more stable than thinner ones.
Figure 4: Minimum effective Toomre \(Q\) (\(Q_{eff}\)) for the fragmenting (clump-forming) and non-fragmenting simulations. Reparameterizing the stability criterion for a protoplanetary disk as \(Q_{eff}=Q\beta(H/R)^{\alpha}\) is sufficient for predicting whether a protoplanetary disk will fragment. \(\beta\) is a normalization factor chosen such that disks with \(Q_{eff}<1\) will fragment.
Figure 3: Minimum Toomre \\(Q\\) for the fragmenting (clump–forming) and non-fragmenting simulations. The red (blue) lines mark the largest (smallest) values of the fragmenting (non-fragmenting) simulations. The two populations overlap around \\(Q\\approx 0.9\\).
We re-parameterized \\(Q\\) to include disk height as:
\\[Q_{eff}\\equiv Q\\beta(H/R)^{\\alpha} \\tag{13}\\]
where \\(H\\) is the disk scale height and \\(\\alpha\\) is a free parameter. \\(\\beta\\) is normalization parameter that we set such that \\(Q_{eff}<1\\) is the boundary for disk fragmentation (see below). \\(H\\) is calculated as the standard deviation of the vertical density profile, rather than the first order approximation \\(c_{s}/\\Omega\\). We then fit the power law \\(\\alpha\\) to minimize the overlap of the boundaries in \\(Q_{eff}\\) for the fragmenting/non-fragmenting population. Figure 4 shows the separation of the two populations (compare to fig. 3). For \\(\\alpha=0.18\\) and \\(\\beta=2.1\\), simulations with \\(Q_{eff}<1\\) fragment.
A power law was chosen because it is a simple functional form, and there is a great deal of self-similarity in PPDs, but other forms such as linear corrections might be suitable as well. As expected, taller disks have a larger \\(Q_{eff}\\) and are therefore less prone to fragmentation, although the dependence on height is weak.
We also found that \\(Q_{eff}\\) correlates strongly with time until fragmentation (see fig. 5). We found it to predict fragmentation time with less scatter than \\(Q\\). To verify that \\(H/R\\) is an important parameter in predicting disk fragmentation, we considered power law dependence of \\(Q_{eff}\\) on various dimensionless combinations of parameters, including the most unstable wavelength, \\(c_{s}\\), \\(T\\), and \\(\\Omega\\). We found \\(H/R\\) to separate the populations most strongly.
These considerations indicate disk scale height is an important parameter in dictating stability and fragmentation. With the fragmentation boundary \(Q_{eff}=1\), we are equipped to estimate the disk parameters for which we may reasonably expect disks to fragment.
Figure 6 shows the boundaries for disk fragmentation for a star of mass \\(M_{t}=M_{\\sun}/3\\) as a function of \\(R_{d}\\), \\(M_{d}\\), and the temperature at 1 AU (\\(T_{0}\\)). The contour lines mark the boundary for various values of \\(T_{0}\\). Disks to the right of the contour lines have a minimum \\(Q_{eff}<1\\) and will fragment.
The red boundary marks the fiducial value of \\(T_{0}=130K\\) from the observations of Andrews & Williams (2005) (see SS3.2.1), with the surrounding red region marking the sample scatter in their observations of \\(25K\\). The red point marks the fiducial values (for a young disk) of \\(M_{d}\\) and \\(R_{d}\\) from the observations of Isella et al. (2009) (see SS3.2.2).
Since the fiducial disk parameters lie to the left of the fiducial boundary, we expect the observed disks to not be susceptible to fragmentation by GI. This is to be expected: because the timescales for fragmentation are much shorter than observed disk lifetimes and ages, highly gravitationally unstable PPDs are unlikely to be observed. However, we note that observed disk parameters lie close to the region of fragmentation.
### Sensitivity to ICs
It is important to understand the causes of disk fragmentation in our simulations. We have paid particular care to characterize numerical issues which may drive fragmentation. In appendix B we present a simple convergence test which demonstrates that for our locally isothermal SPH treatment, lower resolution disks fragment more easily.
To assess the sensitivity of disk fragmentation to the state of the ICs, we applied a series of small perturbations to disks close to the fragmentation boundary, with a \\(Q_{eff}\\) slightly greater than 1. We used two of the ICs in our suite of runs which didn't fragment: one with \\(Q_{eff}=1.01\\) and one with \\(Q_{eff}=1.12\\) (simulations 2 and 5 in table B2). We found that perturbing orbital velocities slightly out of equilibrium
Figure 5: Total simulation time (non-fragmenting simulations) and time until fragmentation (fragmenting simulations), in units of the orbital period at the most unstable disk radius. Fragmentation is defined to occur when a gravitationally bound clump forms. The fragmentation timescale increases rapidly as \\(Q_{eff}\\) approaches 1. Simulations with \\(Q_{eff}\\gtrsim 1\\) were run for longer to verify that they do not fragment.
Figure 6: Fragmentation criteria for disk ICs. The curved lines define which disks will fragment for various disk temperatures at 1 AU, assuming a temperature profile of \\(T\\propto r^{-0.59}\\) and a surface density profile of \\(\\Sigma\\propto 1/R\\). ICs which lie to the right of a line will fragment. The boundaries are \\(Q_{eff}=1\\) contours, where the \\(Q_{eff}\\) estimates include approximations for disk height and disk self gravity. The red line marks the boundary defined by the fiducial temperature of \\(T_{0}(1~{}\\mathrm{AU})=130\\pm 25K\\) from Andrews & Williams (2005) and the red dot marks the fiducial disk mass and radius for a young disk from Isella et al. (2009) (see §3.3 for a discussion of these values). These fiducial values likely have large uncertainties (not pictured).
can cause a disk to fragment, but that results were fairly insensitive to perturbing the disk height or to applying small spiral density perturbations. These results are summarized in table 1.
We applied \\(m=2,3,4\\) spiral density perturbations by multiplying the particle masses \\(M_{0}\\) by:
\\[M_{1}=M_{0}\\left(1+\\epsilon\\cos(m\\theta)\\right) \\tag{14}\\]
where \\(M_{1}\\) is the perturbed particle mass and \\(\\epsilon\\) is the depth of the perturbation, here chosen to be \\(\\epsilon=0.01\\). These perturbations are similar to those done by e.g. Boss (2002) to seed instabilities in a grid code which does not have SPH particle noise. These perturbations were performed on the \\(Q_{eff}=1.01\\) simulation and were not sufficient to force the disk to fragment.
We also wished to probe how sensitive disks are to perturbing the height. When generating ICs, there are many different methods for estimating the vertical density profile. We therefore wished to see whether perturbing a disk's scale height away from equilibrium could cause it to fragment. We ran the \\(Q_{eff}=1.01\\) simulation twice, multiplying the particle \\(z\\) positions by 0.98 and 0.9, decreasing the scale heights by 2% and 10%, respectively. Neither of these runs fragmented.
The disks do appear to be much more sensitive to particle velocities. We applied small, axisymmetric velocity perturbations to disks which otherwise did not fragment. The perturbed velocity \\(v_{1}\\) was calculated as:
\\[v_{1}=v_{0}\\left(1+\\epsilon\\frac{R_{0}}{R}\\right) \\tag{15}\\]
where \\(v_{0}\\) is the original velocity, \\(R_{0}\\) is the inner edge of the disk (where the \\(\\Sigma\\) reaches a maximum) and \\(\\epsilon<<1\\) is the depth of the perturbation. This applies a fractional perturbation of \\(\\epsilon\\) at \\(R_{0}\\) which decays as \\(1/R\\). Nearly all the disk mass lies outside of \\(R_{0}\\).
For the \\(Q_{eff}=1.01\\) disk, a 1% perturbation (\\(\\epsilon=0.01\\)) was sufficient to cause fragmentation. Figure 7 shows the spiral power as a function of time for this run. Spiral power is calculated by binning \\(\\Sigma\\) in \\((R,\\theta)\\) and calculating the standard deviation. The perturbed and original simulations initially develop in a similar manner for the first orbital period (defined at the most unstable radius). During this stage an axisymmetric (\\(m=0\\)) density wave moves outward from the disk center. After about an orbital period the perturbed disk develops significantly more pronounced spiral density waves. After 9-10 orbits the disk fragments. Fragmentation is accompanied by a rapid spike in the spiral power as the disk becomes highly non-axisymmetric.
A similar test was performed with a much more stable disk (\\(Q_{eff}=1.12\\)). A series of perturbations were applied (\\(\\epsilon=.01,.02,.04,.08,.11,.16\\)). An 11% perturbation was sufficient to force the disk to fragment. Depending on the approximations used to estimate circular velocities, discrepancies on this order can happen for disks sufficiently massive to be close to the fragmentation boundary.
This demonstrates the care which must be taken in developing equilibrium models of unstable disks near the fragmentation boundary, especially with regard to the velocity calculation. An apparently small perturbation can deposit large amounts of energy in a disk, forcing it sufficiently far out of equilibrium to fragment.
We also tested the importance of the inner and outer boundaries by taking a disk near the fragmentation boundary with \\(Q_{eff}\\) slightly above 1 and applying a step function cut-off to \\(\\Sigma\\) on the inner edge, outer edge, and both. Waves were seen to reflect off these hard boundaries, but the effect was not sufficiently strong to drive the disk to fragmentation.
## 5 Clumps
The formation of tightly bound, dense clumps of gas marks the stage of disk evolution where our isothermal treatment begins to break down. We therefore restrict our analysis of clumps to the early stages of formation and limit the scope of our results.
\begin{table}
\begin{tabular}{c c c c c}
type & depth & fragment? & \(Q_{eff}\) & notes \\
\hline \hline
density & 1\% & no & 1.01 & m=4 \\
density & 1\% & no & 1.01 & m=3 \\
density & 1\% & no & 1.01 & m=2 \\
\hline
height & \(-10\%\) & no & 1.01 & \\
height & \(-2\%\) & no & 1.01 & \\
\hline
velocity & 0.5\% & no & 1.01 & \\
velocity & 1\% & yes & 1.01 & \\
velocity & 2\% & yes & 1.01 & \\
velocity & 10\% & yes & 1.01 & \\
velocity & 1\% & no & 1.12 & \\
velocity & 2\% & no & 1.12 & \\
velocity & 4\% & no & 1.12 & \\
velocity & 8\% & no & 1.12 & \\
velocity & 11\% & yes & 1.12 & \\
velocity & 16\% & yes & 1.12 & \\
\end{tabular}
\end{table}
Table 1: Results for a series of tests perturbing disks near equilibrium. Density perturbations follow eq. (14), height perturbations rescale particle \(z\) positions, and velocity perturbations follow eq. (15).
Figure 7: Total non-axisymmetric power versus time for a disk with and without a 1% velocity perturbation (simulation number 2). The original simulation (\(Q_{eff}=1.01\)) did not fragment; the simulation with a small, axisymmetric velocity perturbation did. Spiral structure initially develops similarly for both simulations for about the first orbit. For the next 8 orbits the perturbed simulation develops deeper spiral structure until it fragments after 9-10 orbits.
To track the formation of clumps, we developed a simple clump finding/tracking software routine4 built around the group finder SKID5. To find gravitationally bound clumps, we must account for the disk geometry. Particle masses are scaled by \(R^{3}\) and a density threshold is set such that at least \(N\) particles lie within the Hill-sphere of particles under consideration, where \(N\) is chosen as the number of neighbors used for SPH smoothing. This gives a threshold of
Footnote 4: Our clump finding code is freely available on github at [https://github.com/ibackus/diskpy](https://github.com/ibackus/diskpy) as a part of our PPD python package _diskpy_
Footnote 5: SKID is freely available at [https://github.com/N-BodyShop/skid](https://github.com/N-BodyShop/skid)
\\[\\rho_{min}=\\frac{3NM_{*}}{R^{3}} \\tag{16}\\]
The SKID algorithm is then applied, which uses a friends-of-friends clustering algorithm followed by a gravitational unbinding procedure to determine which clumps are gravitationally bound. By visual inspection, this provides robust results over a large range of disk and simulation parameters, and importantly avoids marking high-density spiral arms as clumps. Figure 9 shows an example of this clump finding applied to a highly unstable PPD.
To track clumps over many time steps, clumps are first found in all simulation snapshots. They are then tracked over time by comparing clumps in adjacent time steps and seeing which have the most particles in common. Mergers (including multiple mergers), clump destruction, and clump formation are accounted for. Clump parameters such as mass, density, size, and location are all calculated and followed as a function of time.
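A sketch of the two pieces just described: the Hill-sphere density threshold of eq. (16), and frame-to-frame linking by maximum particle overlap. Clumps are represented here as sets of particle indices; the interfaces are illustrative rather than _diskpy_'s actual API.

```python
import numpy as np

def rho_min(R, M_star, N=32):
    """Density threshold of eq. (16): at least N particles inside a
    particle's Hill sphere at cylindrical radius R."""
    return 3.0 * N * M_star / R**3

def match_clumps(clumps_prev, clumps_now):
    """Link clumps between adjacent snapshots by maximum particle
    overlap.  Each clump is a set of particle indices.  Returns a dict
    mapping each index in clumps_now to the best-matching index in
    clumps_prev, or None for a newly formed clump."""
    links = {}
    for j, now in enumerate(clumps_now):
        overlaps = [len(now & prev) for prev in clumps_prev]
        best = int(np.argmax(overlaps)) if overlaps else -1
        links[j] = best if (best >= 0 and overlaps[best] > 0) else None
    return links
```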
We find that clumps form according to the same general picture, as shown in figure 8. Disks with a minimum \(Q_{eff}\lesssim 1.6\) (or about \(Q_{min}\lesssim 1.3\)) will form noticeable spiral structure after several orbital periods. Disks with an initial \(Q_{eff}<1\) will grow overdense spiral arms which will collapse and fragment into several dense clumps. Clumps initially form near the most unstable disk radius. For these simulations, that is approximately at \(R_{d}\). They form after several to tens of orbital periods at the most unstable radius. Average clump masses are around \(0.3M_{\mathrm{Jup}}\), with some rapidly growing to \(1M_{\mathrm{Jup}}\). The timescales for disk fragmentation increase rapidly as \(Q_{eff}\to 1\) (fig. 5). After this stage, the isothermal approximation begins to break down. The disks then undergo a rapid, violent fragmentation.
## 6 Discussion
### Thermodynamics
For these simulations we used a locally isothermal approximation for several reasons. We wished to perform a large scan of parameter space without compromising resolution too strongly. A computationally fast isothermal EOS is straightforward to implement. We also desired to build on previous work and to extend it to poorly studied M-dwarf systems. Our work here is directed at exploring the dependence of the fragmentation boundary on stellar/disk mass, disk height, and ICs. We leave the dependence on EOS for future work.
A non-isothermal EOS introduces non-trivial numerical
Figure 8: An example of the typical stages of clump formation. Clockwise, from top-left: (1) Initial conditions with \(Q_{eff}<1\). (2) Strong spiral structure develops. (3) Spiral arms become overdense and break apart into clumps. (4) The disk begins to fragment strongly.
Figure 9: A demonstration of the clump finding algorithm used here. Integrated column density for the gas is pictured with a logarithmic color scale. Red circles mark the detected clumps. At the end of the simulation, this highly unstable disk formed 108 distinct, gravitationally bound clumps. The algorithm picks out clumps with a high success rate without reporting false positives from other high density structure such as spiral arms.
issues, especially in the context of SPH simulations of protoplanetary disks. Unwarranted, poorly understood heating terms, especially from artificial viscosity (AV), are introduced into the energy equation. Previous results (Meru & Bate, 2011, 2012; Lodato & Clarke, 2011; Rice et al., 2012, 2014) and our own initial tests indicate that in the context of PPDs, AV heating can dominate disk thermodynamics. Non-isothermal PPD simulations may not converge (Meru & Bate, 2011), whereas in Appendix B we demonstrate that our approach does converge. It is also unclear how rapidly disks will radiatively cool, an important parameter for the possibility of disk fragmentation (Gammie, 2001; Rafikov, 2005, 2007). We hope to investigate these effects in future work.
The isothermal approximation used for these simulations limits the scope of our results. An isothermal EOS would well approximate a disk where stellar radiation, viscous accretion heating, and radiative losses to infinity, are nearly balanced and control the temperature of the disk. Furthermore, temperature is independent of density, which is only appropriate for an optically thin disk. The isothermal approximation applies only to scenarios where dynamical timescales are much longer than the timescales for heating/cooling back to thermal equilibrium with the background.
These conditions may not hold in the disks under consideration. This experiment therefore does not realistically follow the thermal evolution of the disk. We are limited to a preliminary investigation into the large-scale dynamics of the disk before the putative equilibrium temperature profile would be expected to be strongly altered.
During the initial stages of disk evolution, dynamical and physical timescales are of order the orbital period and disk radius, respectively. During this stage, the isothermal EOS can still provide insight into the global dynamics of a GI disk at a certain stage. However, once clumps form, the isothermal approximation no longer provides much insight. Clumps should get hot as they collapse. The dynamic timescales of dense clumps will be short as they accrete matter, decouple from the disk, and scatter with other clumps. Pressure support of clumps, which is poorly captured by an isothermal EOS, should tend to increase their size and their coupling to the disk, meaning that the violent fragmentation of disks after initial clump formation which we see in our simulations may not be the final state of a typical fragmenting disk. Clumps which are sufficiently dense may decouple from the disk enough to experience strong shocks and tidal interactions which will cause heating. These processes will strongly influence clump growth, evolution, and survival, all of which are still under investigation (Nayakshin, 2010; Galvagni et al., 2012).
We therefore limit ourselves to discussing the early stages of clump formation. Our results indicate that the critical value of \\(Q_{min}\\lesssim 0.9\\) required for fragmentation is significantly lower than some previous results which found closer to \\(Q_{min}\\lesssim 1.5\\). Although this makes requirements for fragmentation more stringent, it does not rule out GI and disk fragmentation as important mechanisms during planet formation in PPDs around M-dwarfs.
### Previous results
We find that for most disks, \\(Q\\lesssim 0.9\\) is required for disk fragmentation. Re-parameterizing \\(Q\\) as \\(Q_{eff}\\) to include the stabilizing effect of disk height provides a more precise way to predict fragmentation. The ratio of \\(Q_{eff}/Q\\) can vary by 30% for reasonable disk parameters. This ratio will vary even more when considering solar type stars in addition to M-dwarfs. However, \\(Q\\) still provides a reasonable metric for disk fragmentation.
Other isothermal studies have found different boundaries. In contrast to our results, Nelson et al. (1998) found the threshold to be \(Q\leq 1.5\). Boss (1998) found \(Q\) as high as 1.3 would fragment. Boss (2002) found, using an isothermal EOS or diffusive radiative transfer, that \(Q=1.3-1.5\) would fragment. Mayer et al. (2004) found \(Q=1.4\) isothermal disks would fragment. Pickett et al. (2003) found that cooling a disk from \(Q=1.8\) to \(Q=0.9\) caused it to fragment. Their clumps did not survive, although, as they note, that may be due to numerical issues. Boley (2009) found that disks could be pushed below \(Q=1\) by mass loading and still not fragment, by transporting matter away from the star and thereby decreasing \(\Sigma\) and increasing \(Q\). It should be noted that some of these simulations were run at much lower resolution than ours. Differences may also be due in part to simulation methods: using cylindrical grids, spherical grids, or SPH methods; applying perturbations; or even 2D (Nelson et al., 1998) vs 3D simulations.
Since all the details of previous work are not available, the source of the discrepancy in critical \(Q\) values is uncertain. One source of discrepancy is simply how \(Q\) is calculated. As mentioned in §4, approximating \(Q\) by ignoring disk self-gravity and pressure forces can underestimate \(Q\) on the 10% level for heavy disks.
The discrepancy may also be due to the different methods of constructing equilibrium disks. As demonstrated in §4.1, overestimating velocities at less than the percent level can force a disk to fragment. Disks near the fragmentation boundary are very sensitive to ICs. Initial conditions are not in general available for previous work; however, we can note that some studies appear to display a rapid evolution of \(Q\) at the beginning of the simulation. For example, some runs of Pickett et al. (2003) evolve from \(Q_{min}=1.5\) to \(Q_{min}=1\) in fewer than 3 ORPs (see their Figure 14). Figure 1 of Mayer et al. (2004) shows two isothermal simulations which evolve from a \(Q_{min}\) of 1.38 and 1.65 to \(Q_{min}=1\) in fewer than 2 ORPs. This is indicative of ICs which are out of equilibrium.
In contrast, even our very unstable disks display a remarkably smooth and gradual initial evolution. Figure 10 shows the behavior of the minimum \\(Q_{eff}\\) (normalized by its initial value) for our fragmenting runs as a function of time until fragmentation. For all the runs, \\(Q_{eff}\\) decreases gradually for most of the simulation until dropping rapidly shortly before the disk fragments. For our runs near the fragmentation boundary, this is much more gradual and much less pronounced than for the runs of Pickett et al. (2003) or Mayer et al. (2004) mentioned above. For us, a much smaller change in \\(Q\\) takes around 10 ORPs. \\(Q_{eff}\\) evolves even more slowly for non-fragmenting runs. \\(Q_{min}\\) follows a similar behavior, although with more scatter (in large part because \\(Q\\) does not determine the timescale until fragmentation as well as \\(Q_{eff}\\) does).
However, it is not certain how close to equilibrium ICs should be to capture the relevant physics of PPDs. Actual disks are constantly evolving, from the early stages of star formation until the end of the disk lifetime. We chose to use disks as close to equilibrium as possible, seeded only with SPH Poisson noise, to avoid introducing numerical artifacts. Some authors introduce density perturbations which are controllable. If sufficiently large, they may serve to ameliorate the problems mentioned above by explicitly having fragmentation be driven by physically reasonable spiral modes (e.g. Boss (2002)) or intentionally large random perturbations (e.g. Boley (2009)). Others have considered mass loading as a means to grow to low \(Q\) (Boley, 2009).
### GI in PPDs
Observed disks around M-dwarfs do not appear to have inferred \(Q\) values low enough to be sufficiently unstable for fragmentation under GI; however, this is what would be expected given the short timescales for the fragmentation of an unstable disk. Reported disk ages are of order \(10^{6}\) years (Haisch et al., 2001), orders of magnitude longer than fragmentation timescales, making the observation of a highly gravitationally unstable disk unlikely. This is a strong selection effect for observed disk parameters.
Although fragmentation timescales are very rapid, disks may persist much longer in moderately unstable configurations where GI drives large scale structure but which have not grown sufficiently unstable as to be prone to fragmentation. Early work is being done on trying to observe GI driven structures, but with current instrumentation such structures will be difficult to resolve (Douglas et al., 2013; Evans et al., 2015).
As shown in figure 6, the fiducial values for disk parameters adopted here place observed disks near the boundary for fragmentation. The fact that observations indicate normal disk parameters which are close to the boundary, rather than orders of magnitude off, suggests it is plausible that a significant portion of PPDs around M-dwarfs will undergo fragmentation. This would predict a sharp transition in the distribution of inferred \\(Q_{eff}\\) values around \\(Q_{eff}=1\\). Older disks tend to expand radially (Isella et al., 2009), thereby decreasing \\(\\Sigma\\), increasing \\(Q\\), and pushing them away from the \\(Q_{eff}=1\\) boundary. We note that while \\(Q\\) is a reasonably strong predictor of fragmentation, disk height is an additional parameter worth measuring to predict fragmentation.
Given the results of these isothermal simulations, we expect GI to play a large role in the early stages of planet formation around M-dwarfs. The exact role of star mass/type on fragmentation remains unclear. The parameter space we scanned is sufficiently large that adding an extra dimension was prohibitive; we therefore studied only one star mass. Future work should be able to determine which stars are the most suitable for fragmentation. Once large-scale density perturbations are formed via GI, the fate of the disk remains unclear. Future work should include more sophisticated thermodynamics to follow the evolution of the gaseous component of the disk, to better determine under what conditions clumps will form and what is required for them to survive. Additionally, decreasing the resolution in our isothermal SPH simulations appears to drive fragmentation (see appendix B). The importance of resolution in simulations is subject to much investigation and should be further pursued (e.g. Meru & Bate (2011, 2012); Cartwright et al. (2009b)).
Planet formation will of course require the concentration of solids as well. Including dust in simulations of young PPDs will be required. In a fully 3D, highly non-axisymmetric environment, we may investigate how GI affects solids. Dust enhancement through pressure gradients, dust evolution through collisions and coagulation, and dust coupling to disk opacity and cooling will all strongly affect prospects for planet formation. Methods for dust dynamics in PPDs using SPH have been proposed recently (Laibe & Price, 2012; Price & Laibe, 2015), and we hope to explore solid/gas interactions in the future.
## Code and Data
The code repository for generating our ICs and for clump-tracking is freely available online at [https://github.com/ibackus/diskpy](https://github.com/ibackus/diskpy) as a part of our _diskpy_ python package. IC generation is contained within the subpackage _diskpy.ICgen_ and clump tracking is contained within the subpackage _diskpy.clumps_. A public version of ChaNGa is available at [http://www-hpcc.astro.washington.edu/tools/changa.html](http://www-hpcc.astro.washington.edu/tools/changa.html) and is required for IC generation. The group finding software SKID (required for clump finding) is available at [https://github.com/N-BodyShop/skid](https://github.com/N-BodyShop/skid) and depends also on tipsy tools ([https://github.com/N-BodyShop/tipsy_tools](https://github.com/N-BodyShop/tipsy_tools)).
We have also made much of our data available online at the University of Washington's ResearchWorks archive at [http://hdl.handle.net/1773/34933](http://hdl.handle.net/1773/34933). Initial conditions and final simulation snapshots are available for all the simulations presented here. Wengen test results (see appendix A) are also available there.

Figure 10: Minimum \(Q_{eff}\) normalized by the initial minimum \(Q_{eff}\) vs fraction of time until fragmentation for all the fragmenting runs. The average of these runs is plotted in red. Disk fragmentation occurs at \(t/t_{fragment}=1\). All runs follow similar trajectories in this plot, even though a significant range of initial \(Q_{eff}\) and \(t_{fragment}\) values are represented here (all the fragmenting runs in Fig. 5 are presented here). The simulations undergo an initially gradual decrease in \(Q_{eff}\) which steepens sharply shortly before fragmentation.
## Acknowledgements
We would like to acknowledge Aaron Boley for many fruitful discussions and much feedback on this work. We thank David Fleming for help on this work and _diskpy_. This work was performed as part of the NASA Astrobiology Institute's Virtual Planetary Laboratory, supported by the National Aeronautics and Space Administration through the NASA Astrobiology Institute under solicitation NNH12ZDA002C and Cooperative Agreement Number NNA13AA93A. The authors were also supported by NASA grant NNX15AE18G. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) (Towns et al., 2014), which is supported by National Science Foundation grant number ACI-1053575. This work was facilitated through the use of advanced computational, storage, and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.
## References
* Adams et al. (1989) Adams F. C., Ruden S. P., Shu F. H., 1989, ApJ, 347, 959
* Andrews & Williams (2005) Andrews S. M., Williams J. P., 2005, ApJ, 631, 1134
* Andrews et al. (2009) Andrews S. M., Wilner D. J., Hughes A. M., Qi C., Dullemond C. P., 2009, ApJ, 700, 1502
* Apai & Pascucci (2009) Apai D., Pascucci I., 2009, Meteoritics and Planetary Science Supplement, 72, 5361
* Apai & Pascucci (2010) Apai D., Pascucci I., 2010, in Astrobiology Science Conference 2010: Evolution and Life: Surviving Catastrophes and Extremes on Earth and Beyond. p. 5396
* Balsara (1995) Balsara D. S., 1995, Journal of Computational Physics, 121, 357
* Barnes & Hut (1986) Barnes J., Hut P., 1986, Nature, 324, 446
* Binney & Tremaine (1987) Binney J., Tremaine S., 1987, Galactic dynamics
* Boley (2009) Boley A. C., 2009, ApJ, 695, L53
* Boley & Durisen (2010) Boley A. C., Durisen R. H., 2010, ApJ, 724, 618
* Borucki et al. (2010) Borucki W. J., et al., 2010, Science, 327, 977
* Boss (1997) Boss A. P., 1997, Science, 276, 1836
* Boss (1998) Boss A. P., 1998, ApJ, 503, 923
* Boss (2002) Boss A. P., 2002, ApJ, 576, 462
* Boss (2006a) Boss A. P., 2006a, ApJ, 643, 501
* Boss (2006b) Boss A. P., 2006b, ApJ, 644, L79
* Boss (2008) Boss A. P., 2008, Physica Scripta Volume T, 130, 014020
* Cameron (1978) Cameron A. G. W., 1978, Moon and Planets, 18, 5
* Cartwright et al. (2009a) Cartwright A., Stamatellos D., Whitworth A. P., 2009a, MNRAS, 395, 2373
* Cartwright et al. (2009b) Cartwright A., Stamatellos D., Whitworth A. P., 2009b, MNRAS, 395, 2373
* Chiang & Goldreich (1997) Chiang E. I., Goldreich P., 1997, ApJ, 490, 368
* Douglas et al. (2013) Douglas T. A., Caselli P., Ilee J. D., Boley A. C., Hartquist T. W., Durisen R. H., Rawlings J. M. C., 2013, MNRAS, 433, 2064
* Durisen et al. (2007) Durisen R. H., Boss A. P., Mayer L., Nelson A. F., Quinn T., Rice W. K. M., 2007, Protostars and Planets V, pp 607-622
* Evans et al. (2015) Evans M. G., Ilee J. D., Boley A. C., Caselli P., Durisen R. H., Hartquist T. W., Rawlings J. M. C., 2015, MNRAS, 453, 1147
* Galvagni et al. (2012) Galvagni M., Hayfield T., Boley A., Mayer L., Roskar R., Saha P., 2012, MNRAS, 427, 1725
* Gammie (2001) Gammie C. F., 2001, ApJ, 553, 174
* Hachisu (1986) Hachisu I., 1986, ApJS, 61, 479
* Haghighipour & Boss (2003) Haghighipour N., Boss A. P., 2003, ApJ, 598, 1301
* Haisch et al. (2001) Haisch Jr. K. E., Lada E. A., Lada C. J., 2001, ApJ, 553, L153
* Han et al. (2014) Han E., Wang S. X., Wright J. T., Feng Y. K., Zhao M., Fakhouri O., Brown J. I., Hancock C., 2014, PASP, 126, 827
* Helled et al. (2014) Helled R., et al., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. University of Arizona Press, pp 643-665, doi:10.2458/azu_uapress_9780816531240
* Isella et al. (2009) Isella A., Carpenter J. M., Sargent A. I., 2009, ApJ, 701, 260
* Laibe & Price (2012) Laibe G., Price D. J., 2012, MNRAS, 420, 2345
* Lodato & Clarke (2011) Lodato G., Clarke C. J., 2011, MNRAS, 413, 2735
* Lynden-Bell & Pringle (1974) Lynden-Bell D., Pringle J. E., 1974, MNRAS, 168, 603
* Mann et al. (2015) Mann R. K., Andrews S. M., Eisner J. A., Williams J. P., Meyer M. R., Di Francesco J., Carpenter J. M., Johnstone D., 2015, ApJ, 802, 77
* Mayer & Gawryszczak (2008) Mayer L., Gawryszczak A. J., 2008, in Fischer D., Rasio F. A., Thorsett S. E., Wolszczan A., eds, Astronomical Society of the Pacific Conference Series Vol. 398, Extreme Solar Systems. p. 243 (arXiv:0710.3590)
* Mayer et al. (2004) Mayer L., Quinn T., Wadsley J., Stadel J., 2004, ApJ, 609, 1045
* Menon et al. (2015) Menon H., Wesolowski L., Zheng G., Jetley P., Kale L., Quinn T., Governato F., 2015, Computational Astrophysics and Cosmology, 2, 1
* Meru & Bate (2011) Meru F., Bate M. R., 2011, MNRAS, 411, L1
* Meru & Bate (2012) Meru F., Bate M. R., 2012, MNRAS, 427, 2022
* Monaghan (1992) Monaghan J. J., 1992, ARA&A, 30, 543
* Nayakshin (2010) Nayakshin S., 2010, MNRAS, 408, L36
* Nelson (2006) Nelson A. F., 2006, MNRAS, 373, 1039
* Nelson et al. (1998) Nelson A. F., Benz W., Adams F. C., Arnett D., 1998, ApJ, 502, 342
* Paardekooper (2012) Paardekooper S.-J., 2012, MNRAS, 421, 3286
* Pickett et al. (2003) Pickett B. K., Mejia A. C., Durisen R. H., Cassen P. M., Berry D. K., Link R. P., 2003, ApJ, 590, 1060
* Pollack et al. (1996) Pollack J. B., Hubickyj O., Bodenheimer P., Lissauer J. J., Podolak M., Greenzweig Y., 1996, Icarus, 124, 62
* Price & Laibe (2015) Price D. J., Laibe G., 2015, MNRAS, 451, 813
* Rafikov (2005) Rafikov R. R., 2005, ApJ, 621, L69
* Rafikov (2007) Rafikov R. R., 2007, ApJ, 662, 642
* Rice et al. (2012) Rice W. K. M., Forgan D. H., Armitage P. J., 2012, MNRAS, 420, 1640
* Rice et al. (2014) Rice W. K. M., Paardekooper S.-J., Forgan D. H., Armitage P. J., 2014, MNRAS, 438, 1593
* Rogers & Wadsley (2011) Rogers P. D., Wadsley J., 2011, MNRAS, 414, 913
* Romeo et al. (2010) Romeo A. B., Burkert A., Agertz O., 2010, MNRAS, 407, 1223
* Seager et al. (2015) Seager S., Dalcanton J. J., Postman M., Tumlinson J., Mather J. C., 2015, preprint, (arXiv:1511.01144)
* Shu et al. (1990) Shu F. H., Tremaine S., Adams F. C., Ruden S. P., 1990, ApJ, 358, 495
* Toomre (1964) Toomre A., 1964, ApJ, 139, 1217
* Towns et al. (2014) Towns J., et al., 2014, Computing in Science and Engineering, 16, 62
* Vorobyov (2011) Vorobyov E. I., 2011, ApJ, 729, 146
* Wadsley et al. (2004) Wadsley J. W., Stadel J., Quinn T., 2004, New Astronomy, 9, 137
## Appendix A Wengen tests
The Wengen tests are a series of code tests designed to compare different astrophysical hydrodynamic and gravity simulation codes and are available online at [http://www.astrosim.net/code/](http://www.astrosim.net/code/). We ran Wengen test 4, an unstable isothermal PPD, with ChaNGa. We present the results of our simulation here. We find that our results are in good agreement with other simulation codes (previous test results can be found at [http://users.camk.edu.pl/gawrysz/test4/](http://users.camk.edu.pl/gawrysz/test4/)). We have reproduced all the plots on the Wengen test website for the ChaNGa results, which are available at [http://hdl.handle.net/1773/34933](http://hdl.handle.net/1773/34933). Our ICs are the 200k-particle run. Here we present a few figures demonstrating the ChaNGa results.
## Appendix B Convergence test
Previous work has indicated that the results of SPH simulations of PPDs can be resolution dependent (Lodato & Clarke, 2011; Meru & Bate, 2011, 2012). Nelson (2006) laid out several resolution requirements for SPH simulations of PPDs. For our suite of simulations, we exceed the mass resolution requirement by a minimum factor of 7, and most simulations exceed it by a factor of 40. We also easily meet their scale height resolution requirement. At the midplane of the most unstable disk radius, the ratio of the scale height to the smoothing length is between \(4-12\) for all our runs (Nelson (2006) finds this ratio should be at least \(\sim 4\)).
As part of our analysis, we ran a basic convergence test to verify our simulation code and to investigate the effects of resolution on disk fragmentation in SPH simulations. We ran a simulation close to the fragmentation boundary with a minimum \\(Q_{eff}=1.06\\) (simulation 48 in table 2) at 6 different resolutions from 50k to 10M particles. These runs are summarized in table 11.
We find (i) that low resolution simulations are more susceptible to disk fragmentation and (ii) that simulations appear to converge reasonably well, with our chosen resolution of \\(10^{6}\\)-particles being sufficient for the analysis presented in this paper. However, these convergence tests are only preliminary and future work may reveal that higher particle count is required for fully believable results. Convergence may also depend on EOS, cooling prescriptions, and whether a grid code or an SPH code is used.
Figure 11 shows logarithmic surface density plots of all 6 runs after 4.0 ORPs. The 50k- and 100k-particle runs have already fragmented violently. The 500k-particle run has developed somewhat stronger spiral power than the higher resolution runs and eventually fragments after about 12 ORPs. The other runs have developed some spiral power that is insufficient to drive the disks to fragmentation.
Figure 12 shows what we call the normalized spiral power. We calculate spiral power by binning the surface density in \(R,\theta\) (we used \(128\times 128\) bins here), calculating the standard deviation along the angular direction, and summing along the radial direction. We then normalize by multiplying the spiral power by \(\sqrt{N}\), where \(N\) is the number of SPH particles in the run. This is done to account for noise in the number of particles per bin, which scales as \(1/\sqrt{n_{\rm bin}}\), where \(n_{\rm bin}\) is approximately proportional to \(N\).
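For concreteness, this diagnostic amounts to only a few lines of NumPy. The sketch below is our own illustration rather than the analysis code used for the paper; in particular, the per-particle surface density estimate `sigma_part` and the treatment of empty bins are assumptions.

```python
import numpy as np

def normalized_spiral_power(R, theta, sigma_part, n_particles, n_bins=128):
    """Bin surface density on an (R, theta) grid, take the standard
    deviation along theta, sum along R, and scale by sqrt(N)."""
    counts, _, _ = np.histogram2d(R, theta, bins=n_bins)
    sums, _, _ = np.histogram2d(R, theta, bins=n_bins, weights=sigma_part)
    # Mean surface density per bin; empty bins contribute zero.
    Sigma = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    power = Sigma.std(axis=1).sum()      # std along theta, summed along R
    return power * np.sqrt(n_particles)  # normalize out particle noise
```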
The normalized spiral power then represents how much larger than the particle noise the spiral power is. Fragmentation is visible in Figure 12 as a sharp rise in power at the end of the simulation. As can be seen, the normalized spiral power decreases with increasing particle count and converges for the higher-resolution simulations.
As expected for \\(Q_{\\mathit{eff}}=1.06\\), the higher resolution simulations do not fragment. As shown in table 11, the simulations with \\(N\\geq 10^{6}\\) particles approach a stable value of \\(Q_{\\mathit{eff}}\\approx 1.11\\) and therefore would not fragment if run for longer. Simulations with \\(N\\leq 500\\)k particles approached very low \\(Q_{\\mathit{eff}}\\) minimum values, although \\(Q_{\\mathit{eff}}\\) is strictly speaking not well defined for highly non-axisymmetric disks.
Table 11: Convergence test runs. ICs are identical to simulation 48 in table 2 but with a different number of particles. The resolution is the number of SPH particles in the run. Runs that fragment are highly non-axisymmetric, so the quoted \(Q_{\mathit{eff}}\) values are only illustrative. Runs that don't fragment approach a stable value significantly above 1.

| Run | Resolution | Fragment? | Final \(Q_{\mathit{eff}}\) |
| --- | --- | --- | --- |
| 0 | 50k | yes | 0.67 |
| 1 | 100k | yes | 0.69 |
| 2 | 500k | yes | 0.89 |
| 3 | 1M | — | 1.11 |
| 4 | 5M | — | 1.11 |
| 5 | 10M | — | 1.12 |
Figure 11: Logarithmic surface density plots for the simulations listed in table 11 of a \(Q_{eff}=1.06\) run after 4.0 ORPs (defined at the initial disk radius). The low resolution runs have already fragmented. The 500k-particle run has developed strong spiral power and will eventually fragment. The remaining runs do not fragment and approach a value of \(Q_{eff}>1.1\) within the course of the simulations.
Figure 12: Normalized spiral power vs. time for the 6 simulations in the convergence test. Power is calculated by binning \(\Sigma\) in (\(R\),\(\theta\)), calculating the standard deviation along \(\theta\) and summing along \(R\). Power is then normalized by multiplying by \(\sqrt{N}\) to adjust for the power in particle noise. Fragmentation is seen for the simulations with 50k, 100k, and 500k particles as a rapid increase in spiral power at the end of the simulation. As expected, the higher resolution simulations do not fragment.
Abstract: We investigate the conditions required for planet formation via gravitational instability (GI) and protoplanetary disk (PPD) fragmentation around M-dwarfs. Using a suite of 64 SPH simulations with \(10^{6}\) particles, the parameter space of disk mass, temperature, and radius is explored, bracketing reasonable values based on theory and observation. Our model consists of an equilibrium, gaseous, and locally isothermal disk orbiting a central star of mass \(M_{*}=M_{\sun}/3\). Disks with a minimum Toomre \(Q\) of \(Q_{min}\lesssim 0.9\) will fragment and form gravitationally bound clumps. Some previous literature has found \(Q_{min}<1.3-1.5\) to be sufficient for fragmentation. Increasing disk height tends to stabilize disks, and when incorporated into \(Q\) as \(Q_{eff}\propto Q(H/R)^{\alpha}\) for \(\alpha=0.18\) is sufficient to predict fragmentation. Some discrepancies in the literature regarding \(Q_{crit}\) may be due to different methods of generating initial conditions (ICs). A series of 15 simulations demonstrates that perturbing ICs slightly out of equilibrium can cause disks to fragment for higher \(Q\). Our method for generating ICs is presented in detail. We argue that GI likely plays a role in PPDs around M-dwarfs and that disk fragmentation at large radii is a plausible outcome for these disks.

Keywords: accretion, accretion disks - protoplanetary disks - methods: numerical
# Fuzzy Spatial Objects and Their Dynamics
Martin Molenaar
Tao Cheng
International Institute for Aerospace Survey and Earth Sciences (ITC)

P.O. Box 6, 7500 AA Enschede

The Netherlands

e-mail: {molenaar, cheng}@itc.nl

Phone: +31 53 4874 454, Fax: +31 53 4874 355
## 1 Introduction
The syntactic approach for handling spatial object information as presented in (Molenaar 1994 and 1996) makes it possible to distinguish three types of statements with respect to the existence of spatial objects:
* an _existential statement_ asserting that there are spatial and thematic conditions that imply that an object exists,
* an _extensional statement_ identifying the geometric elements describing the spatial extent of the object,
* a _geometric statement_ identifying the actual shape, size and position of the object in a metric sense.
These three types of statements are intimately related. The extensional and geometric statements imply the existential statement, and if an object does not exist it cannot have a spatial extent or geometry. The existential statement often relates to thematic information that is not explicit in the other two statements. The geometric statement also implies the extensional statement; often the actual geometry of the object is derived from the extensional description. These three types of statements can all have a degree of uncertainty, and although the statements are related they give us different perspectives that may help us understand the different aspects of uncertainty in the description of spatial objects.
The determination of the spatial extent of geo-objects is generally approached through the boundaries, or more precisely through the position of the boundary points. The analysis of the geometric uncertainty of the objects is therefore often based on accuracy models for the coordinates of these points. The epsilon band method is well known in this context (Chun et al., 1980). Yet the solutions for handling this problem are not found satisfactory, because the geometric uncertainty of geo-objects is not only a matter of coordinate accuracy, i.e. it is not only a problem of geometry, but also a problem of object definition and thematic vagueness. This latter aspect cannot be handled by a geometric approach alone. This becomes apparent when mapping is not done in a crisp geometry, as in land surveying and photogrammetry. Object detection through image interpretation is an example of the formulation of extensional statements. The uncertainty exists in the thematic aspect, expressed by the likelihood of pixels belonging to thematic classes. Image segments can then be formed of adjacent pixels falling under the same class. If these segments represent spatial objects, then the uncertainty of the geometry of these objects is due to the fact that the value of the likelihood function varies per pixel.
Nowadays, concepts of fuzzy set theory are being applied to model the uncertainty in geometric aspects of mapping units (Usery, 1996; Brown, 1998). Most works propose approaches to describe and represent the spatial extent and boundaries of fuzzy objects due to uncertain classification of the mapping units. However, the inter-relationships between the various types of uncertainty are not described, although Gahegan & Ehlers (1997) proposed a framework for uncertainty transformation between thematic data and geographic features through remote sensing interpretation. In this paper we discuss the extensional and geometric uncertainty.
Moreover, literature to date hardly discusses the dynamics of objects, particularly spatial change, in a generic way. Even less literature is available about the dynamic behavior of fuzzy objects with indeterminate boundaries. The detection of the dynamics of fuzzy objects is the second point to be addressed in this paper. The paper will elaborate an example where the dynamics of sediments along the Dutch coast are monitored.
The paper is organized as follows. The next section discusses the relationship between existential uncertainty and extensional uncertainty, and presents an approach to identify the spatial extent of fuzzy objects. It is followed by a discussion of fuzzy spatial overlap in Section 3, in order to detect the state transitions of objects. Section 4 presents the identification of the dynamics of fuzzy objects by linking the state transitions. The last section of the paper summarizes the major findings and future research.
## 2 Fuzzy Spatial Extent and Fuzzy Boundary
This section discusses the inter-relationship between thematic and geometric aspects. The discussion follows the procedure of identifying objects from field observation data (Cheng et al., 1997) in order to trace the propagation of uncertainty. In this procedure data is converted from a low-level form (field samples) to a high-level form (distinct objects) through interpolation, classification and segmentation. Here we discuss the uncertainty transformation from classification to segmentation, i.e., from thematic data to geometric aspects of objects. Due to the vagueness of object class definitions and the errors in the field sampling points, each grid cell \(P_{i}\) generates, after classification, a vector of membership function values \(\{L[P_{i},C_{1}],L[P_{i},C_{2}],\ldots,L[P_{i},C_{N}]\}^{T}\). Here \(L[P_{i},C_{k}]\) represents the membership function value of grid cell \(P_{i}\) belonging to class \(C_{k}\), and \(N\) is the total number of classes. For each class \(C_{k}\), regions can be identified consisting of cells with \(L[P_{i},C_{k}]\geq Threshold_{k}\). Each region can then be interpreted as the fuzzy extent of a spatial object belonging to class \(C_{k}\). If the classes are assumed to be spatially exclusive, then each grid cell belongs to at most one class, and consequently to only one object; if the objects form a spatial partition, then each grid cell belongs to exactly one object. In other applications, fuzzy spatial overlaps among objects are permitted, i.e. the objects have fuzzy transition zones that may overlap (Burrough, 1996; Usery, 1996). In the transition zones, the cells might belong to multiple objects. The fuzzy topologic relationships of spatial objects are discussed in (Dijkmeijer & De Hoop, 1996) and (Zhan, 1997). However, we will not discuss this issue here, as in our case the objects form spatial partitions. So each grid cell belongs to exactly one class and one object, which can be determined by criteria defined as follows.
Let \\(NL[P_{i},C_{i}]\\)= 1- \\(L[P_{i},C_{i}]\\) represent no-membership, i.e., the certainty that \\(P_{i}\\) does not belong to class \\(C_{n}\\) and let \\(XL[P_{i},C_{i}]\\) express the membership that \\(P_{i}\\) belongs exclusively to \\(C_{n}\\) and not to any other classes \\(C_{i}\\) for any \\(j_{nk}\\). Because \\(XL[P_{i},C_{i}]\\) expresses that the grid cell belongs to class \\(C_{n}\\) and not to any other classes, it can be derived by applying minimum operations as
\[XL[P_{i},C_{k}]=MIN\left(L[P_{i},C_{k}],\ MIN_{j\neq k}\left(NL[P_{i},C_{j}]\right)\right). \tag{1}\]
As \\(P_{i}\\) can only belong to one class, it requires only one class for which the function \\(XL[]\\) has maximum value for \\(P_{i}\\). If there are more classes with the same maximum values then additional evidences are required to come to a selection of a unique class. It can be represented as
\\[\\textit{if }XL[P_{i},C_{k}]\\_{-MAX}\\,{}_{C_{1}}\\,(XL[P_{i},C_{i}]\\ \\ \\ (i=1 \\_,-N)\\ \\ \\text{then let }Q[P_{i},C_{i}]\\_{=}\\ 1, \\tag{2}\\] \\[\\text{otherwise }Q[P_{i},C_{i}]\\_{=}\\ 0.\\]
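A minimal NumPy sketch of Eqs. (1) and (2) is given below. It is our own illustration (the array layout and function names are assumptions, and ties are broken arbitrarily here, whereas the text requires additional evidence when several classes share the maximum).

```python
import numpy as np

def exclusive_membership(L):
    """L: (n_cells, N) array of membership values L[P_i, C_k]."""
    NL = 1.0 - L                                   # non-membership per class
    n_cells, n_classes = L.shape
    XL = np.empty_like(L)
    for k in range(n_classes):
        # Eq. (1): min of L[P_i, C_k] and the non-memberships of all j != k.
        others = np.delete(NL, k, axis=1).min(axis=1)
        XL[:, k] = np.minimum(L[:, k], others)
    # Eq. (2): assign each cell to the class maximizing XL.
    Q = np.zeros_like(L, dtype=int)
    Q[np.arange(n_cells), XL.argmax(axis=1)] = 1
    return XL, Q
```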
After assigning the cells to classes, an area \(S_{k}\) of class type \(C_{k}\) is formed under the following two conditions (Molenaar, 1996):

\[\textit{for all grid cells }P_{i}\in S_{k}:\ Q[P_{i},C_{k}]=1,\textit{ and}\] \[S_{k}\textit{ is connected, i.e. any two cells of }S_{k}\textit{ are linked through a chain of adjacent cells of }S_{k}.\]

## 3 Fuzzy Spatial Overlap

The state transitions of objects are detected by comparing their regions at successive epochs. We assume that the overlap between the regions representing the same object at two successive epochs is larger than their overlaps with the region of any other object. Under this assumption we can find the successor of a region at epoch \(t\) by calculating its spatial overlaps with all the regions that appear at epoch \(t+1\). The one that has maximum overlap will be identified as the successor.
The overlap of two regions \(S_{a}\) and \(S_{b}\) can be found through the intersection of their cells. It is a very simple raster-based operation.
\[Over[S_{a},S_{b}]=\sum_{P_{i}}MIN\left(L[P_{i},S_{a}],\ L[P_{i},S_{b}]\right)\]
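The following NumPy fragment sketches this operator together with the relative coverage values (RCV) used in the example below. The fuzzy-minimum form of \(Over[\cdot]\) and the normalization of RCV are our reading and should be treated as assumptions.

```python
import numpy as np

def overlap(L_a, L_b):
    """Fuzzy overlap of two regions given their membership rasters."""
    return np.minimum(L_a, L_b).sum()

def rcv(L_a, L_b):
    """Relative coverage: overlap normalized by the (fuzzy) area of region a."""
    return overlap(L_a, L_b) / L_a.sum()
```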
We also calculated the similarities between region 3 (as \(S_{a}\)) and region 7 (as \(S_{b}\)):

\(\textit{RCV}(S_{a}|S_{b})=27.5/644.3=0.043\)

\(\textit{RCV}(S_{b}|S_{a})=27.5/28.0=0.982\)

\(\textit{Similarity}(S_{a},S_{b})=2.05\)
Therefore, we can conclude that these two regions are not similar to each other, but that region 7 is largely contained in region 3. It can be identified as a new object appearing in 1990, split off from object 3 (region 3 represents its spatial extent in 1989). By analyzing the overlap between the regions of 1990 and 1991, we found that region 7 disappeared in 1991; it was merged into object 3 (region 10 in 1991).

## 4 Identification of Dynamics of Fuzzy Objects

Using the above approach, the objects and the processes involved in their development are identified, as illustrated by Figure 4. The icons represent the regions (states) of objects at different times. The symbols represent the types of state transition. It can be seen from the figure that object 4 split off from object 3 between 1989 and 1990; it was merged into object 3 again between 1990 and 1991.
Figure 4: Regions in different years.
## 5 Conclusions
This paper presented a method to identify fuzzy objects and their dynamics from field data sampled at different times. The methodology has been demonstrated by an empirical example in a coastal geomorphological study of Ameland. It will also be applicable to modeling natural environments and physical processes in other fields.
Our experiment reveals that the uncertainties in the field observation data and in the definition of object classes clearly influence the identification of the spatial extents of objects at different epochs. The geometric uncertainty of objects is therefore due to uncertainty in the thematic and semantic domains. This means that the extensional, existential and geometric aspects of objects all carry a degree of uncertainty, and that these are related to each other.
The dynamics of fuzzy objects are revealed through their spatial extents (the states of the objects) at different epochs. They are determined by comparing these spatial extents at successive epochs. At the same time, the processes through which these objects evolve are identified.
## References
* Cheng et al. (1997) Cheng, T., Molenaar, M. and Bouloucos, T., 1997, Identification of fuzzy objects from field observation data. _Spatial Information Theory: A Theoretical Basis for GIS_, (Lecture Notes in Computer Sciences, Vol. 1329), edited by S.C. Hirtle and A.U. Frank, (Berlin: Springer-Verlag), pp. 241-259.
* Galton (1997) Galton, A., 1997, Continuous change in spatial regions. _Spatial Information Theory: A Theoretical Basis for GIS_, (Lecture Notes in Computer Sciences, Vol. 1329), edited by S.C. Hirtle and A.U. Frank, (Berlin: Springer-Verlag), pp. 1-13.
* Molenaar (1996) Molenaar, M., 1996, A syntactic approach for handling the semantics of fuzzy spatial objects. _Geographic objects with indeterminate boundaries_, edited by P.A. Burrough and A.U. Frank, (London: Taylor & Francis), pp. 207-224.
* Zhan (1997) Zhan, F.B., 1997, Approximation of topological relationship between fuzzy regions satisfying a linguistically described query. _Spatial Information Theory: A Theoretical Basis for GIS_, (Lecture Notes in Computer Sciences, Vol. 1329), edited by S.C. Hirtle and A.U. Frank, (Berlin: Springer-Verlag), pp. 509-510.
Beyond that, this paper presents a method of detecting the dynamics of these fuzzy objects from time series. They are determined by comparing their spatial extents at successive epochs. Simultaneously, the processes through which objects evolve are identified and are represented by several types of state transition, such as shift, merge, and split of objects. The proposed method is applied in a coastal geomorphologic study of a barrier island in The Netherlands. | Give a concise overview of the text below. | 238 |
# Uncertainty Gated Network for Land Cover Segmentation
Guillem Pascual
University of Barcelona
[email protected]
Santi Segui
University of Barcelona
[email protected]
Jordi Vitria
University of Barcelona
[email protected]
## 1 Introduction
Land cover segmentation deals with the problem of multi-class semantic segmentation of remote sensing images. This problem, which consists of assigning a unique label (or class) to every pixel of an image, is particularly difficult due to (i) the high resolution of the images and diversity of size of the objects, (ii) the diversity of classes and, usually, the similarities among them, (iii) the noisy labeling and implicit rules such as not considering small/isolated areas and (iv) data domain: the model is usually trained with a set of images that highly differ from the target area where it is expected to generalize and perform predictions.
In the recent literature, most methods addressing these problems are based on deep learning. In [6] Long _et al_. popularized the use of fully convolutional networks for segmentation. Having no dense layers, this method can produce segmentation maps for images of any size. Based on this idea, and also trying to solve the exact alignment problem associated with pooling layers, several methods have been presented [2, 8, 12, 4]. U-Net [8] is a popular architecture defined as an encoder-decoder scheme, where the encoder stage gradually reduces the spatial dimension with pooling layers and the decoder stage gradually recovers the object details and spatial dimension to obtain the output segmentation map. In RefineNet [4], proposed by Lin _et al_., the ResNet architecture is used as the encoder, while the decoder consists of a set of RefineNet blocks which fuse high-resolution features from the encoder with low-resolution features from the previous RefineNet block.
In the domain of satellite images, several methods trying to solve this problem in high-resolution images have been presented [9, 10, 7, 5]. The most relevant publication for our work is Gated Convolutional Network (GCN) [10], where the segmentation is computed from the outputs of each block of a pre-trained ResNet, using entropy as a gate to fine-tune the prediction at each level.
In this paper, we propose a novel method that tackles the problem of land cover segmentation using the data and protocol proposed by the DeepGlobe Land Cover Classification Challenge at CVPRW [1]. Figure 1 shows two samples from the dataset and the predictions of our model. The proposed method is built over a GCN using a ResNet architecture and exploits the uncertainty of the predictions in each layer. The uncertainty measure, built on the basis of the publication by Alex Kendall and Yarin Gal [3], is used to define a set of memory gates between layers that allow for a principled method to select the optimal decision for each pixel.

Figure 1: Prediction examples from the model. Each row is a sample, the left column is the input image and the right column is the predicted segmentation map.
The remainder of this paper is organized as follows. In the next section we present the proposed method. In Section III, we present the training setup. In Section IV, we present the experimental results. Finally, Section IV concludes the paper with remarks on the proposed approach.
## 2 Method
Our model builds upon the GCN architecture proposed by Wang _et al_. in [10]. In that paper a new architecture was proposed to combine the feature maps learned at different blocks of a ResNet model by using memory gates instead of more classical operations such as summation or concatenation. The gating mechanism was based on the relationship between the information entropy of the feature maps and the label-error map, allowing for a better feature map integration.
To further develop the concept of gated convolutions, we consider the use of a more principled concept: assigning a credibility measure to each feature map. According to the Bayesian viewpoint proposed by Alex Kendall and Yarin Gal in [3], it is possible to characterize the concept of uncertainty into two categories. On the one hand, if the noise applies to the model parameters, we will refer to _epistemic uncertainty_. On the other hand, if the noise occurs directly in the observation, we will refer to it as _aleatoric uncertainty_. Additionally, aleatoric uncertainty can further be categorized into two more categories: _homoscedastic uncertainty_, when the noise is constant for all the outputs (thus acting as a \"measurement error\"), or _heteroscedastic uncertainty_ when the noise of the output also depends explicitly on the specific input.
We propose to use a measure of heteroscedastic uncertainty when classifying specific pixels as a gating mechanism. In this case, we have to measure the heteroscedastic uncertainty in a classification task, where the noise model is placed in the _logit_ space.
Let \\(\\sigma_{i}\\) and \\(l_{i}\\) be two predicted vectors of unaries of dimension \\(C\\), the number of classes, for every input pixel \\(x_{i}\\). The latter, \\(l_{i}\\), are the logits used to output a probability distribution by using a softmax, while \\(\\sigma_{i}\\) aims to bound its uncertainty. By taking \\(T\\) random samples of \\(l_{i}\\) perturbed by \\(\\sigma_{i}\\), we can derive a stochastic loss \\(\\mathcal{L}_{x}\\) that allows the computation of an uncertainty value \\(\\gamma_{i}\\) for each \\(x_{i}\\) input as follows:
\\[\\mathcal{L}_{x}=\\sum_{i}\\gamma_{i}\\]
\\[\\gamma_{i}=-\\log\\frac{1}{T}\\sum_{t}\\exp(\\hat{l}_{i,t,c}-\\log\\sum_{c^{\\prime}} \\exp\\hat{l}_{i,t,c^{\\prime}})\\]
\\[\\hat{l}_{i,t}\\sim\\mathcal{N}(l_{i},\\sigma_{i}),1\\leq t\\leq T\\]
where \\(\\hat{l}_{i,t,c^{\\prime}}\\) is the \\(t\\) sampled logit vector from class \\(c^{\\prime}\\), and \\(\\hat{l}_{i,t,c}\\) is the logit vector of the winner class for each pixel and sample.
Our architecture is illustrated in Figure 2. As can be seen, an uncertainty measure \(\gamma^{(j)}\) is computed after each of the ResNet blocks \(g_{j}\), \(0\leq j\leq 4\). The blocks \(g_{4}\) through \(g_{1}\) correspond to each of the original residual blocks, while \(g_{0}\) is composed of the first max-pooling and convolution.
The refinement process through uncertainty gates starts by setting \(\bar{b}_{4}=g_{4}\), with \(b_{j}\) denoting the upsampled version of \(\bar{b}_{j}\), matching the dimensions of \(g_{j-1}\). Then, for each \(j=4,\ldots,1\), an uncertainty and a segmentation are obtained as follows:
\[l^{(j)} =b^{(j)}\circledast_{C}\mathbf{w}_{\mathbf{1\times 1}}^{(\mathbf{j,1})}\] \[\sigma^{(j)} =b^{(j)}\circledast_{C}\mathbf{w}_{\mathbf{1\times 1}}^{(\mathbf{j,2})}\] \[\hat{l}_{i,t}^{(j)} \sim\mathcal{N}(l_{i}^{(j)},\sigma_{i}^{(j)})\] \[\gamma^{(j)} =-\log\frac{1}{T}\sum_{t}\exp(\hat{l}_{i,t,c}^{(j)}-\log\sum_{c^{\prime}}\exp\hat{l}_{i,t,c^{\prime}}^{(j)})\] \[\bar{b}_{j-1} =\gamma^{(j)}*g_{j-1}+b_{j}\]
Where \\(\\mathbin{\\raisebox{-1.29pt}{\\scalebox{0.8}{$\\,\\mathrm{\\char 37}$}}}_{ \\mathrm{C}}\\) is the convolution operator with a \\(1\\times 1\\) kernel and dimension \\(C\\), and \\(*\\) denotes the element-wise multiplication, but defined in such a way that gradient can only flow through the \\(\\bar{g_{j}}\\) operand during the backpropagation step. If gradient is allowed to flow through \\(\\gamma^{(j)}\\) in the backward pass, we can no longer talk about heteroscedastic uncertainty, as external factors aside from pure classification would condition them. Finally, \\(\\gamma^{(0)}\\) and \\(L^{(0)}\\) is computed in the same manner.
To compute the final segmentation, instead of taking new logits from the last block as GCN does, we propose a method that takes advantage of all the logits \(l^{(j)}\) and uncertainties \(\gamma^{(j)}\) calculated at each block. The final probabilities for each pixel and class are obtained with a \(\gamma\)-weighted sum of the probabilities at each intermediate step:

\[\frac{1}{C}\sum_{j=0}^{4}softmax(l^{(j)})*(1-\gamma^{(j)})\]
## 3 Training
To train the model, we first reduce the original resolution down to \\(1024\\times 1024\\), which simplifies the problem space while keeping enough details. The network is further trained by taking 8 random crops, each of \\(250\\times 250\\), out of each image. Each crop is then randomly rotated and flipped, and is further processed by adding gaussian noise and adjusting hue, contrast and brightness.
The model is trained by minimizing, at each level \(j\), both \(\mathcal{L}^{(j)}\) and a classification loss given by a softmax cross-entropy between the labels and sampled unaries from the logits. Overall, the loss is minimized with the WNAdam optimizer [11], using a standard piecewise learning rate decay for a total of 100 epochs.
## 4 Results
The data for the DeepGlobe Land Cover Classification Challenge consists of 1,146 satellite RGB images of size 2448x2448 pixels, split into training/validation/test with 803/171/172 images, respectively. Each satellite image is paired with a class-labeled image using the following 7 categories: 1) Urban land; 2) Agriculture land; 3) Rangeland; 4) Forest land; 5) Water (rivers, oceans, lakes, wetland, ponds); 6) Barren land (mountain, rock, desert, beach, no vegetation); and 7) Unknown (clouds and other artifacts).
The pixel-wise mean Intersection over Union (mIoU) score, calculated by averaging the IoU over all classes, is used as evaluation metric. The IoU is defined as: True Positive / (True Positive + False Positive + False Negative). The _unknown_ class is not an active class used in evaluation.
The final model uses, as discussed in the previous section, the ResNet with 18 layers and is trained for 100 epochs. Figure 3 shows the prediction process. Segmentations at each level are generated for visualization and interpretation and further combined to obtain the final result. Deeper levels are more general and cannot accurately predict each pixel, which can be attributed both to the down-sampling process and to the abstraction introduced by the convolutions. Upper levels therefore refine the result and are richer in detail. In particular, it can be seen that pixels where the output does not match the ground truth receive a high uncertainty. Averaging across all levels improves the result by reducing artifacts and producing smoother segmentation maps. The model runs inference in real time, taking only 250 ms to produce a segmentation at full \(2448\times 2448\) resolution on an NVIDIA Titan X. This architecture achieves a mIoU score of 0.485 on the final test set of the challenge.
## 5 Conclusions
In this paper, an uncertainty gated convolutional neural network has been proposed for land-cover semantic segmentation. The proposed method leverages the computation of a heteroscedastic measure of uncertainty when classifying individual pixels in an image. This classification uncertainty measure is used to define a set of memory gates between layers that allow for a principled method to select the optimal decision for each pixel. The result reported on the DeepGlobe Land Cover Classification Challenge is 0.485 mIoU on the final test set. Future work will address the domain adaptation problem, since we have observed some inconsistencies attributable to this issue.
## Acknowledgements
This work was partially funded by MINECO Grant TIN2015-66951-C2 and by an FPU grant (Formación de Profesorado Universitario) from the Spanish Ministry of Education, Culture and Sport (MECD) to Guillem Pascual (FPU16/06843). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
## References
* [1] I. Demir, K. Koperski, D. Lindenbaum, G. Pang, J. Huang, S. Basu, F. Hughes, D. Tuia, and R. Raskar. Deepglobe 2018: A challenge to parse the earth through satellite images. _arXiv preprint arXiv:1805.06561_, 2018.
* [2] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In _2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 1175-1183, July 2017.
* [3] A. Kendall and Y. Gal. What uncertainties do we need in bayesian deep learning for computer vision? In _Advances in
Figure 2: Uncertainty gated convolutional neural network. Black arrows represent weighted connections between different layers. Green arrows represent forward-only weighted connections, where gradient flows in the backpropagation process are not allowed.
Neural Information Processing Systems_, pages 5580-5590, 2017.
* [4] G. Lin, A. Milan, C. Shen, and I. Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017.
* [5] Y. Liu, D. Minh Nguyen, N. Deligiannis, W. Ding, and A. Munteanu. Hourglass-shapenetwork based semantic segmentation for high resolution aerial imagery. _Remote Sensing_, 9(6), 2017.
* [6] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3431-3440, 2015.
* [7] K. Nogueira, M. Dalla Mura, J. Chanussot, W. Robson Schwartz, and J. A. dos Santos. Dynamic Multi-Scale Semantic Segmentation based on Dilated Convolutional Networks. _ArXiv e-prints_, Apr. 2018.
* [8] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015_, pages 234-241, Cham, 2015. Springer International Publishing.
* [9] J. Sherrah. Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery. _CoRR_, abs/1606.02585, 2016.
* [10] H. Wang, Y. Wang, Q. Zhang, S. Xiang, and C. Pan. Gated convolutional neural network for semantic segmentation in high-resolution images. _Remote Sensing_, 9(5), 2017.
* [11] X. Wu, R. Ward, and L. Bottou. Wngrad: Learn the learning rate in gradient descent. _arXiv preprint arXiv:1803.02865_, 2018.
* [12] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In _Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017.
Figure 3: Prediction process of an image. All \(b_{j}\) are intermediate segmentation outputs, resulting from a \(softmax(l^{(j)})\), and upsampled to target resolution. Low uncertainty is represented by blue; yellow indicates high uncertainty.
# Novel Batch Active Learning Approach and Its Application to Synthetic Aperture Radar Datasets
James Chapman
Bohan Chen
Zheng Tan
Jeff Calder
Kevin Miller
Andrea L. Bertozzi
University of California, Los Angeles, Department of Mathematics, 520 Portola Plaza, Los Angeles, CA 90095, USA University of Minnesota, School of Mathematics, 538 Vincent Hall, 206 Church Street SE, Minneapolis, MN 55455, USA
## 2 Introduction
Synthetic Aperture Radar (SAR) is a valuable tool in Automatic Target Recognition (ATR). SAR imaging uses a moving device, which repeatedly transmits and receives radio signals, to simulate a large radar dish and achieve high resolution images. Multiple standard SAR data sets have been established as benchmarks within the SAR community. For example, MSTAR contains SAR images of land-based vehicles [2], whereas OpenSARShip and FUSAR-Ship contain SAR images of different types of ships at sea [3, 4]. The main problem here is to identify objects from SAR images. Although the images are of high resolution, they are difficult to classify by human inspection, often requiring significant time from domain experts. This motivates the use of machine learning to improve speed and accuracy for object classification in SAR images.
Due to its recent success in SAR classification and its high data efficiency, we propose to use semi-supervised learning (SSL) for SAR classification [1]. Algorithms in this domain exploit the geometric structure of the entire dataset in addition to the small amount of label information. In particular, we use graph-based Laplace learning as the underlying classifier. In order to exploit the geometry of the SAR data, the methods must first obtain useful features from the
While SSL methods perform relatively well with few labeled data, their performance drastically improves when combined with active learning [8]. Active learning supports the machine learning process by judiciously selecting a small number of unlabeled datapoints to query for labels, with the aim of maximally improving the underlying classifier's performance [9]. Active learning has been shown to significantly improve classifier performance at very low label rates and minimize the cost of labeling data by domain experts [1, 10, 11]. Central to active learning is the development of an acquisition function. This function quantifies the benefit of acquiring the label of each datapoint in the candidate set \\(\\mathcal{C}\\) (a subset of the unlabeled datapoints). Based on the acquisition function, the active learning process selects a query set \\(\\mathcal{Q}\\subset\\mathcal{C}\\) to label. Sequential active learning selects the maximizer of the acquisition function
\\[k^{*}=\\operatorname*{arg\\,max}_{k\\in\\mathcal{C}}\\,\\mathcal{A}(k),\\]
where \"sequential\" refers to the case in which the query set has size one (i.e. \\(|\\mathcal{Q}|=1\\)). Batch active learning was developed to select a non-singleton query set in each step of the active learning process (i.e. \\(|\\mathcal{Q}|>1\\)). Batch active learning is more challenging than the sequential case as datapoints which maximize sequential acquisition functions usually contain similar information. Merely mimicking the sequential scheme to choose the top \\(|\\mathcal{Q}|\\) maximizers of the acquisition function \\(\\mathcal{A}(k)\\) is usually not optimal, since these maximizers are often close in the embedding space. Batch active learning methods remedy this issue by enforcing diversity in the query set, either by adding restrictions to sequential active learning or by designing objectives specific to batch active learning [12, 13, 14, 15, 16, 17]. We opt for the former strategy and build on sequential acquisition functions. Later experiments compare our method, LocalMax, to other active learning methods.
Recent works in ATR fall under two main approaches: supervised learning and SSL. Deep learning has enabled many recent advances as it can obtain useful image features. Zhang, et al. developed a supervised deep learning model called Hog-ShipCLSNet to classify the OpenSAR Ship and FUSAR-Ship data sets [18]. Other notable supervised learning methods use transfer learning from simulated SAR datasets [6, 7]. Simulating SAR data reduces the labeling burden on experts, but presents serious challenges due to distribution shift in the datasets [19]. A notable SSL contribution was made in Miller, et al. where the authors applied a CNNVAE and graph-based sequential active learning to classify images in MSTAR [1]. This result achieved state of the art performance on the MSTAR dataset.
In this paper, we improve the feature embedding process on OpenSARShip and FUSAR-Ship using transfer learning from neural networks pre-trained on ImageNet [20]. We also present a novel core-set construction method, DAC, and batch active learning procedure, LocalMax, to decrease the human labeling time by an order of magnitude. This method is compared with other active learning methods, including acquisition-weighted sampling, sequential active learning,random sampling and naive batch sampling. The combination of transfer learning and batch active learning beats the current state of the art methods on OpenSARShip and FUSAR-Ship, while simultaneously using less training data than state of the art methods [18]. Results are also compared on the MSTAR dataset as a standard benchmark.
## 3 Math Background
This section reviews some mathematical tools used in our pipeline, including CNNVAE, transfer learning, \\(K\\)-nearest neighbor similarity graph construction, graph Laplace learning and active learning. The pipeline starts by embedding the data using either a CNNVAE or transfer learning. The next step in the pipeline constructs a similarity graph based on the data embedding. Following this, the DAC algorithm computes a core-set. Finally, the active learning loop cycles through: fitting the model with Laplace learning, selecting new points for labeling with LocalMax and querying the human for labels. The full pipeline is shown in Figure 3.
### Data Embeddings
As mentioned in Section 2, our semi-supervised learning methods depend on a good representation of data, where distances between data measure similarity between those data. CNN architectures provide a good way to process image data. We utilize CNNVAEs from previous work [1] and use transfer learning from pre-trained PyTorch CNN neural networks to process the SAR datasets. A convolutional layer near the end of the neural network is designated as the _feature layer_ since it captures complex features of the dataset. The outputs of the feature layer are called _feature vectors_; these are used in our subsequent graph construction where graph-based learning will take place. The transfer learning case is shown in Figure 2, where the orange layers denote feature layers.
Transfer learning works by reusing a neural network trained on a similar dataset and task for a new task. One strategy is to immediately use the neural network parameters from a pretrained neural network. Another strategy is to fine tune the parameters of the original neural network with respect to the new dataset. We denote these methods by _zero-shot_ transfer learning and _fine-tuned_ transfer learning, respectively. Fine-tuned transfer learning is shown with greater detail in Figure 2. Alternatively, CNNVAEs work by first encoding and then decoding the dataset. The neural network architecture is designed so that the encoding is into a lower dimensional space. This forces the neural network to learn useful image features so that it can reconstruct the original image. In this method, the feature layer is chosen as the last CNN layer in the encoder portion of the CNNVAE.
### Graph Construction
Figure 1: Three samples of SAR images. From left to right, the images show vehicles from MSTAR [2], OpenSARShip [3] and FUSAR-Ship [4].

As mentioned in Section 2, our methods operate on a graph constructed from the data. Consider the set of \(d\)-dimensional feature vectors \(X=\{x_{1},x_{2},\ldots,x_{N}\}\subset\mathbb{R}^{d}\). We build a graph \(G=(X,W)\), where \(X\) is the set of vertices and \(W\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix. The entry \(W_{ij}\) denotes the weight on the edge between vertices \(x_{i}\) and \(x_{j}\), \(i\neq j\), which quantifies the similarity between the feature vectors \(x_{i}\) and \(x_{j}\). In general, the weight matrix is defined by
\\[W_{ij}=f\\left(\\frac{d(x_{i},x_{j})^{2}}{\\sigma_{i}\\sigma_{j}}\\right), \\tag{1}\\]
where \\(d(x_{i},x_{j})\\) is the distance between feature vectors \\(x_{i}\\) and \\(x_{j}\\), \\(f(\\cdot)\\) is the kernel function and \\(\\sigma_{i},\\sigma_{j}\\) are normalization constants corresponding to \\(x_{i},x_{j}\\), respectively. The choice of the distance function \\(d\\) in this paper is the angular distance, i.e.
\\[d(x,y)=\\arccos\\left(\\frac{x^{\\top}y}{\\|x\\|_{2}\\|y\\|_{2}}\\right). \\tag{2}\\]
The normalization constant \\(\\sigma_{i}\\) associated to node \\(i\\) is chosen according to the distance to the \\(K^{th}\\) nearest neighbor of \\(i\\), i.e., \\(\\sigma_{i}=\\sqrt{d(x_{i},x_{i_{K}})}\\), where \\(x_{i_{K}}\\) is the \\(K^{th}\\) nearest neighbor to \\(x_{i}\\). The kernel function \\(f(\\cdot)\\) is the Gaussian (exponential) kernel \\(f(x)=\\exp(-x)\\).
In order to improve computational efficiency, we only calculate the weight \\(W_{ij}\\) between nearby pairs of nodes \\(x_{i},x_{j}\\). For each node \\(x_{i}\\), we calculate edge weight \\(W_{ij}\\) between \\(x_{i}\\) and \\(x_{j}\\) if and only if \\(x_{j}\\) is among the \\(K\\)-nearest neighbors (KNN) of \\(x_{i}\\) in the sense of the angular distance [2]. We use an approximate nearest neighbor search algorithm [21] to find the KNN of each node. This results in a KNN weight matrix \\(\\hat{W}\\) defined by
\\[\\hat{W}_{ij}=\\left\\{\\begin{array}{rl}&W_{ij},\\ j=i_{1},i_{2},\\ldots,i_{K},\\\\ &0,\\ \\text{otherwise}.\\end{array}\\right. \\tag{3}\\]
Figure 2: Flowchart for fine-tuned transfer learning. The parameters in the convolutional layers (contained in dotted boxes) of the pretrained CNN are transferred to the new CNN. In fine-tuned transfer learning, all the transferred parameters are trained for a few iterations on the new dataset. The training occurs by first adding new fully connected layers at the end of the neural network and performing supervised learning with the new dataset. When training is complete, only the layers in the dotted boxes are kept for the embedding process. The orange layer denotes the feature layer, and the outputs of this layer provide the feature vectors used later in the pipeline.

The parameter \(K\) should be chosen to ensure that the corresponding graph \(G\) is connected. We symmetrize the sparse weight matrix to obtain our final weight matrix by redefining \(W_{ij}:=(\hat{W}_{ij}+\hat{W}_{ji})/2\). Note that \(W\) is sparse, symmetric and non-negative (i.e. \(W_{ij}\geq 0\)). In the experiments in this paper, we choose the parameter \(K=50\) for the \(K\)-nearest neighbor search algorithm.
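The construction above can be sketched in a few lines of Python. This is a minimal illustration that substitutes scikit-learn's exact nearest-neighbor search for the approximate search of [21]; since arccos is monotone, nearest neighbors under cosine distance coincide with those under the angular distance of Eq. (2).

```
# Minimal sketch of the KNN graph construction described above.
import numpy as np
from scipy import sparse
from sklearn.neighbors import NearestNeighbors

def build_graph(X, K=50):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    nbrs = NearestNeighbors(n_neighbors=K + 1, metric="cosine").fit(Xn)
    dist, idx = nbrs.kneighbors(Xn)       # first neighbor is the point itself
    dist, idx = dist[:, 1:], idx[:, 1:]   # drop self-neighbors
    # Angular distance of Eq. (2): arccos of cosine similarity.
    ang = np.arccos(np.clip(1.0 - dist, -1.0, 1.0))
    sigma = np.sqrt(ang[:, -1])           # sigma_i = sqrt(d(x_i, x_{i_K}))
    N = X.shape[0]
    rows = np.repeat(np.arange(N), K)
    cols = idx.ravel()
    # Gaussian kernel of Eq. (1): W_ij = exp(-d^2 / (sigma_i sigma_j)).
    vals = np.exp(-(ang.ravel() ** 2) / (sigma[rows] * sigma[cols]))
    W = sparse.csr_matrix((vals, (rows, cols)), shape=(N, N))
    return (W + W.T) / 2.0                # symmetrize as in the text
```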
### Graph Learning
With a graph \\(G=(X,W)\\) constructed as described above, we now describe a graph-based approach for semi-supervised learning and present previous work in this field. We denote the ground truth one-hot encoding mapping by \\(\\mathbf{y}^{\\dagger}:X_{L}\\rightarrow\\{e_{1},e_{2},\\ldots,e_{n_{c}}\\}\\), \\(\\mathbf{y}^{\\dagger}(x_{j})=e_{y_{j}^{\\dagger}}\\), where \\(y_{j}^{\\dagger}\\in\\{1,2,\\ldots,n_{c}\\}\\) is the ground-truth label of the node \\(x_{j}\\) and \\(e_{k}\\) is the \\(k^{\\text{th}}\\) standard basis vector with all zeros except a \\(1\\) at the \\(k^{\\text{th}}\\) entry. We have information on the ground truth labels for the subset \\(X_{L}\\subset X\\). The goal for the SSL task is to predict the labels of the _unlabeled_ nodes \\(x_{i}\\in X\\setminus X_{L}\\). We are interested in solving for a classifier \\(\\hat{\\mathbf{u}}:X\\rightarrow\\mathbb{R}^{n_{c}}\\) that reflects the probability of belonging to a certain class. In other words, \\(\\hat{\\mathbf{u}}(x_{i})\\) is a vector whose \\(j^{\\text{th}}\\) entry reflects the probability that \\(x_{i}\\) belongs to class \\(j\\). The graph-based SSL model that we consider obtains a classifier by identifying \\(\\hat{\\mathbf{u}}\\) with its matrix representation \\(\\hat{U}_{i,j}=\\hat{\\mathbf{u}}(x_{i})_{j}\\) and solving the following optimization problem
\[\hat{U}=\operatorname*{arg\,min}_{U\in\mathbb{R}^{N\times n_{c}}}J_{F}(U,\mathbf{y}^{\dagger})=\operatorname*{arg\,min}_{U\in\mathbb{R}^{N\times n_{c}}}\frac{1}{2}\langle U,LU\rangle_{F}+\sum_{x_{j}\in X_{L}}\ell(\mathbf{u}(x_{j}),\mathbf{y}^{\dagger}(x_{j})), \tag{4}\]
where \(\langle\cdot,\cdot\rangle_{F}\) is the Frobenius inner product for matrices. In equation (4), \(L=D-W\) is the unnormalized graph Laplacian matrix [22], where \(D\) is the diagonal matrix with diagonal entries \(D_{j,j}=\sum_{k=1}^{N}W_{jk}\) for \(1\leq j\leq N\).
The first term of (4) optimizes over the smoothness of the classifier, whereas the second term imposes a penalty which ensures that the output of \\(\\mathbf{u}\\) at the labeled nodes stays close to the observed labelings \\(\\mathbf{y}^{\\dagger}\\). The loss function \\(\\ell:\\mathbb{R}^{n_{c}}\\times\\mathbb{R}^{n_{c}}\\rightarrow\\mathbb{R}\\) measures the difference between the prediction \\(\\mathbf{u}(x_{j})\\) and the ground-truth \\(\\mathbf{y}^{\\dagger}(x_{j})\\) for \\(x_{j}\\in X_{L}\\). While there are several choices for the loss function, we simply apply a hard-constraint penalty
\[\ell_{h}(x,y)=\begin{cases}+\infty,&\text{if }x\neq y,\\ 0,&\text{if }x=y.\end{cases} \tag{5}\]
This hard-constraint penalty function \\(\\ell_{h}\\) forces the minimizer \\(\\hat{U}\\) to be exactly the same as the ground-truth \\(\\mathbf{y}^{\\dagger}\\) on the observation set \\(X_{L}\\). We refer to this SSL scheme as _Laplace learning_[23]. The predicted label of an unlabeled node \\(x_{i}\\in X\\setminus X_{L}\\) of this graph SSL model is given by
\[\hat{y}_{i}=\operatorname*{arg\,max}_{k\in\{1,2,\ldots,n_{c}\}}\hat{\mathbf{u}}(x_{i})_{k}.\]
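With the hard constraint (5), the minimizer of (4) agrees with \(\mathbf{y}^{\dagger}\) on \(X_{L}\) and is harmonic on the unlabeled nodes, so it can be computed by solving a sparse linear system in the unlabeled block of \(L\). A minimal sketch, assuming the graph is connected so the system is nonsingular:

```
# Laplace learning sketch: solve L_uu U_u = -L_ul Y_l for the unlabeled block.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def laplace_learning(W, labeled_idx, labels, n_classes):
    N = W.shape[0]
    D = sparse.diags(np.asarray(W.sum(axis=1)).ravel())
    L = (D - W).tocsr()                    # unnormalized graph Laplacian
    unlabeled_idx = np.setdiff1d(np.arange(N), labeled_idx)
    Y = np.eye(n_classes)[labels]          # one-hot encoding of the labels
    L_uu = L[unlabeled_idx][:, unlabeled_idx]
    L_ul = L[unlabeled_idx][:, labeled_idx]
    U = np.zeros((N, n_classes))
    U[labeled_idx] = Y
    U[unlabeled_idx] = splu(L_uu.tocsc()).solve(-(L_ul @ Y))
    return U                               # predicted label: argmax over each row
```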
### Active Learning
In domains with high labeling costs (i.e., time and/or money), the accuracy of a machine learning classifier is often constrained by the size of the training set. Active learning was developed to address this issue by providing machine learning algorithms with a way to quantify the value of obtaining labels for data. With a good determination of the labeling value, the machine learning algorithm seeks to maximize the amount of information it gets from each piece of data. This enables models trained using active learning to attain near-optimal performance with a relatively small training/labeled set [9].
Active learning is an iterative process whereby the model is continually improved through acquiring more labeled data. In each step, given a candidate set \(\mathcal{C}\), active learning aims to select a query set \(\mathcal{Q}\subset\mathcal{C}\), considered "most helpful" by the model, to obtain labels for. Usually, the candidate set \(\mathcal{C}\) is the set of currently unlabeled nodes at each step. At the core of the active learning process is the acquisition function \(\mathcal{A}:\mathcal{C}\rightarrow\mathbb{R}\), which quantifies the value of acquiring a label for each unlabeled node in the dataset. We consider four choices for the acquisition function, namely, uncertainty (UC) [24, 25, 26], Model-Change (MC) [11, 25], Variance Optimality (VOpt) [12] and Model-Change Variance Optimality (MCVOpt) [1].
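For concreteness, a smallest-margin variant of the uncertainty acquisition is sketched below; the exact UC form used in [24, 25, 26] may differ, but the principle is the same: nodes whose top two class scores are closest are the most uncertain.

```
# Smallest-margin uncertainty acquisition sketch (illustrative variant).
import numpy as np

def uncertainty_acquisition(U, candidate_idx):
    scores = np.sort(U[candidate_idx], axis=1)
    margin = scores[:, -1] - scores[:, -2]   # gap between top two classes
    return 1.0 - margin                      # larger value = more uncertain
```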
To recap, assume we want to classify a feature vector set \(X=\{x_{1},x_{2},\ldots,x_{N}\}\subset\mathbb{R}^{d}\) through graph Laplace learning and active learning. The active learning process (illustrated in Figure 3) starts with an initial labeled set \(Y_{0}\). In the \(k^{\text{th}}\) iteration, denote the current labeled set by \(Y_{k}\) and the current candidate set by \(C_{k}=X\setminus Y_{k}\). We evaluate the acquisition function \(\mathcal{A}_{k}\) on the candidate set \(C_{k}\) and then select the query set \(\mathcal{Q}_{k}\subset C_{k}\). An oracle then labels the data in the query set \(\mathcal{Q}_{k}\). Lastly, we update \(Y_{k+1}=Y_{k}\cup\mathcal{Q}_{k}\) and \(C_{k+1}=X\setminus Y_{k+1}\) before repeating the process. In practice, the oracle is a human who can provide ground-truth labels for datapoints in the query set.
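Putting the pieces together, the loop can be sketched as follows, reusing `laplace_learning` and `uncertainty_acquisition` from the sketches above. The `select` argument stands for any batch selection rule (e.g., the LocalMax method of Section 4.2) and `oracle` stands for the human labeler; both names are placeholders.

```
# One sketch of the active learning loop of Section 3.4 (batch variant).
import numpy as np

def active_learning(W, Y0, oracle, n_classes, n_iters, select):
    labeled = list(Y0)
    labels = [oracle(i) for i in labeled]
    U = None
    for _ in range(n_iters):
        U = laplace_learning(W, np.array(labeled), np.array(labels), n_classes)
        candidates = np.setdiff1d(np.arange(W.shape[0]), labeled)
        acq = uncertainty_acquisition(U, candidates)
        query = select(W, candidates, acq)   # pick the next query set Q_k
        labeled += list(query)
        labels += [oracle(i) for i in query] # human labeling step
    return U, labeled
```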
## 4 Core-Set Selection and Batch Active Learning
As introduced in Section 3.4, active learning is an iterative process which selects a query set \(\mathcal{Q}\) to obtain labels in each iteration. Sequential active learning selects one datapoint per query (the case \(|\mathcal{Q}|=1\)), while batch active learning selects multiple datapoints as the query set (the case \(|\mathcal{Q}|>1\)). Typically, human labeling time presents the largest bottleneck in the process. In sequential active learning, the human labeling process is limited by only being able to label one point at a time. It is advantageous for the active learning procedure to return a batch of points so that multiple humans/teams can work in parallel to label the data. Consider the example of labeling 200 points with a team of 10 people. In the sequential case \(|\mathcal{Q}|=1\), labeling requires 200 human queries. In the batch case with \(|\mathcal{Q}|=10\), only 20 human queries are required, and the humans can label the data in parallel. This reduces the total labeling time by an order of magnitude as compared to the sequential case.

Figure 3: Flowchart of our batch active learning pipeline. First, images are embedded as feature vectors using either transfer learning or a CNNVAE. These feature vectors are then processed to create a weighted graph based on approximate \(k\) nearest neighbors. Following this, the core-set is computed using DAC. Lastly, the graph is passed to the active learning loop. The active learning loop includes the classification of unlabeled nodes via Laplace learning, the evaluation of an acquisition function based on the classification and the selection of a query set according to our LocalMax method. After a batch is determined, the active learning loop queries a human to manually label the selected query set in each iteration. The solid red and blue nodes denote classifications of the data, while dark grey nodes denote unlabeled data. The SAR image shown above is from FUSAR-Ship [4].
Many methods rely on the current predictions of the model to evaluate the uncertainty, variance, or other criteria that quantify the information gained by labeling a new data point. These methods rely on a key assumption: the model has enough information to determine what data would help it learn best. This leads to two requirements: constructing a good set of initial labels, called a _core-set_, for the early stages of active learning, and quantifying the information shared by unlabeled points in a batch. A good core-set is important so that the model can make good estimates of the acquisition function early in training, when model accuracy is relatively low. The core-set construction can have a long-term impact on the performance of the active learning procedure, and it is critical that the method adequately explore the data before exploiting knowledge with active learning. Fortunately, graph-based active learning methods perform well with relatively small amounts of labeled data [1].
The main problem in transitioning from the sequential case to the batch case is that sequential active learning methods fail to capture shared information between unlabeled data points. Applying batch active learning naively often leads to batches of very similar points. For example, the \(k\) points with the highest acquisition values tend to be close to one another in the embedding space. This creates a great amount of redundancy in the batch: the model learns no faster than in the sequential case, despite more labeling being done at each step. Therefore, batch active learning methods must encourage a diverse selection of points with high acquisition values. Proper implementation of batch active learning must also take into account the time needed to acquire good batches. Optimizing \(\mathcal{Q}\) with \(|\mathcal{Q}|=B\) over all subsets of the unlabeled data is a combinatorial problem with complexity \(O(N^{B})\). This time complexity would drastically limit the size of the batches that could be produced, so heuristics should be used to efficiently maximize the acquisition value of batches.
### Core-Set Selection
The goal of core-set selection is to sufficiently explore the data so that the active learning procedure can perform well. We propose Algorithm 1 for core-set selection, as it generates a core-set which is nearly uniform in the dataset. For a given feature vector set \(X\), we construct a graph \(G=(X,W)\) according to Section 3.2. The algorithm iteratively selects nodes in \(X\) to construct a core-set such that all selected points are at distance at least \(r\) from each other but no more than distance \(R\) from another selected point. At each step of the core-set selection process, with \(Y\) denoting the currently selected node set (i.e., the current labeled set), the algorithm maintains an _annular set_ \(C\) and a _seen set_ \(S\):
\[C=\Big(\bigcup_{x\in Y}B_{R}(x)\Big)\setminus\bigcup_{x\in Y}B_{r}(x),\qquad S=\bigcup_{x\in Y}B_{r}(x),\]
where
\\[B_{r}(x)=\\{y\\in V(G):\\ d_{G}(x,y)<r\\}\\]
and \(d_{G}(x,y)\) is the distance from \(x\) to \(y\) computed using Dijkstra's algorithm [27]. When the annular set is empty, the algorithm picks an unseen point uniformly at random, which allows it to reach disjoint clusters in the data. The output of Algorithm 1 is the core-set \(Y\), which serves as the initial labeled set in the active learning process. An example of the algorithm on a simple dataset is shown in Figure 4.
```
Given: Graph G = (X, W), initial labeled set Y, inner radius r and outer radius R
Initialize candidate set C = ∅ and seen set S = ∅
for each x ∈ Y do                  ▷ Compute candidate set from already labeled set
    Compute B_r(x), B_R(x)
    S ← S ∪ B_r(x)
    C ← (C ∪ B_R(x)) \ B_r(x)
end for
while S ≠ X do
    if C = ∅ then
        pick x ∈ X \ S uniformly at random
    else
        pick x ∈ C uniformly at random
    end if
    Compute B_r(x), B_R(x)
    Y ← Y ∪ {x}
    S ← S ∪ B_r(x)
    C ← (C ∪ B_R(x)) \ B_r(x)
end while
Return Y
```
**Algorithm 1** Dijkstra's Annulus Core-Set (DAC)
Note that this algorithm can also be used with \(r,R\) adaptively determined by the density of the data around a point. For example, the user could specify that \(R\) be picked so that \(5\%\) of the data lies in \(B_{R}(x)\). Using the density radius leads to greater exploration in high-density regions and less exploration in low-density regions, focusing the core-set on where the majority of the data lies. Additionally, the density-based covering is independent of the average distances between data points, which reduces the need for parameter tuning. We further reduce parameter tuning by setting \(r=R/2\).
Figure 4: An example of the sampling process of the DAC algorithm with an outer density radius of 0.3. The dataset is generated by sampling uniformly at random in the unit square. The blue, black and gold points denote the unseen points, the seen points and the points in the annular set, respectively. In iteration 0, the annular set is empty and the unseen set isn’t empty. This means the algorithm picks a point at random from the unseen points to add to the core-set. In subsequent iterations, the algorithm picks a point at random from the annular set. It then updates the annular region as described in Algorithm 1. This process terminates at iteration 14 when the entire dataset is the seen set. The set of red points in panel (e) is the output DAC core-set, which is nearly uniformly distributed in the whole dataset.
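A compact implementation sketch of DAC is given below, using SciPy's Dijkstra routine to compute the geodesic balls. Treating the angular distances as edge lengths, and keeping the candidate set disjoint from the whole seen set, are implementation assumptions consistent with, but not spelled out in, Algorithm 1.

```
# DAC sketch: E is a sparse matrix of nonnegative edge lengths on the KNN graph.
import numpy as np
from scipy.sparse.csgraph import dijkstra

def dac_coreset(E, r, R, rng=np.random.default_rng(0)):
    N = E.shape[0]
    core = []
    seen = np.zeros(N, dtype=bool)
    annulus = np.zeros(N, dtype=bool)
    while not seen.all():
        if annulus.any():
            x = rng.choice(np.flatnonzero(annulus))
        else:
            x = rng.choice(np.flatnonzero(~seen))  # jump to an unexplored cluster
        d = dijkstra(E, indices=x)                 # geodesic distances from x
        core.append(int(x))
        seen |= d < r                              # S <- S ∪ B_r(x)
        annulus |= (d >= r) & (d < R)              # C <- C ∪ (B_R(x) \ B_r(x))
        annulus &= ~seen                           # keep C disjoint from S
    return core
```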
### LocalMax
We propose a novel batch active learning approach, named LocalMax. Given a feature vector set \(X\) and the corresponding similarity graph \(G\), LocalMax selects from the candidate set a query set of nodes that satisfy the local maximum condition (Definition 4.1) on \(G\). Informally, a node is a local maximum of a function on the nodes if and only if its function value is at least that of each of its neighbors.
**Definition 4.1** (Local Max of a Graph Node Function): _Consider a KNN-generated graph \\(G=(X,W)\\), where \\(X\\) is the set of nodes and \\(W\\) is the edge weight matrix. For a graph node function \\(\\mathcal{A}:X\\rightarrow\\mathbb{R}\\), \\(x_{i}\\in X\\) is a **local maximum node** if and only if for any \\(x_{j}\\) adjacent to \\(x_{i}\\), \\(\\mathcal{A}(x_{i})\\geq\\mathcal{A}(x_{j})\\). Equivalently, \\(x_{i}\\in X\\) is a local maximum if and only if:_
\\[\\mathcal{A}(x_{i})\\geq\\mathcal{A}(x_{j}),\\;\\forall j\\;\\text{s.t.}\\;W_{ij}>0. \\tag{6}\\]
Assuming a batch size \(B\), at iteration \(k\) LocalMax selects the query set \(\mathcal{Q}_{k}\) as the top-\(B\) local maxima of the acquisition function in the candidate set \(\mathcal{C}_{k}\) (as detailed in Algorithm 2). LocalMax benefits from many useful properties, including simplicity, efficiency and its grounding in well-studied sequential acquisition functions. Building on sequential active learning allows us to inherit properties of the sequential acquisition functions, while restricting to local maxima enforces a minimum pairwise distance between points in the query set, counteracting the redundancy seen when naively optimizing sequential acquisition functions.
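A sketch of the selection step follows. It assumes acquisition values are available on all nodes (e.g., set to \(-\infty\) on labeled nodes) and that \(W\) is stored in CSR format; the loop visits each candidate's \(K\) neighbors once.

```
# LocalMax sketch: top-B local maxima of the acquisition function (Def. 4.1).
import numpy as np

def local_max(W, candidates, acq_full, B):
    """W: sparse symmetric CSR weight matrix; acq_full: values on all nodes."""
    picks = []
    for i in candidates:
        nbr = W.indices[W.indptr[i]:W.indptr[i + 1]]  # neighbors of node i
        if (acq_full[i] >= acq_full[nbr]).all():
            picks.append(i)
    picks = np.array(picks, dtype=int)
    return picks[np.argsort(-acq_full[picks])][:B]    # top-B local maxima
```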
Figure 5: An example of DAC and LocalMax on the checkerboard dataset. In all panels, red points denote the labeled core-set generated by DAC. Panel (a) shows the ground truth classification. Panel (b) shows the classification results of Laplace learning based on the labeled core-set. Panel (c) shows the heatmap of the uncertainty acquisition function evaluated on the dataset. For the uncertainty acquisition function, high acquisition values concentrate near the decision boundary. In panel (c), the purple stars denote points in the query set returned by LocalMax with a batch size of 10.

LocalMax also maintains good computational complexity. The computational complexity of Algorithm 2 is \(O(KN)\), where \(N\) is the number of nodes in the graph and \(K\) is the KNN parameter used when generating the graph. In practice, \(K\) is much smaller than \(N\), so the computational complexity is \(O(N)\). Let \(\mathcal{G}(N)\) denote the model fitting time for Laplace learning on a graph \(G\) with \(N\) nodes, and let \(\mathcal{H}\) denote the human labeling time for each node. In the case that the weight matrix \(W\) is sparse and the graph Laplacian matrix \(L\) is well-conditioned, we have \(\mathcal{G}(N)=O(N)\). The human labeling time, \(\mathcal{H}\), is typically much greater than \(O(N)\), since labeling SAR data can take significantly more time than is required by the rest of the active learning pipeline. Since human labeling can be processed in parallel, fewer batches are required to label the same amount of data. Consider an active learning process that samples \(m\) nodes in total to obtain ground-truth labels. The time complexities of sequential active learning and LocalMax batch active learning with batch size \(B\) are:
\[\text{Sequential Active Learning:}\quad m\times O(\mathcal{G}(N)+\mathcal{H})=O(m\mathcal{H}), \tag{7}\]
\[\text{LocalMax Batch Active Learning:}\quad m/B\times O(\mathcal{G}(N)+O(N)+\mathcal{H})=O(m\mathcal{H}/B). \tag{8}\]
Since \(\mathcal{G}(N)=O(N)\) and \(\mathcal{H}\gg O(N)\), LocalMax provides a \(B\)-fold speedup over sequential active learning, which is observed both theoretically and in practice, as shown in Table 2.
## 5 Experiments and Results
We now present the results of our active learning experiments on the MSTAR, OpenSARShip and FUSAR-Ship datasets. We first test the accuracy and efficiency of our methods as compared to other batch active learning methods and sequential active learning. We then test the accuracy of our methods with different embedding techniques on each dataset. Lastly, we look at the impact of data augmentation and the choice of neural network architecture in transfer learning. It is important to note that our methods use less data than state-of-the-art methods: LocalMax surpasses state-of-the-art accuracy on OpenSARShip and FUSAR-Ship with 35% and 68% of the data labeled, respectively, whereas the previous state of the art utilized 44% and 70% of the data, respectively [18].
In our transfer learning experiments, we use PyTorch CNNs pretrained on the ImageNet dataset to perform image classification on SAR datasets. Unless otherwise stated, we use AlexNet for OpenSARShip and ShuffleNet for FUSAR-Ship since preliminary experiments suggested that they would achieve the best performance among the neural networks tested. The transfer learning results for MSTAR were generally poor and we only present the results with transfer learning from ResNet. These choices are later examined in our comparison between different neural network architectures. Our experiments use both zero-shot and fine-tuned transfer learning. In all the embeddings mentioned, the data is first transformed using the following PyTorch data transformations in order: Resize, CenterCrop, RandomRotation, GaussianBlur, and ColorJitter. Also, LocalMax may not always find batches of the specified size (15), so it selects up to 15 points in a batch. As evidenced by the later efficiency improvements, we see that this occurs infrequently.
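The preprocessing chain named above could be expressed with torchvision as follows. The specific sizes, angles and jitter strengths shown are illustrative placeholders, not the exact values used in our experiments, and the trailing `ToTensor` is added here only to produce network-ready inputs.

```
# Illustrative torchvision version of the preprocessing pipeline named above.
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.RandomRotation(degrees=10),
    T.GaussianBlur(kernel_size=5),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),   # convert PIL image to a (3, 224, 224) tensor
])
```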
Table 1 contains all the parameters used in our experiments. The experiment time measures the time taken to complete the entire active learning process after the core-set selection, including batch selection and model fitting. The time calculation neglects the human labeling time, so the performance enhancements seen in practice will be much larger. Accuracy is measured as the percent of correct predictions by the model on the unlabeled dataset. The source code to reproduce all the results is available [28, 29, 30]. All experiments were performed in Google Colab with high RAM.
### Accuracy and Efficiency
As seen in Table 2, LocalMax generally outperforms the other batch active learning methods, attaining the highest accuracy with comparable efficiency. The time efficiency of LocalMax is slightly worse than random sampling, but random sampling is a very naive approach that achieves much lower accuracy than LocalMax on each dataset. The TopMax method has comparable time and accuracy to LocalMax, but in each test its accuracy is lower, likely because TopMax does not enforce separation of the data in a batch and may select points with a great deal of shared information. Acq_sample also performs noticeably worse than LocalMax.
We can see from Table 2 that LocalMax has comparable accuracy to sequential active learning, deviating by at most 0.64% on each dataset, while using an order of magnitude less computational time. The discrepancy between the theoretical efficiency analysis and the experimental time efficiency is due to the fact that human querying time is negligible in these experiments: the code immediately provides ground-truth labels when queried, so most of the measured time is model fitting time. In practice, where human query time is the bottleneck, the time differences will better match the theoretical predictions. These experiments attest to the efficiency and accuracy of LocalMax relative to other active learning methods.
More detailed plots of the accuracy can be seen in Figure 6. These plots show accuracy as a function of the size of the labeled set. In each case, we see a large jump in accuracy with few labels, which is characteristic of the active learning process. After this point, the accuracy appears to grow linearly with the amount of labeled data. Again, we notice that LocalMax and sequential active learning tend to perform best on both datasets. We can also see that random sampling fails to capture the importance of labeling certain data as its accuracy is much lower throughout the active learning process. This shows that LocalMax is improving the model in a more substantial way than by just increasing the size of the labeled dataset.
Following the comparison between different batch and sequential active learning methods, we analyze the impact of the acquisition function on LocalMax. The results of these experiments are shown in Figure 7, where we see that the uncertainty acquisition function performs best in all experiments, matching results from previous work on sequential active learning [1]. Figure 7 also shows that uncertainty-based LocalMax beats the state of the art in both transfer learning experiments on FUSAR-Ship and OpenSARShip.
Table 1: Parameters used in our experiments. All experiments use these parameters unless otherwise stated. In Table 1(a), “transfer learning data” refers to the amount of data used in fine-tuned transfer learning; this data is sampled uniformly at random and is then used as part of the core-set before performing DAC. In Table 1(b), “final labels” refers to the size of the labeled dataset as a percent of the total dataset size at the end of the active learning process. Also, “TL architecture” refers to the pretrained PyTorch neural network used for transfer learning on each dataset.
### Sensitivity Analysis
We now look at the sensitivity of our results to choices in the pipeline. We first look at the impacts of data augmentation and fine-tuning on the final results. Table 3 contains summary statistics for an experiment regarding the benefits of using data augmentation and fine-tuning in the transfer learning portion of the pipeline. The results of this experiment are not conclusive between OpenSARShip and FUSAR-Ship. For OpenSARShip, the variance is consistently low across embeddings and zero-shot transfer learning performed best. In contrast, the FUSAR-Ship results had notably higher accuracy and variance for fine-tuned transfer learning without data augmentation. In fact, adding data augmentation reduced the variance of fine-tuned transfer learning on both datasets. It is also important to note that each of these experiments showed better accuracy than the previous state of the art. Additionally, zero-shot transfer learning may be the most practical method, as it attains comparable performance to the other embeddings, does not
| | | _LocalMax_ | _Random_ | _TopMax_ | _Acq_sample_ | _Sequential_ |
|---|---|---|---|---|---|---|
| _Time Consumption_ | _MSTAR_ | 26.70s | **24.17s** | 25.96s | 26.71s | 338.11s |
| | _OpenSARShip_ | 5.33s | **4.68s** | 4.86s | 4.86s | 47.43s |
| | _FUSAR-Ship_ | 26.80s | **21.21s** | 26.08s | 21.55s | 322.99s |
| _Accuracy_ | _MSTAR_ | 99.69% | 92.66% | 99.69% | 96.27% | **99.93%** |
| | _OpenSARShip_ | 81.25% | 71.60% | 80.64% | 73.34% | **81.65%** |
| | _FUSAR-Ship_ | **89.83%** | 68.73% | 85.59% | 72.72% | 89.19% |

Table 2: Time consumption and accuracy comparison among different active learning methods. This experiment uses a CNNVAE embedding for MSTAR and zero-shot transfer learning for OpenSARShip and FUSAR-Ship. The parameters for all methods are listed in Table 1. LocalMax is the batch sampling method introduced in Section 4.2. Random is a batch active learning method which randomly chooses a new batch of the desired size. TopMax is a batch active learning method which chooses the \(n\) points with the highest acquisition values. The Acq_sample method assigns each point a probability of being picked proportional to its acquisition value and randomly samples \(n\) points as a batch. All batch active learning methods have comparable efficiency and are 9 to 15 times faster than the sequential case. LocalMax always achieved higher accuracy than the other batch active learning methods, and its accuracy is comparable to that of sequential active learning.
Figure 6: Plots of accuracy vs. the number of labeled points for five different active learning methods. Details about these active learning methods are given in the caption of Table 2. Panels (a) and (b) contain the results for the OpenSARShip and FUSAR-Ship datasets, respectively. In each panel, our LocalMax method (blue curve) and sequential active learning (purple curve) are almost identical and are the best performing methods. According to Table 2, LocalMax is much more efficient, proportional to the batch size.
Figure 7: Plots of accuracy vs. the number of labeled points for each embedding and dataset. In each panel, we show four curves generated by LocalMax using different active learning acquisition functions, UC, VOpt, MC and MCVOpt (Section 3.4), together with the state-of-the-art CNN-based method (denoted SoTA [18]). There are nine panels in total: the three rows from top to bottom correspond to the OpenSARShip, FUSAR-Ship and MSTAR datasets, while the three columns from left to right correspond to the CNNVAE embedding, zero-shot transfer learning embedding and fine-tuned transfer learning embedding. The UC acquisition function has the best performance among all the acquisition functions tested. The parameters for these experiments are the same as those specified in Table 1.
require labels in the new dataset, and has no training time.
Lastly, we study the impact of neural network architecture on model performance. Table 4 shows the results of one run of LocalMax for different choices of neural network architectures. The range of model performance across architectures is 8.10% for OpenSARShip and 4.82% for FUSAR-Ship. Although this variation is somewhat large, it is a common issue encountered in deep supervised learning. It is important to note that the architectures tested are standard neural networks and were not designed for SAR data; transfer learning from architectures designed for SAR data may exhibit less variance in the final accuracy [6, 7].
## 6 Discussion
Improving our understanding of SAR data and of the methods used for ATR is critical for designing new ATR algorithms. This paper seeks to understand the latter, but it also gives rise to some surprising results about the SAR datasets tested. In particular, it is surprising that neural networks trained on ImageNet perform well when applied to OpenSARShip and FUSAR-Ship. The images from ImageNet are standard pictures of objects, and the main problem is to classify them into many different categories which have nothing to do with ships. Nevertheless, the neural networks trained on ImageNet capture image features that end up being useful for distinguishing different types of ships. Another interesting point is that transfer learning did not perform well on MSTAR. Understanding these differences could be a valuable research direction.
| | OpenSARShip | FUSAR-Ship |
|---|---|---|
| AlexNet | **80.24%** | 85.14% |
| ResNet18 | 72.14% | 87.39% |
| ShuffleNet | 72.34% | 88.22% |
| DenseNet | 75.69% | **89.96%** |
| GoogLeNet | 73.28% | 86.74% |
| MobileNet V2 | 74.15% | 86.68% |
| ResNeXt | 76.29% | 88.29% |
| Wide ResNet | 73.14% | 85.52% |

Table 4: Accuracy values of one run of LocalMax for different choices of neural networks. Each experiment uses zero-shot transfer learning (no fine-tuning) and the parameter values specified in Table 1. The highest accuracy value in each column is bolded. As shown in the table, the range of model performance across architectures is 8.10% and 4.82% for OpenSARShip and FUSAR-Ship, respectively.
| | State of the Art | Zero-shot | Fine-tuned | Fine-tuned + Augmentation |
|---|---|---|---|---|
| OpenSARShip | 78.15% ± 0.57% | 81.02% ± 0.76% | 79.66% ± 0.90% | 80.00% ± 0.75% |
| FUSAR-Ship | 86.69% ± 0.47% | 88.57% ± 0.35% | 91.54% ± 3.07% | 89.06% ± 1.90% |

Table 3: Sample statistics of accuracy over 20 runs of the batch active learning pipeline with zero-shot transfer learning, fine-tuned transfer learning, and fine-tuned transfer learning with data augmentation (last column). The number in each cell represents the mean ± one standard deviation across the 20 runs. The zero-shot and fine-tuned embeddings are the same as those described in Section 3.1. The parameters in these experiments are the same as those specified in Table 1.
This is related to works on distribution shift in simulated-to-real (sim-to-real) scenarios [6, 7, 19]. The key challenge here is to generate many synthetic training examples which reliably capture the testing distribution (true SAR data). We suspect that our methods could be helpful when trained on synthetic data. In particular, we believe that our methods will perform better when the neural networks are specifically designed to handle SAR data and are trained on that data. These specially designed neural networks should better capture symmetries in the data and the image features will likely be more useful for ATR [18].
## 7 Conclusion
In this work, we developed a novel core-set construction method, DAC, and a batch active learning method, LocalMax. Combined with the embedding techniques (CNNVAE and transfer learning) and Laplace learning, these methods beat state-of-the-art SAR classification methods on OpenSARShip and FUSAR-Ship. Additionally, they are dramatically more efficient than sequential methods while maintaining sequential-level accuracy. These methods also attain higher accuracy than other common batch active learning methods on the datasets tested.
###### Acknowledgements.
This material is based upon work supported by the National Geospatial-Intelligence Agency under Award No. HM0476-21-1-0003. Approved for public release, NGA-U-2023-00750. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Geospatial-Intelligence Agency. Jeff Calder was also supported by NSF-CCF MoDL+ grant 2212318.
## References
* [1] Miller, K., Mauro, J., Setiadi, J., Baca, X., Shi, Z., Calder, J., and Bertozzi, A. L., \"Graph-based active learning for semi-supervised classification of sar data,\" in [_Algorithms for Synthetic Aperture Radar Imagery XXIX_], **12095**, 126-139, SPIE (2022).
* [2] AFRL and DARPA, \"Moving and stationary target acquisition and recognition (MSTAR) dataset.\" [https://www.sdms.afrl.afr.mil/index.php?collection=mstar](https://www.sdms.afrl.afr.mil/index.php?collection=mstar). Accessed: 2021-07-10.
* [3] Huang, L., Liu, B., Li, B., Guo, W., Yu, W., Zhang, Z., and Yu, W., \"Opensarship: A dataset dedicated to sentinel-1 ship interpretation,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**11**(1), 195-208 (2017).
* [4] Hou, X., Ao, W., Song, Q., Lai, J., Wang, H., and Xu, F., \"Fusar-ship: Building a high-resolution sar-ais matchup dataset of gaofen-3 for ship detection and recognition,\" _Science China Information Sciences_**63**(4), 1-19 (2020).
* [5] Bansal, M., Kumar, M., Sachdeva, M., and Mittal, A., \"Transfer learning for image classification using vgg19: Caltech-101 image data set,\" _Journal of Ambient Intelligence and Humanized Computing_, 1-12 (2021).
* [6] Arnold, J. M., Moore, L. J., and Zelnio, E. G., \"Blending synthetic and measured data using transfer learning for synthetic aperture radar (sar) target classification,\" in [_Algorithms for Synthetic Aperture Radar Imagery XXV_], **10647**, 48-57, SPIE (2018).
* [7] Inkawhich, N., Inkawhich, M. J., Davis, E. K., Majumder, U. K., Tripp, E., Capraro, C., and Chen, Y., \"Bridging a gap in sar-atr: Training on fully synthetic and testing on measured data,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_**14**, 2942-2955 (2021).
* [8] Zhu, X. J., \"Semi-supervised learning literature survey,\" (2005).
* [9] Settles, B., [_Active Learning_], vol. 6, Morgan & Claypool Publishers LLC (June 2012).
* [10] Dasgupta, S., \"Two faces of active learning,\" _Theoretical Computer Science_**412**, 1767-1781 (Apr. 2011).
* [11] Miller, K. and Bertozzi, A. L., \"Model-change active learning in graph-based semi-supervised learning,\" _arXiv preprint arXiv:2110.07739_ (2021).
* [12] Ji, M. and Han, J., \"A variance minimization criterion to active learning on graphs,\" in [_Artificial Intelligence and Statistics_], 556-564, PMLR (2012).
* [13] Ma, Y., Garnett, R., and Schneider, J., \"\\(\\sigma\\)-optimality for active learning on gaussian random fields,\" _Advances in Neural Information Processing Systems_**26** (2013).
* [14] Cai, W., Zhang, Y., and Zhou, J., \"Maximizing expected model change for active learning in regression,\" in [_2013 IEEE 13th international conference on data mining_], 51-60, IEEE (2013).
* [15] Gal, Y., Islam, R., and Ghahramani, Z., \"Deep bayesian active learning with image data,\" in [_International Conference on Machine Learning_], 1183-1192, PMLR (2017).
* [16] Kushnir, D. and Venturi, L., \"Diffusion-based deep active learning,\" _arXiv preprint arXiv:2003.10339_ (2020).
* [17] Zhdanov, F., \"Diverse mini-batch active learning,\" _arXiv preprint arXiv:1901.05954_ (2019).
* [18] Zhang, T., Zhang, X., Ke, X., Liu, C., Xu, X., Zhan, X., Wang, C., Ahmad, I., Zhou, Y., Pan, D., Li, J., Su, H., Shi, J., and Wei, S., \"Hog-shipclsnet: A novel deep learning network with hog feature fusion for sar ship classification,\" _IEEE Transactions on Geoscience and Remote Sensing_**60**, 1-22 (2022).
* [19] Lewis, B., Scarnati, T., Sudkamp, E., Nehrbass, J., Rosencrantz, S., and Zelnio, E., "A SAR dataset for ATR development: the synthetic and measured paired labeled experiment (SAMPLE)," in [_Algorithms for Synthetic Aperture Radar Imagery XXVI_], **10987**, 39-54, SPIE (2019).
* [20] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L., \"ImageNet Large Scale Visual Recognition Challenge,\" _International Journal of Computer Vision (IJCV)_**115**(3), 211-252 (2015).
* [21] Arya, S., Mount, D. M., Netanyahu, N. S., Silverman, R., and Wu, A. Y., \"An optimal algorithm for approximate nearest neighbor searching fixed dimensions,\" _Journal of the ACM (JACM)_**45**(6), 891-923 (1998).
* [22] Von Luxburg, U., \"A tutorial on spectral clustering,\" _Statistics and computing_**17**(4), 395-416 (2007).
* [23] Zhu, X., Ghahramani, Z., and Lafferty, J. D., \"Semi-supervised learning using Gaussian fields and harmonic functions,\" in [_Proceedings of the 20th International conference on Machine learning (ICML-03)_], 912-919 (2003).
* [24] Bertozzi, A. L., Luo, X., Stuart, A. M., and Zygalakis, K. C., \"Uncertainty quantification in the classification of high dimensional data,\" _SIAM/ASA J. Uncertainty Quantification_**6**(2), 568-595 (2018).
* [25] Miller, K., Li, H., and Bertozzi, A. L., \"Efficient graph-based active learning with probit likelihood via Gaussian approximations,\" in [_ICML Workshop on Experimental Design and Active Learning_], International Conference on Machine Learning (ICML) (July 2020). arXiv: 2007.11126.
* [26] Qiao, Y., Shi, C., Wang, C., Li, H., Haberland, M., Luo, X., Stuart, A. M., and Bertozzi, A. L., \"Uncertainty quantification for semi-supervised multi-class classification in image processing and ego-motion analysis of body-worn videos,\" _Electronic Imaging_**2019**(11), 264-1 (2019).
* [27] Dijkstra, E. W., "A note on two problems in connexion with graphs," _Numerische Mathematik_ **1**, 269-271 (1959).
* [28] Calder, J., \"Graphlearning python package,\" (Jan. 2022).
* [29] Calder, J., Cook, B., Thorpe, M., and Slepcev, D., \"Poisson learning: Graph based semi-supervised learning at very low label rates,\" in [_International Conference on Machine Learning_], 1306-1316, PMLR (2020).
* [30] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E., "Scikit-learn: Machine learning in Python," _Journal of Machine Learning Research_ **12**, 2825-2830 (2011).
MARVIN: An Open Machine Learning Corpus and Environment for Automated Machine Learning Primitive Annotation and Execution
Chris A. Mattmann
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109 USA
[email protected]
Sujen Shan
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109 USA
[email protected]
Brian Wilson
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109 USA
[email protected]
## 1 Introduction
The world of machine learning (ML) is exploding and new ways of applying ML for mundane, repetitive, and complex tasks is increasing: from using ML to reply to e-mail messages automatically [1] to self-driving cars and using ML for sustainable mobility [2]. Today, machine learning requires human capital to perform a variety of tasks including 1) composition and aggregation of machine learning \"primitives\" [3], e.g., kernels of code that _clean_, _decide_, _classify/label_, _cluster/organize_, and _rank_ the input dataset to produce an output; 2) selection of ML models that have already been generated such as from OpenML [4], ModelZoo [5], UC Irvine ML repository [6], Kaggle [7], etc. or the generation of new models, datasets, and problems; and 3) development of metrics to evaluate how well machine learning is working on a particular problem.
In 2016, the United States Defense Advanced Research Projects Agency (DARPA) incepted the Data Driven Discovery of Models (D\({}^{3}\)M) program to automate the three core tasks of machine learning identified in the preceding paragraph. Twenty-four performers on D\({}^{3}\)M are working together in three technical areas corresponding to each of the core ML tasks (TA1: ML primitives development; TA2: Automated Machine Learning using Datasets, Problems, and Challenges; and TA3: User/SME Metrics for evaluating automated machine learning). The Jet Propulsion Laboratory, California Institute of Technology (NASA-JPL) is part of the D\({}^{3}\)M government team, which consists of NASA-JPL, MIT Lincoln Laboratories, and the US National Institute of Standards and Technology (NIST). The government team responsibilities, shown in Fig. 1, involve the creation of a baseline open source model corpus (TA1); the curation of tens of thousands of ML datasets, problems, and baseline solutions (TA2) for proxy governmental and even direct governmental challenges across defense, civil agencies, and space; and a set of automated evaluation metrics to assess the performance of the automatic machine learning systems.

Figure 1: D\({}^{3}\)M Government Team responsibilities.
Our team at NASA-JPL has over the past two years designed and implemented a system that we call MARVIN (a reference to the character from the _Looney Tunes_ cartoon series [8]), whose main goals are to provide an environment for creating a performer baseline for machine learning primitive annotation, schema, and evaluation. MARVIN is a web-based tool built on ElasticSearch [9], FacetView [10], and Kibana [11]. ElasticSearch is used to capture JSON instances of metadata about machine learning primitives representing the world of ML, from SKLearn [12] to DL4J [13] to Keras [14] and so on. Primitive "annotations" capture their inputs, outputs, and characteristics for composing the primitives into an ML pipeline for use by the TA2 performers. MARVIN allows for search and exploration of these primitive instances, and we have written an application programming interface that leverages MARVIN for invocation of the associated primitives (e.g., pick your LogisticRegression that supports particular hyperparameters, programming languages, etc.). In addition, MARVIN also provides the ability to search ML datasets and challenge problems for over 400 challenges at present (and soon tens of thousands), ranging from introductory machine learning activities and data types, e.g., predicting Hall of Fame decisions based on baseball player statistics, all the way to complex multi-source classification and decision making, e.g., identifying objects in videos and images with high accuracy. All primitives, datasets, and challenge problems are searchable and indexed in ElasticSearch and browseable using FacetView and Kibana.
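To make the search concrete, the sketch below indexes a primitive annotation and queries it with the elasticsearch Python client (8.x-style calls). The index name and field names here are illustrative placeholders, not MARVIN's actual schema.

```
# Hedged sketch: indexing and searching primitive annotations in ElasticSearch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

annotation = {
    "name": "sklearn.linear_model.LogisticRegression",
    "primitive_family": "CLASSIFICATION",
    "algorithm_types": ["LOGISTIC_REGRESSION"],
    "language": "PYTHON",
}
es.index(index="primitives", document=annotation)

# Find Python classification primitives.
hits = es.search(index="primitives", query={
    "bool": {"must": [
        {"match": {"primitive_family": "CLASSIFICATION"}},
        {"match": {"language": "PYTHON"}},
    ]}
})
```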
MARVIN also provides the ability to generate a Dockerfile and allows for execution of ML primitives and pipelines based on the primitive type and associated preferred data and problem domain. We have classified domains into areas of video, text/NLP, classification, labeling, regression, etc. MARVIN also provides base Docker images containing the curated library of ML primitives and a base execution environment that D\({}^{3}\)M performers can extend and use to run generated pipelines.
## 2 Data model
The Primitive \"annotations\" follow a [15] that allow TA2 systems to infer metadata about the primitive, like its \"algorithm type\", \"primitive family\", \"hyperparameters\", etc. (JPL developed an initial schema but now it is jointly developed by the entire D\\({}^{3}\\)M community - Performers and the government teams.)
**Algorithm Types:** The algorithm type field intends to capture the underlying implementation of the primitive. The field is currently an enumeration of values from a controlled vocabulary including, but not limited to, algorithms such as ADABOOST [16], BAYESIAN LINEAR REGRESSION [17], DECISION TREE [18], etc.
**Primitive Family:** This field intends to capture the high level purpose of the primitive. It is an enumeration of controlled values e.g., \"CLASSIFICATION\", \"FEATURE_SELECTION\", \"DATA_TRANSFORMATION\", \"TIME-SERIES_FORECASTING\", etc.
**Hyperparameter:** The hyperparameter field captures all the parameters that a TA2 system could pass to a TA1 primitive to control its behavior. These could include tunable hyperparameters which affect the model performance, resource use hyperparameters that control the resource usage of that primitive, and metafeature parameters that control which meta-features are computed by the primitive.
Figure 2: A screenshot of the MARVIN interface showing primitives (on the right) and their metadata, along with search criteria on the left, e.g., NO_JAGGED_VALUES, etc.
Apart from the above, there are more key fields that help a TA2 system determine whether to use the given primitive in a pipeline for solving a problem. For example, the **preconditions** field indicates which preconditions must evaluate to true before using the primitive. For instance, the "NO_MISSING_VALUES" precondition informs a TA2 system that the primitive cannot handle missing values in the data. Similarly, there could be other preconditions like "NO_CONTINUOUS_VALUES", "NO_JAGGED_VALUES", etc., as shown in Fig. 2.
The **effects** field in the schema informs a TA2 system about the set of post-conditions that hold for data processed by this primitive. Some of the effects include "NO_MISSING_VALUES", which indicates that the primitive removes missing values, and "NO_CATEGORICAL_VALUES", which indicates that the primitive removes categorical columns.
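An illustrative (non-normative) annotation in this spirit, together with the kind of precondition check a TA2 system might perform, is sketched below. The field names loosely follow the schema [15], but the exact keys and values are assumptions.

```
# Illustrative primitive annotation and a precondition check a TA2 might run.
annotation = {
    "primitive_family": "CLASSIFICATION",
    "algorithm_types": ["DECISION_TREE"],
    "hyperparams": {"max_depth": {"type": "int", "default": None}},
    "preconditions": ["NO_MISSING_VALUES", "NO_CATEGORICAL_VALUES"],
    "effects": [],
}

def usable(annotation, dataset_properties):
    """A primitive is usable only if the data satisfies all its preconditions."""
    return all(p in dataset_properties for p in annotation["preconditions"])

# The data has no missing values but still has categorical columns -> False.
print(usable(annotation, {"NO_MISSING_VALUES"}))
```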
## 3 Execution environment
Figure 3: Our MARVIN execution environment architecture running on top of Kubernetes (K8s).

MARVIN leverages Docker containers and Kubernetes (K8s) clusters [19] to provide an environment for D\({}^{3}\)M performer testing (called the "Play Space" in Fig. 1, upper left corner). Depending on the machine learning problem type, e.g., Image/Video, NLP/Text, Classification, performers can use MARVIN to create a Dockerfile containing a set of primitives for a particular problem set and then create a running Docker image on our D\({}^{3}\)M Kubernetes cluster. As shown in Fig. 3, starting from the left side of the figure, TA1 performers can publish a Docker image of their TA1 primitive if it requires special services, such as graph databases or particular Python libraries, which can then be consumed and included in other performer containers. In addition, we offer base Master containers for our TA2 systems (top-middle of Fig. 3), e.g., NLP, Image/Video base, or a combination of all required libraries in the program. Then, using MARVIN's API, TA2 performers can dynamically select and execute their primitives. Data (shown in the middle-left of Fig. 3) is made available via shared K8s volumes to all containers on the cluster.
## 4 Conclusion
We presented MARVIN, an automated system for discovering machine learning primitives and executing them in an automated fashion. MARVIN is a core tool in the DARPA D\({}^{3}\)M program. At present, MARVIN provides a web-based interface to browse hundreds of ML primitives and over 400 ML datasets and challenge problems, and over the next two years the tool will be expanded to support thousands of ML problems and primitives for the performers in the program.
## Acknowledgement
This effort was supported in part by JPL, managed by the California Institute of Technology on behalf of NASA, and in part by the DARPA Memex/XDATA/D\({}^{3}\)M/ASED programs; NSF award numbers ICER-1639753, PLR-1348450 and PLR-144562 funded a portion of the work. The authors would like to thank Maziyar Boustani, Kyle Hundman, Asitang Mishra, Tom Barber and Giuseppe Totaro at JPL, who materially contributed to the design and construction of MARVIN. We also thank our collaborators at MIT Lincoln Labs and NIST. The library of primitives is now being further populated by the entire D\({}^{3}\)M community. We also credit Mr. Wade Shen for his support of our work.
## References
* [1] A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukacs, M. Ganea, P. Young _et al._, \"Smart reply: Automated response suggestion for email,\" in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_. ACM, 2016, pp. 955-964.
* [2] L. D. Burns, \"Sustainable mobility: a vision of our transport future,\" _Nature_, vol. 497, no. 7448, p. 181, 2013.
* [3] T. M. Mitchell, _The discipline of machine learning_. Carnegie Mellon University, School of Computer Science, Machine Learning Department, 2006, vol. 9.
* [4] J. Vanschoren, J. N. Van Rijn, B. Bischl, and L. Torgo, \"Openml: networked science in machine learning,\" _ACM SIGKDD Explorations Newsletter_, vol. 15, no. 2, pp. 49-60, 2014.
* [5] Y. Jia and E. Shelhamer, \"Caffe model zoo,\" 2015.
* [6] A. Frank and A. Asuncion, "UCI machine learning repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California," _School of Information and Computer Science_, vol. 213, 2010.
* [7] Wikipedia, \"Kaggle -- Wikipedia, the free encyclopedia,\" [http://en.wikipedia.org/w/index.php?title=Kaggle&oldid=838984387](http://en.wikipedia.org/w/index.php?title=Kaggle&oldid=838984387), 2018, [Online; accessed 24-May-2018].
* [8] J. Beck, _Looney Tunes: The Ultimate Visual Guide_. DK Publishing (Dorling Kindersley), 2003.
* [9] C. Gormley and Z. Tong, _Elasticsearch: The Definitive Guide: A Distributed Real-Time Search and Analytics Engine_. O'Reilly Media, Inc., 2015.
* [10] Open Knowledge Foundation, _FacetView_, Open Knowledge Foundation, 2018. [Online]. Available: [https://github.com/okfn/facetview](https://github.com/okfn/facetview)
* [11] Y. Gupta, _Kibana Essentials_. Packt Publishing Ltd, 2015.
* [12] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg _et al._, \"Scikit-learn: Machine learning in python,\" _Journal of machine learning research_, vol. 12, no. Oct, pp. 2825-2830, 2011.
* [13] S. Bahrampour, N. Ramakrishnan, L. Schott, and M. Shah, \"Comparative study of caffe, neon, theano, and torch for deep learning,\" 2016.
* [14] S. Raschka, _Python machine learning._ Packt Publishing Ltd, 2015.
* [15] D3M Primitive Annotation Schema, 2017. [Online]. Available: [https://gitlab.com/datadrivendiscovery/metadata/blob/master/d3m_metadata/schemas/v0/definitions.json](https://gitlab.com/datadrivendiscovery/metadata/blob/master/d3m_metadata/schemas/v0/definitions.json)
* [16] Y. Freund and R. E. Schapire, \"A decision-theoretic generalization of on-line learning and an application to boosting,\" 1997.
* [17] T. P. Minka, \"Bayesian linear regression,\" 1998.
* [18] J. R. Quinlan, "Induction of decision trees," _Machine Learning_, vol. 1, no. 1, pp. 81-106, 1986.
* [19] D. Bernstein, "Containers and cloud: From lxc to docker to kubernetes," _IEEE Cloud Computing_, vol. 1, no. 3, pp. 81-84, 2014.
A web-tool for calculating the economic performance of precision agriculture technology
M. Medici\({}^{1}\), S.M. Pedersen\({}^{2}\), M. Canavari\({}^{1\star}\), T. Anken\({}^{3}\), P. Stametelopoulos\({}^{4}\), Z. Tsiropoulos\({}^{4}\), A. Zotos\({}^{4}\), G. Tohidloo\({}^{3}\)
**Keywords**: _precision agriculture (PA), technology, adoption, cost-benefit analysis, economic performance, financial analysis, sustainability, web application_
## 1 Introduction
This study describes the development of an on-line web-tool, intended as a prototype application, for calculating the economic performance of precision agriculture (PA) technology within the European research project PAMCoBA (Precision Agriculture Methodologies for Cost-Benefit Analysis). Farmers, as well as any agricultural stakeholder, can freely access the proposed methodology through a guided process that allows them to evaluate and compare the profitability associated with PA technologies, assessing the financial viability of the potential adoption of various PA systems.
Precision agriculture (PA) is a farm management system involving crop management based on field variability and site-specific conditions (Seelan et al., 2003). It includes a number of technologies, ranging from sensing systems that map crop and soil variability to guidance systems and variable-rate (VR) systems that dose agricultural inputs onto the field. These technologies have a wide potential to improve agricultural performance through more efficient crop nutrient use, increased crop quality and quantity, and reduced field overlaps. For these reasons, PA is generally associated with potential economic and environmental benefits (Li et al., 2018).
Although the implementation of new technologies is essential for farmers to remain competitive in their business, PA is a complex management system requiring a shift from empirical decision-making towards data-driven decision processes, with benefits that are hard to quantify in advance. As a result, the adoption of PA technology among farmers remains low in Europe (STOA, 2016). In recent years, many authors have stressed the uncertainty caused by the unclear perception of benefits when using PA technologies, as well as of the potential cost savings and reduction in environmental impacts resulting from their adoption (Eastwood et al., 2016; Kutter et al., 2011; McBratney et al., 2005; Medici et al., 2019).
It appears that the large knowledge gap between farmers and technology developers results in difficulties in explaining the economic and environmental benefits characterizing PA. In this context, the successful adoption of PA technologies requires effort and confidence from farmers, who need to continuously integrate new knowledge (Oreszczyn et al., 2010); this is likely to help mitigate the uncertainty about the potential benefits arising from PA. The development of integrated performance assessments, which could also influence the rate of dissemination among farmers, may constitute a possible solution to the issues described above (Lamb et al., 2008). For these reasons, appropriate economic criteria for technology adoption require urgent and ongoing attention in order to develop PA to its full potential.
The web-tool was designed within the context of the ICT-Agri project '_PAMCoBA_' (_Precision Agriculture Methodologies for Cost Benefit Analysis_) to provide guidelines for farmers over their decisions to invest in selected precision agricultural technologies. It is freely available at: [http://tool.pamcoba.eu/](http://tool.pamcoba.eu/). The tool explores data regarding existing PA technologies, crops and agricultural operations, guiding farmers in the selection of the most appropriate technologies for the farm-specific context. It is particularly suitable for arable crops, with wheat, maize, sugar beet, canola, and potato modelled, but it also offers the possibility of modelling any other custom
crop, by inserting values defining yield, price, and agricultural treatments. As a final result, the tool can evaluate the profitability of the selected technologies, supporting farm decision-making processes.
## 2 Web-tool structure and database
The web-tool was designed according to the latest trends in software development for the World Wide Web. The scripting language used was PHP (Hypertext Pre-processor); specifically, the Laravel framework, a widely used tool for developing fast and stable web applications, was adopted. The database is hosted on a MySQL server instance; it was designed according to best practices to maintain data integrity and security, and empowered with custom scripts and functions to enable automated backup mechanisms.
Users interact with the web-tool and perform their analysis through a user interface developed using the VueJS framework, one of the most popular JavaScript frameworks for creating fast and reliable applications.
The design of the web-tool resides in the interplay of global data structured in a relational database at the administration (server) level, in which the connections between crops, agricultural activities (hereafter named 'operations') and technologies are managed. In this context, information regarding the costs related to operations and the underlying crops (e.g. seed, fertilizer and pesticide prices, number of treatments per year, fuel price, fuel consumption per hectare, labour cost and labour hours required per hectare) and PA technology costs are defined. The final user can adjust this auto-filled data and, in any case, is required to define entry values regarding farm-level information (e.g., location, farm size, yields, prices and selected crops), as explained in Section 3.
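As an illustration of these relations, the sketch below mirrors the crop-operation-technology structure in plain Python (the production tool implements it as MySQL tables accessed from PHP/Laravel; the entity fields and example values here are illustrative assumptions, not the tool's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Crop:
    # Farm-level entries supplied by the user
    name: str
    surface_ha: float       # total crop surface (ha)
    yield_t_ha: float       # current yield (t/ha)
    price_eur_t: float      # crop price (EUR/t)

@dataclass
class Operation:
    # Agricultural activity with its admin-level default costs
    name: str                    # e.g. "fertilization", "spraying"
    input_cost_eur_ha: float     # seed/fertilizer/pesticide cost (EUR/ha)
    fuel_l_ha: float             # fuel consumption (l/ha)
    labour_h_ha: float           # labour hours required (h/ha)

@dataclass
class Technology:
    # PA technology linked to the operations it can perform
    name: str                    # e.g. "VR fertilization system"
    investment_eur: float        # investment cost for the reference farm size
    operations: list = field(default_factory=list)

# Example relations: a wheat farm evaluating VR fertilization
wheat = Crop("wheat", surface_ha=80.0, yield_t_ha=7.5, price_eur_t=180.0)
fertilization = Operation("fertilization", input_cost_eur_ha=150.0,
                          fuel_l_ha=4.0, labour_h_ha=0.4)
vr_fertilization = Technology("VR fertilization system", investment_eur=15000.0,
                              operations=[fertilization.name])
print(vr_fertilization)
```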
In the web-tool the following operations have been outlined: seeding, fertilization (i.e. application of mineral nitrogen), spraying (i.e. application of pesticides - fungicides, herbicides, insecticides, and growth regulators), mechanical weeding, tillage, liming and manure application.
Consequently, a broad range of PA technologies able to perform these activities was integrated: auto-steer, section control, VR fertilization system, VR seeding system, VR spraying system, VR liming system, VR manure application system, and inter-row hoeing system (with camera or GPS).
In addition, several support technologies were considered that can be coupled with the aforementioned technologies in order to increase the expected performance of the PA systems in the field; the support technologies include geolocation systems (normal GPS antenna, RTK-GPS antenna, or satellite), controlled-traffic farming (CTF), surveying Unmanned Aerial Vehicles (UAVs), N-sensors, yield mapping systems, and devices for the measurement of soil electrical conductivity (EC) and soil pH.
The web-tool allows users to retrieve information about each possible technology combination, useful for assessing the related economic performance.
### Benefits of PA technologies
Modelled benefits for PA technologies are agricultural input savings, yield increase, fuel saving and labour saving. Most reference values have been quantified according to evidence from the scientific and technical literature. Missing values have been assessed by the authors based on their experience and common sense, staying on the safe side with conservative estimates. In this work, we mostly refer to literature reviews and analyses recently performed by the authors (Medici et al., 2019; Pedersen et al., 2019). As numerous technologies are included in the tool and this study focuses on the methodological approach of the web-tool, only the most relevant benefits regarding PA technologies are discussed in the following. Table 1 reports the suggested "default" benefits for each technology system modelled in the web-tool, which in most cases are based on conservative estimations compared to those available in the literature. Nonetheless, the original idea at the basis of the tool is that the user can take the proposed library as a reference and adjust or insert new values based on his knowledge and experience.
_Auto steer & CTF._ The use of auto-steer can lead to overlap reductions. For instance, Batte & Ehsani (2006) found a 3-7% overlap reduction for auto-guidance systems, and observed that the saving of agricultural inputs increases proportionally with the un-overlapped area. Another work indicates that the reduced overlap with auto-steering can reach about 5-10% (Petersen et al., 2006). Conversely, one study has shown no relevant effect in terms of input saving (Holpp et al., 2013).
_Section control._ Automatic section control on boom sprayers can reduce overlaps when turning on headlands, and this feature is particularly significant when fields are irregularly shaped. There is a potential for a reduced overlap of up to 5% for section control when using a sprayer, but it might be smaller in the case of dry granular fertilizer products. The real potential is case-specific, as field shape and headland size can vary considerably (Pedersen and Pedersen, 2018).
_VR applications._ Regarding VR fertilization, it is documented that lower rates of nitrogen fertilizer, carefully applied, can reduce fertilizer use without affecting yield and crop quality in maize (Schmidt et al., 2002). In cereal crops, Bourgain & Llorens (2009) reported an 11.1% saving of nitrogen, while Casa et al. (2011) reported a 22% reduction in N use in comparison to uniform applications, with no relevant effect on yields. A number of approaches have been developed to produce prescription maps for site-specific applications, generated via satellite, survey UAV or through a Yara N-sensor mounted on a tractor.
The adoption of VR spraying is associated with agronomic benefits too. Dammer & Adamek (2012) estimated a 13.4% insecticide saving in cereals; regarding fungicide application, Pedersen and Pedersen (2018) documented a 1% increase in crop yield.
Economic benefits from VR lime application strongly depend on the crop and soil (e.g. soil acidity and homogeneity) and on optimal lime rates (Pedersen, 2003). A couple of studies reported yield benefits from VR lime application: Weisz et al. (2003) explored the long-term impact of site-specific lime application on a soybean field, reporting significant increases in soil pH and higher yields (up to 14%), while Bongiovanni & Lowenberg-Deboer (2000) observed yields increasing between 1.8% and 4.8% in maize and soybean.
To date, there is little evidence regarding the economic impact of VR manure application. Lack of accuracy, basically consisting of nutrient overlaps, and limited nutrient content information are current issues; the nutrient content can be monitored via near-infrared spectroscopy (NIR), but this technology is still in its infancy. In the web-tool, reference values for input reduction of up to 4% were considered, depending on the type of technology system used.
_Weeding._ Since weed control can be performed mechanically or with herbicide application, it would be hard to model any input reduction; in this work we refer to a 50% labour saving as the standard benefit. This parameter is defined relative to a hoe steered by an additional operator.
In the web-tool, the most relevant precision agriculture technologies for European farmers are considered, including monitoring technologies, guidance technologies and VR technologies. Monitoring technology providers often offer services in addition to the physical equipment, such as software applications able to convert sensing data into readable parameters. For this reason, additional services like zone sampling and prescription maps are treated as included in the monitoring technology already defined in the tool. In any case, the user can model this aspect more accurately by choosing an option and then inserting the related costs and benefits.
Table 1: Suggested "default" benefits (input saving, yield increase, fuel saving, labour saving) for each technology option modelled in the web-tool, pairing a main technology (auto-steer & CTF, section control, VR applications) with a support technology (normal GPS, RTK-GPS, satellite, survey UAV, yield mapping).
## 3 User interface and application features
The user is guided in the PA technology assessment through an easy three-step procedure: input data specification (_MY FARM_), definition of the list of operations and suitable PA technologies (_OPTIONS/TECHNOLOGY_), and potential benefit evaluation with final calculation (_ECONOMIC BENEFITS_).
### Input data, operations and technologies
In the first step, in the web-tool homepage, the language and the login, as well as the default values for different countries, can be chosen, as shown in Figure 1. Default values are differentiated by geographical area (Northern Europe, Central Europe, South & Southwestern Europe, Southeast Europe) and by crop (wheat, maize, sugar beet, canola, potato). The user can fill in data regarding the current yield (t/ha) for each selected crop, the total crop surface (ha) and the price (€/t).
The user can select one or more options from the second screen (Figure 2); suitable PA technologies can then be selected according to the chosen operations, including controlled-traffic farming (CTF), if applicable.
### Evaluation of economic benefits
In the next screen, the proposed costs and benefits for each crop, option and technology system can be checked and changed, if needed (Figure 3). The user can also adjust the applied discount rate (interest rate) associated with the modelled investment in PA technologies, as well as the investment costs associated with each suitable PA technology made available for the chosen crop and operations.
Figure 1: PAMCoBA web-tool, MY FARM interface
Figure 2: PAMCoBA web-tool, OPTIONS/TECHNOLOGY interface

The financial analysis embedded in the web-tool is based on the estimation of the differential cash flows from the selected PA technologies, with a description of the initial investment costs \(I\) (€), input costs \(C\) (€/year) and expected benefits in the form of potential revenues \(R\) (€/year).
The proposed investment costs are thought to be appropriate for a farm of up to 50 hectares. Thus, when the crop surface is larger, the investment cost should be adequately re-scaled. It was decided to apply the "0.6 rule" to account for economies of scale (Tribe & Alpine, 1986), which originates from the relationship between the increase in equipment cost (i.e. investment) and the increase in capacity (i.e. crop surface). For more details, please refer to Pedersen et al. (2019).
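For illustration, a minimal sketch of this re-scaling, assuming the 50 ha reference size stated above (the function name and the example figures are our own):

```python
def scaled_investment(base_cost_eur: float, surface_ha: float,
                      reference_ha: float = 50.0, exponent: float = 0.6) -> float:
    """Re-scale an investment cost quoted for a reference farm size using the
    "0.6 rule": cost2 / cost1 = (capacity2 / capacity1) ** 0.6."""
    if surface_ha <= reference_ha:
        return base_cost_eur  # quoted costs assumed appropriate up to 50 ha
    return base_cost_eur * (surface_ha / reference_ha) ** exponent

# A 15,000 EUR system on a 200 ha farm: 15,000 * (200/50)**0.6, about 34,460 EUR
print(round(scaled_investment(15000.0, 200.0)))
```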
The benefit calculation related to each chosen operation was modelled adopting the Net Present Value (NPV) approach, with cash flows modelled throughout a set period (in this case 8 years) as described by Equation 1:
\\[NPV=-I^{\\prime}+\\sum_{t=1}^{8}\\frac{R_{t}-C_{t}}{(1+r)^{t}}\\]
Equation 1In which \\(I^{\\prime}\\) (\\(\\in\\)) is the total investment cost associated to the chosen operation scaled with the \"0.6 rule\", \\(R_{t}\\) (\\(\\in\\)/year) are operational revenues equal to potential yield increase multiplied for current yield and crop price, \\(C_{t}\\) (\\(\\in\\)/year) is the sum of costs at time \\(t\\) such as input, fuel and labour reduction, \\(r\\) is the discount rate. The internal rate of return (IRR) and the benefit-to-cost ratio (BCR) are also calculated for each selected system. By doing so, the tool allows to integrate several technologies at the same time and thereby integrate the synergies from
Figure 3: PAMCoBA web-tool, ECONOMIC BENEFITS interface
applying, for instance, RTK-GPS systems for different operations and fields, as would be the case on real farms.
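A minimal numerical sketch of Equation 1 and the two accompanying indicators is given below; the 8-year horizon is from the text, while the bisection IRR solver and the particular BCR definition are illustrative assumptions rather than the tool's PHP implementation:

```python
import numpy as np

def npv(investment: float, revenues, costs, rate: float) -> float:
    """Equation 1: NPV = -I' + sum_t (R_t - C_t) / (1 + r)**t, t = 1..8."""
    t = np.arange(1, len(revenues) + 1)
    return -investment + np.sum((np.asarray(revenues) - np.asarray(costs))
                                / (1.0 + rate) ** t)

def irr(investment: float, revenues, costs,
        lo: float = -0.9, hi: float = 10.0) -> float:
    """Discount rate at which the NPV is zero, found by bisection
    (assumes the NPV changes sign between lo and hi)."""
    f = lambda r: npv(investment, revenues, costs, r)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def bcr(investment: float, revenues, costs, rate: float) -> float:
    """Benefit-to-cost ratio: discounted revenues over investment
    plus discounted running costs (one common definition)."""
    t = np.arange(1, len(revenues) + 1)
    pv_revenue = np.sum(np.asarray(revenues) / (1.0 + rate) ** t)
    pv_cost = investment + np.sum(np.asarray(costs) / (1.0 + rate) ** t)
    return pv_revenue / pv_cost

# Constant yearly flows over 8 years: 6,000 EUR extra revenue, 1,500 EUR costs
R, C = [6000.0] * 8, [1500.0] * 8
print(npv(20000.0, R, C, 0.04), irr(20000.0, R, C), bcr(20000.0, R, C, 0.04))
```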
Figure 4 shows the web-tool flow chart for the benefit calculation. Finally, the web-tool allows the user to download a PDF report including the detailed economic performance of the PA technology systems suitable for the farm, as well as information about the quantity of inputs saved.
### Additional features
The web-tool provides initial parameter values recommended for those unfamiliar with the production impacts of precision technologies. It allows the user to adjust all the input data needed to calculate the economic benefit, as well as the possibility of building a custom crop with user-specified values. As the potential benefits and savings from site-specific application may vary depending on the specific location, users can freely save and compare multiple runs, adjusting operations and parameters in order to have a complete overview of the potential PA technology performance.
## 4 Conclusions
The development of the PAMCoBA web-tool focused on managing data for the benefit of farmers. It was designed to provide guidelines for farmers over their decisions to invest in selected PA technologies, by increasing the level of knowledge about the characteristics of novel technologies and the related benefits. We believe that carefully evaluating investments in agriculture is good management practice, and any farmer wishing to manage investments in the best way possible should take the profitability of technology into account.
Figure 4: Flowchart for benefit calculation of PAMCoBA web-tool
To our knowledge, this tool is unique in helping stakeholders to understand the economics of PA technologies in an easy and friendly way. Its modifiability and adaptability, as well as the capability of allowing users to enter their specific data and to re-run the analysis by adjusting the system's values, can help users understand how each technology can positively affect their farms. Moreover, the tool is novel in that it integrates the application of multiple PA technologies in a whole-farm management perspective. A powerful database was developed to achieve a high level of user customization, allowing the modification of the tool and its parameters through its administration panel.
The information about the quantity of inputs saved, reported in the downloadable report, may also be used outside this context, for instance to model environmental assessment scenarios. The tool can also be potentially useful to policy-makers if many farmers use it and provide (anonymous) data suitable for policy analysis.
Appropriate criteria for the economic assessment of novel technology adoption are recognized as one of the most significant issues requiring urgent and ongoing attention in order to develop PA to its full potential. We hope that this web-tool can help to improve the adoption rate of PA technologies in the future.
## Acknowledgement
This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 618123 ERA-NET - ICT-Agri in the project PAMCoBA.
## References
* Batte & Ehsani (2006) Batte, M. T., & Ehsani, M. R. (2006). The economics of precision guidance with auto-boom control for farmer-owned agricultural sprayers. _Computers and Electronics in Agriculture_, _53_(1), 28-44. [https://doi.org/10.1016/j.compag.2006.03.004](https://doi.org/10.1016/j.compag.2006.03.004)
* Bongiovanni & Lowenberg-Deboer (2000) Bongiovanni, R., & Lowenberg-Deboer, J. (2000). Economics of variable rate lime in Indiana. _Precision Agriculture_, _2_(1), 55-70. [https://doi.org/10.1023/A:1009936600784](https://doi.org/10.1023/A:1009936600784)
* Bourgain & Llorens (2009) Bourgain, O., & Llorens, J. M. (2009). Methodology to estimate levels of profitability of precision agriculture: simulation for crop systems in Haute-Normandie. _EFITA Conference '09. Proceedings of the 7th EFITA Conference, Wageningen, The Netherlands, 6-8 July 2009_.
* Casa et al. (2011) Casa, R., Cavalieri, A., & Cascio, B. Lo. (2011). Nitrogen fertilisation management in precision agriculture: a preliminary application example on maize. _Italian Journal of Agronomy_. [http://agris.fao.org/agris-search/search.do?recordID=DJ2012061307](http://agris.fao.org/agris-search/search.do?recordID=DJ2012061307)
* Dammer & Adamek (2012) Dammer, K.-H., & Adamek, R. (2012). Sensor-Based Insecticide Spraying to Control Cereal Aphids and Preserve Lady Beetles. _Agronomy Journal_, _104_(6), 1694-1701. [https://doi.org/10.2134/agronj2012.0021](https://doi.org/10.2134/agronj2012.0021)
* Eastwood et al. (2016) Eastwood, C. R., Jago, J. G., Edwards, J. P., & Burke, J. K. (2016). Getting the most out of advanced farm management technologies: roles of technology suppliers and dairy industry organisations in supporting precision dairy farmers. _Animal Production Science_, _56_(10), 1752. [https://doi.org/10.1071/AN141015](https://doi.org/10.1071/AN141015)
* Holpp et al. (2013) Holpp, M., Kroulik, M., Kriz, Z., Anken, T., Sauter, M., & Hensel, O. (2013). Large-scale field evaluation of driving performance and ergonomic effects of satellite-based guidance systems. _Biosystems Engineering_, _116_(2), 190-197. [https://doi.org/10.1016/j.biosystemseng.2013.07.018](https://doi.org/10.1016/j.biosystemseng.2013.07.018)
* Kutter et al. (2011) Kutter, T., Tiemann, S., Siebert, R., & Fountas, S. (2011). The role of communication and co-operation in the adoption of precision farming. _Precision Agriculture_, _12_(1), 2-17. [https://doi.org/10.1007/s11119-009-9150-0](https://doi.org/10.1007/s11119-009-9150-0)
* Lamb et al. (2008) Lamb, D. W., Frazier, P., & Adams, P. (2008). Improving pathways to adoption: Putting the right P's in precision agriculture. _Computers and Electronics in Agriculture_, _61_(1), 4-9. [https://doi.org/10.1016/j.compag.2007.04.009](https://doi.org/10.1016/j.compag.2007.04.009)
* Li et al. (2018) Li, C., Wang, Y., Lu, C., & Huai, H. (2018). Effects of precision seeding and laser land leveling on winter wheat yield and residual soil nitrogen. _International Journal of Agriculture and Biology_. [https://doi.org/10.17957/IJAB/15.0820](https://doi.org/10.17957/IJAB/15.0820)
* McBratney et al. (2005) McBratney, A., Whelan, B., Ancev, T., & Bouma, J. (2005). Future directions of precision agriculture. _Precision Agriculture_, _6_(1), 7-23. [https://doi.org/10.1007/s11119-005-0681-8](https://doi.org/10.1007/s11119-005-0681-8)
* Medici et al. (2019) Medici, M., Pedersen, S. M., Carli, G., & Tagliaventi, M. R. (2019). Environmental Benefits of Precision Agriculture Adoption. _Economia Agro-Alimentare / Food Economy_, _21_(3), 637-656. [https://doi.org/10.3280/ECAG2019-003004](https://doi.org/10.3280/ECAG2019-003004)
* Oreszczyn et al. (2010) Oreszczyn, S., Lane, A., & Carr, S. (2010). The role of networks of practice and webs of influencers on farmers' engagement with and learning about agricultural innovations. _Journal of Rural Studies_, _26_(4), 404-417. [https://doi.org/10.1016/j.jrurstud.2010.03.003](https://doi.org/10.1016/j.jrurstud.2010.03.003)
* Pedersen et al. (2019) Pedersen, S. M., et al. (2019). In _Precision Agriculture '19: Papers Presented at the 12th European Conference on Precision Agriculture, ECPA 2019_, 833-839. [https://doi.org/10.3920/978-90-8686-888-9_103](https://doi.org/10.3920/978-90-8686-888-9_103)
* Pedersen (2003) Pedersen, S. M. (2003). "Precision farming - Technology assessment of site-specific input application in cereals", PhD Thesis, Department of Technology and Social Sciences, Technical University of Denmark.
* Pedersen and Pedersen (2018) Pedersen, M. F., & Pedersen, S. M. (2018). "Erhvervsøkonomiske gevinster ved anvendelse af præcisionslandbrug" (in English: Production economic net benefits from precision farming), 49 pp., IFRO Udredning, Department of Food and Resource Economics, University of Copenhagen, No. 2018/02.
* Petersen et al. (2006) Petersen, H. H., Hansen, J. P., & Ollgaard, T. (2006). Økonomiske og miljømæssige fordele ved autostyring, NordVest Agro (in English: Economic and environmental benefits from auto-steering). Available at: [https://www.landbrugsinfo.dk/planteavl/praecisionsjordbrug-og-gis/sider/1537_bilag3.pdf](https://www.landbrugsinfo.dk/planteavl/praecisionsjordbrug-og-gis/sider/1537_bilag3.pdf)
* Schmidt et al. (2002) Schmidt, J. P., Deloia, A. J., Ferguson, R. B., Taylor, R. K., Young, R. K., & Havlin, J. L. (2002). Corn Yield Response to Nitrogen at Multiple In-Field Locations. _Agronomy Journal_, _94_(4), 798-806.
[https://doi.org/10.2134/agronj2002.7980](https://doi.org/10.2134/agronj2002.7980)
* Seelan et al. (2003) Seelan, S. K., Laguette, S., Casady, G. M., & Seielstad, G. A. (2003). Remote sensing applications for precision agriculture: A learning community approach. _Remote Sensing of Environment_, _88_(1-2), 157-169. [https://doi.org/10.1016/j.rse.2003.04.007](https://doi.org/10.1016/j.rse.2003.04.007)
* STOA (2016) STOA. (2016). _Precision Agriculture and the Future of Farming in Europe_. https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_STU(2016)581892
* Tribe and Alpine (1986) Tribe, M. A., & Alpine, R. L. W. (1986). Scale economies and the "0.6 rule." _Engineering Costs and Production Economics_. https://doi.org/10.1016/0167-188X(86)90053-4
* Weisz et al. (2003) Weisz, R., Heiniger, R., White, J. G., Knox, B., & Reed, L. (2003). Long-Term Variable Rate Lime and Phosphorus Application for Piedmont No-Till Field Crops. _Precision Agriculture_, _4_(3), 311-330. [https://doi.org/10.1023/A:1024908724491](https://doi.org/10.1023/A:1024908724491)
# Surface expression of a wall fountain: application to subglacial discharge plumes
Craig D. McConnochie
[email protected] Department of Civil and Natural Resources Engineering, The University of Canterbury, Christchurch, New Zealand
Claudia Cenedese
Physical Oceanography Department, Woods Hole Oceanographic Institution, Woods Hole, MA, USA
Jim N. McElwaine
Department of Earth Sciences, University of Durham, Durham, UK Planetary Science Institute, Tucson, AZ, USA
## 1 Introduction
The mass loss from the Greenland and Antarctic ice sheets is an increasingly significant component of global sea level rise (e.g. Chambers et al., 2017). Interactions between the polar oceans and the ice sheets are an important control on the rate of glacier melting, but are not fully understood (Straneo and Cenedese, 2015). An area of recent focus has been the subglacial discharge plumes that are released at the base of Greenland glaciers and that have been linked to elevated melt-rates (e.g. Slater et al., 2016; Carroll et al., 2015; Straneo and Cenedese, 2015).
These subglacial discharge plumes originate from surface melting of the ice sheet. The meltwater then flows to the base of the ice sheet where it travels underneath the ice through a complex network of channels. Eventually, the meltwater is released into the fjord at the grounding line -- the location where the ice sheet becomes afloat. There is currently a high degree of uncertainty regarding the geometry of the subglacial discharge sources with possibilities ranging from relatively confined point sources to extended line sources (Jackson et al., 2017). The fresh (\\(\\approx 0\\,\\mathrm{g/kg}\\)) and cold (\\(\\approx 0^{\\mathrm{o}}\\mathrm{C}\\)) meltwater is lighter than the surrounding fluid and forms a turbulent plume that rises up the ice face entraining relatively warm and salty water from the fjord. The entrained fjord water rapidly mixes throughout the plume, enhances the transport of heat and salt to the ice boundary and drives rapid melting, as demonstrated in laboratory experiments by McConnochie and Kerr (2017) and Cenedese and Gatto (2016). Here we focus on glaciers that have vertical or near-vertical termini as opposed to glaciers with a near-horizontal floating ice shelf or ice tongue. With this focus, it is typically assumed that the ice face near subglacial plumes is vertical although recent observations have suggested that the ice face can be undercut (Fried et al., 2015).
Greenland fjords typically have an approximately two-layer stratification (Straneo et al., 2011, 2012). Due to this density stratification, subglacial plumes often rise to a level where they are denser than the ambient fluid. At this point, the flow is best described as a fountain that rises due to its momentum but decelerates due to its buoyancy. The flow continues to entrain relatively warm and salty water from the fjord but instead of accelerating it beginsto decelerate. Throughout this paper we typically use the term plume to describe the general flow resulting from the subglacial discharge of meltwater and the term fountain to describe the specific part of the flow containing upward vertical momentum and downward buoyancy. After rising vertically, the meltwater plume can either intrude horizontally into the ambient stratification at mid-depth if its density is higher than the upper layer, propagate away from the ice at the free surface if its density is lower than the upper layer, or reach the free surface due to excessive vertical momentum before sinking back down to a level of neutral buoyancy (Beaird et al., 2015; Cenedese and Gatto, 2016; Mankoff et al., 2016; Sciascia et al., 2013).
In this study we focus on the third scenario where the meltwater rises through the upper layer as a fountain and then sinks. In contrast to canonical fountains in semi-infinite environments where the flow rises until it has zero vertical momentum and then falls back around the rising fluid to a level of neutral buoyancy (Hunt and Burridge, 2015), we focus here on flows that reach the free surface with significant vertical momentum. This causes the fluid to spread horizontally at the surface for some distance before sinking to a level of neutral buoyancy. Since the subglacial discharge is typically highly turbid, it is often visible as a pool of sediment-laden fluid at the free surface (e.g. How et al., 2017; Mankoff et al., 2016). Although the suspended sediment could have important dynamical effects on the plume and the surface pool, throughout this study we will assume that it is carried as a passive tracer.
Mankoff et al. (2016) photographed a well defined region of turbid fluid in front of Saqqarliup Sermia, Greenland, that was interpreted as a subglacial discharge _pool_. The pool was triangular in shape, approximately \\(300\\,\\mathrm{m}\\) wide at the glacier face and stretched \\(300\\,\\mathrm{m}\\) away from the glacier. Considering that subglacial discharge plumes are typically assumed to be semi-circular (Slater et al., 2016; Mankoff et al., 2016), the triangular shape is somewhat surprising and as yet, has not been explained.
There are several possible explanations for the triangular shaped surface expression such as a sloping glacier face, a secondary circulation induced by the narrow fjord, and increased melting induced by the plume itself leading to an incised glacier face that redirects the surface outflow. In this paper we use a set of laboratory experiments to investigate several controlling mechanisms on the surface expression of subglacial discharge plumes which we model as a fountain, next to a wall, that reaches the free surface. We consider a fountain rather than a plume as we are interested in a flow that will reach the free surface, spread for some distance and then sink. By considering a fountain we effectively limit the investigation to the region where the flow is above its level of neutral buoyancy.
Although both the vertical plume (e.g. Slater et al., 2016) and horizontal intrusion (e.g. Jackson et al., 2017) have been extensively studied, much less attention has been given to how the flow transitions from vertical to horizontal. This transition is important not just for determining the size of the surface expression, but also for understanding how to force the boundary of fjord-scale models (e.g. Cowton et al., 2015; Carroll et al., 2015). As such, although this study is primarily focused on the size of the surface expression, it will also offer insight into how the subglacial discharge flow transitions from vertical to horizontal which has broader implications for Greenland fjords.
Turbulent fountains have been extensively studied in the past (see Hunt and Burridge, 2015). Many of the previous studies have examined the entrainment of ambient fluid into an axisymmetric fountain (e.g. Burridge and Hunt, 2016; Bloomfield and Kerr, 1998). Although turbulent fountains and turbulent plumes are governed by approximately the same force balance, entrainment into fountains is significantly more complex due to the potential entrainment of sinking fluid into the rising fountain. The problem of a fountain adjacent to a wall has also been of interest to many authors given its applicability to building fires and enclosed convection (e.g. Goldman and Jaluria, 1986; Kapoor and Jaluria, 1989; Kaye and Hunt, 2007).
Despite the previous work on turbulent fountains, there are features of subglacial discharge fountains that have not been fully studied. First, much of the previous work on wall fountains has focused on two-dimensional flows whereas we are interested in wall fountains generated from a point source. In addition, subglacial discharge fountains reach the free surface with a significant vertical momentum that causes them to spread horizontally before sinking. As such the upward and downward flows can be spatially separated, causing the horizontal flow field at the free surface to be important to the overall dynamics. As well as the applicability to subglacial discharge surface expressions, understanding this surface flow could be important in a variety of similar problems as it will control where the source fluid will come to rest and how much entrainment of ambient fluid will occur.
From the definition of a fountain given in Hunt and Burridge (2015), the fact that the sinking fluid is not re-entrained into the rising flow would suggest that the considered situation is not in fact a fountain but is a vertical, negatively buoyant jet. However, we retain the term fountain for simplicity and to clarify that the rising flow is denser than the surrounding fluid, unlike what would be expected for a plume.
The purpose of this paper is to investigate some of the physical processes that could control the dimensions and shape of the surface expression of a subglacial discharge plume. Our hypothesis is that if the processes controlling the surface expression are well understood, then the subglacial discharge properties could be inferred from visual observations of the fjord surface.
In §2 we present a theoretical model of the fountain that results from a subglacial discharge plume and of its surface expression. In §3 we describe the experimental apparatus. §4 and §5 give the experimental results and comparisons with predictions from the theoretical model. Finally, in §6 we apply the model to the observations of a surface pool described in Mankoff et al. (2016) and attempt to infer the sub-surface properties of the plume.
## 2 Theory
We consider a scenario similar to that occurring at the Greenland glacier fronts where floating ice shelves are not present: the steady and vertical release of freshwater, from a single point source located next to a wall, into a relatively deep (many source radii) two-layer stratification. The release of freshwater will produce buoyant fluid that is often modelled as a semi-circular plume (Slater et al., 2016; Mankoff et al., 2016). Although the source of the freshwater discharge is likely to have some horizontal momentum in the geophysical case, it is common to model the discharge as a purely vertical flow as the length scale whereby the discharge attaches to the wall is typically much smaller than the total water depth (e.g. Cowton et al., 2015; Xu et al., 2013).
To produce a surface expression of the plume that is denser than the upper layer, we set the ambient density profile and subglacial discharge characteristics such that the plume is initially positively buoyant but, due to entrainment of the lower layer fluid, it becomes more dense than the upper layer. However, the vertical momentum at the interface between the lower and upper layers is assumed to be sufficient that the plume will rise to the free surface and spread horizontally for some distance before sinking back to the interface depth. This is shown schematically in figure 1.
The flow can be considered in two separate regions: first, a vertical flow next to the wall and second, a horizontal negatively buoyant jet that is generated at the free surface. These regions are shown on figure 1 and shall be referred to as \"region 1\" and \"region 2\". As the flow transitions from the first to the second region we assume that all vertical momentum is converted to horizontal momentum. In the appendix we show how the vertical momentum is converted into horizontal momentum in the transition region. Details of the two separate regions and the transition are described below.
To simplify the experiments (§3), we will ignore the positively buoyant plume in the lower layer and consider a dynamically equivalent system only comprising the lighter upper layer. The initial discharge in the simplified system is equivalent to the plume at the density interface in the full geophysical system. The experimental system is shown as the unhatched region in figure 1. In the following section we consider only the simplified system but note that the same equations that are used for the fountain in the upper layer can be applied to the buoyant plume that forms in the lower layer.
### Wall fountain
The canonical equations for a buoyant plume from Morton et al. (1956) are used for both the positively buoyant flow (the plume) in the lower layer and the negatively buoyant flow (the fountain) in the upper layer of region 1. Following Slater et al. (2016) and Ezhova et al. (2018) we have assumed that the wall causes the plume to be semi-circular and adjusted the plume model accordingly. The fountain volume flux \\(Q\\), momentum flux \\(M\\), and buoyancy flux \\(B\\) are calculated from
\\[\\frac{\\mathrm{d}Q}{\\mathrm{d}z} = \\pi\\alpha bw, \\tag{1}\\] \\[\\frac{\\mathrm{d}M}{\\mathrm{d}z} = \\frac{\\pi b^{2}g^{\\prime}}{2}-2C_{d}bw^{2}, \\tag{2}\\]
and
\\[\\frac{\\mathrm{d}B}{\\mathrm{d}z} = \\frac{\\mathrm{d}}{\\mathrm{d}z}\\left(\\frac{\\pi b^{2}wg^{\\prime}}{ 2}\\right)=0 \\tag{3}\\]
where \\(z\\) is the height above the source, \\(b\\) is the top-hat fountain radius, \\(w\\) is the top-hat fountain velocity, \\(g^{\\prime}\\) is the top-hat reduced gravity between the fountain and the surrounding ambient fluid, \\(C_{d}\\) is the drag coefficient assumed to be 0.0025 (Cowton et al., 2015) and \\(\\alpha=w_{e}/w\\) is the entrainment coefficient with \\(w_{e}\\) being the velocity with which ambient fluid is entrained into the fountain. We note that \\(M\\) is actually the specific momentum flux (i.e. the momentum flux divided by the density) but will be referred to as the momentum flux throughout the paper
Figure 1: Schematic showing the flow being considered. Relatively fresh fluid is released into a two-layer stratification with upper layer density \\(\\rho_{u}\\) and lower layer density \\(\\rho_{l}\\). Labels 1, 2, and T show the two regions of the flow and the transition from vertical to horizontal flow. The experiments only consider the upper (unhatched) layer. Therefore the interface between the two density layers on this figure is the base of the experimental tank. \\(s_{y}\\) shows the length of the surface expression in the \\(y\\) (wall-perpendicular) direction as measured in our experiments and \\(\\delta\\) is the thickness of the horizontal negatively buoyant jet.
for simplicity. The value for \\(\\alpha\\) is determined to be 0.10 by the experiments described in SS4. It is assumed that the drag against the wall is negligible compared to the buoyancy forces. Using a drag coefficient of 0.0025, the drag force is estimated to be approximately 5% of the buoyancy force in the laboratory experiments and typically \\(<3\\%\\) of the buoyancy force for a geophysical scenario.
Equations (1)-(3) are initialised at \\(z=0\\) (the base of the unhatched layer on figure 1) using the values of the volume, momentum and buoyancy fluxes and the discharge area at the source. As demonstrated by Kaye and Hunt (2006), for (1)-(3) to be valid the fountain must have a source Froude number (\\(\\text{Fr}_{0}=w_{0}(g_{0}^{\\prime}b_{0})^{-1/2}\\)) greater than approximately 3, where subscript 0 refers to source properties. The width of the plume must also be much less than the thickness of the top layer. In the context of a subglacial discharge, these conditions will generally be met in cases where the flow reaches the surface. Equations (1)-(3) are used to determine the vertical fluxes at the free surface.
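To make the integration concrete, a minimal sketch of (1)-(3) for the semi-circular top-hat fountain is given below; the source values and the 30 cm water depth are arbitrary illustrative numbers in cgs units, while the entrainment and drag coefficients are those quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, CD = 0.10, 0.0025  # entrainment and drag coefficients from the text

def fountain_rhs(z, state):
    """Right-hand side of (1)-(3) for a semi-circular top-hat fountain.
    state = (Q, M, B): volume, specific momentum and buoyancy fluxes."""
    Q, M, B = state
    w = M / Q                              # top-hat velocity
    b = np.sqrt(2.0 * Q**2 / (np.pi * M))  # radius, from Q = (pi/2) b**2 w
    gp = B / Q                             # reduced gravity (negative: fountain)
    dQ = np.pi * ALPHA * b * w             # (1) entrainment over the half-perimeter
    dM = 0.5 * np.pi * b**2 * gp - 2.0 * CD * b * w**2  # (2) buoyancy and wall drag
    return [dQ, dM, 0.0]                   # (3) buoyancy flux conserved

# Illustrative source conditions: b0 = 0.27 cm, w0 = 30 cm/s, g0' = -1 cm/s^2
b0, w0, g0 = 0.27, 30.0, -1.0
Q0 = 0.5 * np.pi * b0**2 * w0
sol = solve_ivp(fountain_rhs, [0.0, 30.0], [Q0, Q0 * w0, Q0 * g0], max_step=0.1)
Q, M, B = sol.y[:, -1]                     # fluxes at the free surface, z = 30 cm
print(f"w = {M / Q:.2f} cm/s, b = {np.sqrt(2 * Q**2 / (np.pi * M)):.2f} cm")
```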
Following Ezhova et al. (2018), we expect the time-averaged density and velocity profiles in the wall-parallel direction to be roughly Gaussian. Perpendicular to the wall, the velocity and density profiles outside of the viscous sublayer can also be adequately modelled by a Gaussian curve, although the rate of spread is significantly lower in the wall-perpendicular than the wall-parallel direction. The more rapid spreading in the wall-parallel direction is explained by Launder and Rodi (1983) in the context of wall jets by the interaction with the wall leading to non-isotropic turbulent fluctuations -- eddies normal to the wall cannot be as large as eddies parallel to the wall.
At the level of the free surface, we expect the fountain to have a density maximum located at the wall but a velocity maximum that is offset some distance due to the no-slip boundary condition imposed by the wall. The distance of this offset, \\(y_{\\text{o}}\\), is taken from direct numerical simulations of a wall plume (Ezhova et al., 2018) and we approximate the velocity profile between the wall and the maximum velocity location as linear in the \\(y\\) direction and Gaussian in the \\(x\\) direction. The velocity and density profiles at the height of the undisturbed free surface can then be described as
\\[w_{s}(x,y)=\\left\\{\\begin{array}{ll}\\overline{w}\\exp\\left[-\\frac{1}{2}\\left( \\frac{x^{2}}{m^{2}}+\\frac{(y-y_{0})^{2}}{n^{2}}\\right)\\right],&y>y_{\\text{o}} \\\\ \\left(\\frac{y\\overline{w}}{y_{\\text{o}}}\\right)\\exp\\left[-\\frac{x^{2}}{2m^{2}} \\right],&y<y_{\\text{o}}\\end{array}\\right. \\tag{4}\\]
and
\\[g_{s}^{\\prime}=\\overline{g^{\\prime}}\\exp\\left[-\\frac{1}{2}\\left(\\frac{x^{2}}{ m^{2}}+\\frac{y^{2}}{n^{2}}\\right)\\right], \\tag{5}\\]
where \\(\\overline{w}\\) and \\(\\overline{g^{\\prime}}\\) are the maximum values of the fountain velocity and reduced gravity, \\(x\\) and \\(y\\) are the distances in the wall-parallel and wall-perpendicular direction, and \\(m\\) and \\(n\\) define the size of the fountain in the \\(x\\) and \\(y\\) directions, respectively. The values of \\(\\overline{w}\\) and \\(\\overline{g^{\\prime}}\\) are taken from the free surface values of equations (1)-(3) and depend on the source conditions and ambient fluid properties (i.e. depth and density structure). The values of \\(m\\) and \\(n\\) are taken from experiments that measured the width and the length of the fountain before it reached the surface (SS4). Finally, \\(x\\) and \\(y\\) are defined such that \\((x,y)=(0,0)\\) gives the centre of the fountain in the \\(x\\) direction and the position of the wall in the \\(y\\) direction. The exact form of the velocity profile for the region \\(y<y_{\\text{o}}\\) is relatively unimportant to the spreading of the surface expression. The fluid between \\(y_{\\text{o}}\\) and the wall spreads parallel to the wall rather than away from the wall, so it has almost no effect on the length of the surface expression (the size in the wall-perpendicular direction). The velocity profile has only a small effect on the width of the surface expression (the size in the wall-parallel direction) due to the small fluxes in this region. As the Reynolds number increases the region between \\(y=0\\) and \\(y=y_{\\text{o}}\\) will contain a smaller proportion of the total fluxes so in a geophysical setting, with a very large Reynolds number, this region is most likely completely insignificant.
### Transition from vertical to horizontal fluxes
We assume that the fountain's momentum causes the free surface to rise a small amount so that the flow can be treated as equivalent to the solution for the flow around a \\(90^{\\circ}\\) corner. This assumption is justified in the appendix. The pressure at the free surface \\(p_{s}\\) leads to the free surface rising according to
\\[Z(x,y)=\\frac{p_{s}(x)}{g}=\\frac{w_{s}^{2}-u_{s}(x)^{2}/2}{g}, \\tag{6}\\]
where \\(g\\) corresponds to the acceleration due to gravity and not the reduced gravity. \\(u_{s}(x)\\) is the horizontal velocity at the surface and the second part of this equation follows from Bernoulli's principle.
Figure 2 shows the modelled free surface deviation \\(Z\\), normalised by the maximum value. A threshold free surface deviation of \\(Z=0.01Z_{\\text{max}}\\) is used to define the outside edge of the fountain (blue line on figure 2). Following Zgheib et al. (2015), we separate the fountain into independent sectors. The sectors are defined such that the arc angle that the sector boundaries make with the centre of the fountain is constant and that the sector edges follow the maximum gradient in the free surface (black dashed lines on figure 2). As such, all sectors start from the centre of the fountain, follow the steepest gradient of the free surface and finish at the fountain boundary at uniformly spaced angles. Due to the asymmetric Gaussian velocity profiles, this results in sector boundaries that are slightly curved rather than being straight lines as in Zgheib et al. (2015). All of the fluid between \\(y_{\\text{o}}\\) and the wall will travel
in the wall-parallel direction so this region is treated as two sectors: one in the positive \\(x\\) direction and the other in the negative \\(x\\) direction.
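The construction of the fountain edge lends itself to a short numerical sketch; the profile parameters and peak velocity below are illustrative (only \(y_{\text{o}}=0.71\) cm is taken from figure 2), and for simplicity the \(u_{s}\) contribution in (6) is neglected so that \(Z\) reduces to the stagnation rise \(w_{s}^{2}/g\):

```python
import numpy as np

# Surface profile parameters (cm) and peak vertical velocity (cm/s), illustrative
m, n, y_o, w_bar = 1.5, 0.9, 0.71, 5.0

x = np.linspace(-6.0, 6.0, 241)
y = np.linspace(0.0, 6.0, 121)
X, Y = np.meshgrid(x, y)

# Vertical velocity profile (4): Gaussian beyond y_o, linear ramp towards the wall
w_s = np.where(Y > y_o,
               w_bar * np.exp(-0.5 * (X**2 / m**2 + (Y - y_o)**2 / n**2)),
               (Y / y_o) * w_bar * np.exp(-X**2 / (2.0 * m**2)))

Z = w_s**2 / 981.0              # free-surface rise (6), u_s term neglected
edge = Z >= 0.01 * Z.max()      # fountain edge: 1% of the maximum deviation
print(f"max rise = {10 * Z.max():.2f} mm, edge reaches y = {Y[edge].max():.2f} cm")
```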
The vertical volume, momentum and buoyancy fluxes entering each sector are calculated from the velocity and reduced gravity profiles given in (4) and (5) as
\[\hat{Q} = \iint\limits_{S}w\,\mathrm{d}A, \tag{7}\] \[\hat{M} = \iint\limits_{S}w^{2}\,\mathrm{d}A, \tag{8}\] \[\hat{B} = \iint\limits_{S}wg^{\prime}\,\mathrm{d}A, \tag{9}\]

where a hat denotes the vertical flux into a sector and \(\iint_{S}\mathrm{d}A\) is the surface integral over the area of a sector. The fluxes are combined with the sector width \(W\) (see figure 2) to calculate the top-hat velocity \(u\), reduced gravity \(g^{\prime}\), and thickness \(\delta\) of the negatively buoyant jets that leave the sector horizontally:
\\[u = \\frac{\\hat{M}}{\\hat{Q}}, \\tag{10}\\] \\[g^{\\prime} = \\frac{\\hat{B}}{\\hat{Q}},\\] (11) \\[\\delta = \\frac{\\hat{Q}^{2}}{\\hat{M}W}. \\tag{12}\\]
The horizontal velocity \\(u\\) leaving a given sector is assumed to be in the direction of maximum free-surface gradient at the centre of the sector as shown by the arrow on figure 2 and consistent with the definition of the sector boundaries.
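A self-contained sketch of the sector fluxes (7)-(9) and the resulting top-hat jet properties (10)-(12) follows; for simplicity the sector is taken as a straight-sided wedge rather than following the curved boundaries of figure 2, and all numerical values are illustrative:

```python
import numpy as np

m, n, y_o = 1.5, 0.9, 0.71          # surface profile parameters (cm), illustrative
w_bar, gp_bar = 5.0, 2.0            # peak velocity (cm/s) and reduced gravity (cm/s^2)

x = np.linspace(-6.0, 6.0, 241)
y = np.linspace(0.0, 6.0, 121)
X, Y = np.meshgrid(x, y)
dA = (x[1] - x[0]) * (y[1] - y[0])

w_s = np.where(Y > y_o,             # velocity profile (4)
               w_bar * np.exp(-0.5 * (X**2 / m**2 + (Y - y_o)**2 / n**2)),
               (Y / y_o) * w_bar * np.exp(-X**2 / (2.0 * m**2)))
gp_s = gp_bar * np.exp(-0.5 * (X**2 / m**2 + Y**2 / n**2))   # profile (5)

# One wedge-shaped sector between 60 and 80 degrees from the wall
theta = np.arctan2(Y, np.abs(X))
sector = (theta > np.deg2rad(60.0)) & (theta < np.deg2rad(80.0))
Q_hat = np.sum(w_s[sector]) * dA            # (7)
M_hat = np.sum(w_s[sector]**2) * dA         # (8)
B_hat = np.sum((w_s * gp_s)[sector]) * dA   # (9)

W = 1.0   # sector width at the fountain edge (cm), illustrative
u, gp, delta = M_hat / Q_hat, B_hat / Q_hat, Q_hat**2 / (M_hat * W)  # (10)-(12)
print(f"u = {u:.2f} cm/s, g' = {gp:.2f} cm/s^2, delta = {delta:.2f} cm")
```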
### Horizontal negatively buoyant jet
The second region is composed of a series of negatively buoyant jets, directed horizontally, and emanating from the outside boundary of the sectors shown in figure 2. The velocity \(u\), reduced gravity \(g^{\prime}\), thickness \(\delta\), width \(W\), and direction \(\beta\) of the jet are all obtained from §2b. It is envisaged that a separate jet leaves each sector. The jets have a cross-sectional area given by the thickness \(\delta\) and the sector width \(W\) and are bounded on the top by the free surface, on either side by the neighbouring jets (or the wall), and on the base by ambient fluid. As such, once the surface expression has reached a quasi-steady size, the jets only entrain ambient fluid through the base. Once the surface expression begins to sink, the jets will be able to entrain ambient fluid from above as well but this is outside the focus of our model. The equations that govern the propagation of each jet are similar to those given in (1)-(3) but are adapted for the different geometry and orientation:
\\[\\frac{\\mathrm{d}Q}{\\mathrm{d}\\mathbf{s}} = \\alpha uW, \\tag{13}\\] \\[\\frac{\\mathrm{d}M_{x,y}}{\\mathrm{d}\\mathbf{s}} = 0,\\] (14) \\[\\frac{\\mathrm{d}M_{z}}{\\mathrm{d}\\mathbf{s}} = W\\delta g^{\\prime}, \\tag{15}\\]
and
\\[\\frac{\\mathrm{d}B}{\\mathrm{d}\\mathbf{s}} = 0, \\tag{16}\\]
where \\(\\mathbf{s}\\) is the distance along the path length of each jet, and \\(M_{x,y}\\) and \\(M_{z}\\) are the momentum fluxes in the horizontal and vertical directions, respectively. An entrainment coefficient of \\(\\alpha=0.1\\) is used in this region, which is a value between that of a pure jet and a pure plume (Carazzo et al., 2006).
The jet centreline position, \(\mathbf{s}=(s_{x},s_{y},s_{z})\), is also tracked for each sector along the jet path as:
\\[\\frac{\\mathrm{d}s_{x}}{\\mathrm{d}\\mathbf{s}} = \\cos\\beta\\,\\cos\\left[\\tan^{-1}\\left(\\frac{M_{z}}{M_{x,y}}\\right) \\right], \\tag{17}\\] \\[\\frac{\\mathrm{d}s_{y}}{\\mathrm{d}\\mathbf{s}} = \\sin\\beta\\,\\cos\\left[\\tan^{-1}\\left(\\frac{M_{z}}{M_{x,y}}\\right) \\right],\\] (18) \\[\\frac{\\mathrm{d}s_{z}}{\\mathrm{d}\\mathbf{s}} = -\\sin\\left[\\tan^{-1}\\left(\\frac{M_{z}}{M_{x,y}}\\right)\\right], \\tag{19}\\]
where \\(\\beta\\) is the horizontal angle of jet propagation taken from SS2b and measured from a plane that is parallel to the wall, as shown on figure 2. The value \\(\\gamma=\\left(\\tan^{-1}\\left(M_{z}/M_{x,y}\\right)\\right)\\) gives the angle of jet propagation in the vertical plane, measured from the horizontal and increasing downwards (i.e. \\(\\gamma\\) is initially zero and increases as the horizontal jet becomes a vertical plume). Finally, \\(s_{z}\\) is constrained such that the distance between the jet centreline and the free surface is at least half the jet thickness. Equations (13)-(19) are evolved in \\(\\mathbf{s}\\) from the edge of the fountain (blue line on figure 2) until the upper surface of the jet is deeper than a predetermined threshold based on the experimental setup, at which point \\(s_{x}\\) and \\(s_{y}\\) define the outside edge of the fountain surface expression for each sector.
Different sectors have different momentum and buoyancy fluxes due to both the fountain asymmetry and the offset between the Gaussian velocity and density profiles. The fountain asymmetry results in the edge of the fountain (blue line on figure 2) having a larger radius of curvature in the centre (\\(x\\approx 0\\)) than near the wall (\\(y\\to y_{0}\\)). This causes the jet to decelerate more rapidly in the wall-parallel direction than in the wall-perpendicular direction. The offset in the velocity and density profiles results in the sectors in the centre of the fountain having a lower reduced gravity and higher velocity than the sectors near the wall. Both of these factors will cause the surface expression to spread further in the wall-perpendicular direction than in the wall-parallel direction, as will be shown when we compare the model predictions with the experimental observations (figure 5).
## 3 Experiments
Experiments were conducted in a glass tank that was 61.5 cm wide in the horizontal directions and 40 cm high. A section of Perspex, almost as wide as the tank, was attached to the base of the tank, approximately 1 cm from one wall, via a hinge. The Perspex could be rotated to represent a vertical or sloping ice face.
A point source was installed at the centre of the hinged wall and 10 cm above the base of the tank. The source had a radius of 0.27 cm and was designed such that the discharge was turbulent from the point of release, as described in Kaye and Linden (2004). The source rotated with the hinged wall such that the discharge was always parallel to the wall and upwards. The fountain typically became attached to the wall after a few centimetres.
The tank was initially filled with a mixture of oceanic salt water and fresh water to provide a predetermined density. The density was measured using an Anton Paar densimeter to an accuracy of \(10^{-6}\) g cm\({}^{-3}\). The temperature of the ambient fluid was thermally equilibrated at room temperature by resting the fluid in a storage drum for at least 12 hours prior to filling the tank. Residual motions caused by filling the experimental tank were left to decay for at least 30 minutes before the experiment was started.
Negatively buoyant seawater, with a small amount of rhodamine dye added for visualisation, was discharged from the source with a flow rate that was controlled by a pump. The flow rate was sufficiently high to impart enough vertical momentum for the negatively buoyant fluid to reach the surface. Similarly to the ambient fluid, the density was measured prior to an experiment with an Anton Paar densimeter and the fluid was allowed at least 12 hours to thermally equilibrate. The pump could provide flow rates from 2.5-7.5 cm\({}^{3}\) s\({}^{-1}\). Lower flow rates would have been possible but were avoided to ensure that the discharge was turbulent.
For most of the experiments (§5), the flow was illuminated with a horizontal light sheet located near the free surface of the tank. Adjacent to the tank, green LEDs produced light that was passed through a cylindrical lens to form a horizontal sheet of green light with a thickness of approximately 0.5-1 cm in the region of interest. Despite the lens, the light sheet spread slightly in the vertical direction causing its thickness to slightly increase away from the wall and its intensity to slightly decrease within the upper 0.5-1 cm. We expect the negatively buoyant jet to be visible near the free surface until its upper surface falls below the base of the light sheet (i.e. 0.5-1 cm below the free surface). For experiments that were designed to measure the horizontal spreading rate of the fountain (§4) the light sheet was placed horizontally at varying depths in the water column. For these experiments the light sheet had a thickness of approximately 0.5 cm in the region occupied by the fountain.
Figure 2: The modelled free surface deviation caused by the fountain impacting the free surface. The blue line shows the defined edge of the fountain and the black dashed lines show the sector boundaries. \\(\\beta\\) is the angle normal to the fountain edge that the jet will propagate in. \\(W\\) is the width of the sector. In this case, the value of \\(y_{0}\\) is 0.71 cm.
The fountain surface expression was recorded with a Nikon camera placed directly above the tank. The camera recorded a video of the entire experiment that was later processed using 'Streams' (Nokes, 2014). Approximately 30 s of video was time averaged to remove the turbulent fluctuations in the surface expression. Letting the experiment continue for longer times resulted in the sinking surface expression fluid entraining back into the fountain. Such a situation would not be expected in a geophysical context primarily because the surface expression is considerably larger than the fountain, but also due to further mixing and advection within the fjord that was not present in the laboratory experiments. A reference image from before the fountain was started was subtracted from the averaged experimental image to remove any effects from inconsistent lighting. Finally the intensity of red light was calculated for each pixel of the averaged and subtracted image and a threshold was applied to the resulting intensity field to determine the edge of the dyed fluid (e.g. figure 3).
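The processing chain can be sketched compactly in NumPy; the published analysis used the 'Streams' package, so the function below, demonstrated on a synthetic frame stack, is an illustrative reimplementation and the 50% threshold is just one plausible choice:

```python
import numpy as np

def surface_expression_mask(frames, reference, threshold_frac=0.5):
    """Time-average a stack of RGB frames, subtract a pre-release reference
    image and threshold the red channel to find the dyed surface expression.

    frames:    (n_frames, ny, nx, 3) array spanning ~30 s of video
    reference: (ny, nx, 3) image recorded before the fountain was started
    """
    mean = frames.mean(axis=0)                  # remove turbulent fluctuations
    red = np.clip(mean[..., 0] - reference[..., 0], 0.0, None)  # rhodamine signal
    return red >= threshold_frac * red.max()   # boolean mask of the dyed region

# Synthetic demonstration: 90 noisy frames containing a dyed Gaussian patch
ny, nx = 120, 160
yy, xx = np.mgrid[0:ny, 0:nx]
patch = np.exp(-((xx - 80.0)**2 + (yy - 60.0)**2) / 400.0)
rng = np.random.default_rng(0)
reference = 0.1 * np.ones((ny, nx, 3))
frames = reference + 0.02 * rng.standard_normal((90, ny, nx, 3))
frames[..., 0] += 0.5 * patch                   # dye appears in the red channel
mask = surface_expression_mask(frames, reference)
print("surface expression area (pixels):", int(mask.sum()))
```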
We note that the dye intensity should not be interpreted as a quantitative measure of the dye concentration throughout the surface expression. First, the dye concentration is uncalibrated and there is no reason to expect a linear relationship between concentration and the intensity of reflected light observed by the camera. Second, across the length of the surface expression we would expect the light sheet to be significantly attenuated. That said, the results do suggest that the observed light intensity primarily decreases due to the advection of dyed fluid below the light sheet. Several experiments were conducted with the entire tank lit by ambient lighting, which allowed the sinking of dye from the free surface to the base of the tank to be observed qualitatively. Furthermore, as discussed later in §5, the measured surface expression dimensions were relatively insensitive to the chosen intensity threshold. This insensitivity is consistent with advective sinking of dye but would not be expected if attenuation of light or mixing of the dyed fluid were causing the observed reduction in intensity.
## 4 Fountain spreading rate
A small set of experiments was conducted to measure the rate at which the rising fountain spreads horizontally due to entrainment of ambient fluid (i.e. the spreading in region 1 of the theoretical model). From Ezhova et al. (2018) we expect the flow to spread more rapidly in the wall-parallel direction than the wall-perpendicular direction. As such, these experiments had two purposes. First, to measure the relative length of the fountain in the wall-parallel and the wall-perpendicular directions and second, to measure a bulk entrainment coefficient for use in the fountain model described in §2a.
We assume that the wall fountain is a semi-ellipse and calculate the top-hat radius of an equivalent semi-circular fountain with the same cross-sectional area as
\\[b=\\sqrt{r_{1}r_{2}}, \\tag{20}\\]
where \\(r_{1}\\) and \\(r_{2}\\) are the measured half-widths of the rising fountain in the wall-parallel and wall-perpendicular directions, respectively.
Figure 3 shows two images from the spreading rate experiments. The images have been processed as described in §3 to show the normalised light intensity. The top image shows the fountain shape at a height of 4.8 cm above the source while the bottom image shows the fountain 10.5 cm above the source. It is clear that the fluid has spread much more rapidly in the wall-parallel direction than in the wall-perpendicular direction, leading to an increasing asymmetry with height. The black lines on figure 3 show the fountain edge based on a threshold intensity of 52% of the maximum value. A threshold value of 52% is used as representative of a top-hat profile where the concentration field spreads slightly more rapidly than the velocity field (Turner, 1973).
Figure 4 shows all measurements of the top-hat fountain half-widths in the wall-parallel and wall-perpendicular directions as a function of height above the source. Also shown is the equivalent radius of a semi-circular fountain from equation (20). A linear regression has been used to calculate the spreading rate of the fountain, from which an entrainment coefficient is determined based on the canonical self-similar plume model (Morton et al., 1956):
\\[\\alpha=\\frac{5}{6}\\frac{\\mathrm{d}b}{\\mathrm{d}z}. \\tag{21}\\]
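The two steps above amount to a single least-squares fit. The sketch below shows how equations (20) and (21) convert the measured half-widths into an entrainment coefficient; the \\((z,r_{1},r_{2})\\) values are placeholders standing in for the data of figure 4.

```python
import numpy as np

# Placeholder (z, r1, r2) measurements in cm; the real values come from
# the thresholded fountain images shown in figure 3.
z  = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # height above the source
r1 = np.array([1.1, 1.6, 2.1, 2.6, 3.1])       # wall-parallel half-width
r2 = np.array([0.75, 0.85, 0.95, 1.05, 1.15])  # wall-perpendicular half-width

b = np.sqrt(r1 * r2)             # equation (20): equivalent top-hat radius

db_dz, b0 = np.polyfit(z, b, 1)  # linear regression for the spreading rate
alpha = (5.0 / 6.0) * db_dz      # equation (21): entrainment coefficient

# With the fit b = 0.12 z + 0.57 reported below, alpha = 5/6 * 0.12 = 0.10.
print(f"b = {db_dz:.2f} z + {b0:.2f} cm, alpha = {alpha:.2f}")
```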
Figure 3: Normalised light intensity showing the horizontal subsurface spreading of the fountain. The top image shows the fountain 4.8 cm above the source while the bottom image shows the fountain 10.5 cm above the source. The black line shows the 52% contour level used to determine the top-hat fountain width.
For the range of heights that were tested, the semi-circular fountain radius is given by
\\[b=0.12z+0.57 \\tag{22}\\]
where \\(b\\) and \\(z\\) are measured in cm and \\(z\\) is measured from the source. Thus, the bulk entrainment coefficient that we used in §2a is \\(\\alpha=0.10\\), consistent with the range of values typically found for turbulent jets and plumes (Carazzo et al., 2006). The ratio of half-widths in the wall-parallel and wall-perpendicular directions is given by
\\[\\frac{r_{1}}{r_{2}}=0.11z+0.71. \\tag{23}\\]
This ratio, as well as the calculated fountain radius from (1)-(3), is used to calculate \\(m\\) and \\(n\\) in (4) and (5). Since our source is circular we would expect the initial ratio of half-widths to be 1. The lower value of 0.71 is likely due to the fountain being drawn towards the wall by the Coanda effect (Wille and Fernholz, 1965) whereby entraining flows create a low pressure region near boundaries and hence are attracted to the boundary.
We note that far away from the source it is expected that the aspect ratio would reach a constant value given by the ratio of the spreading rate in the wall-parallel direction to that in the wall-perpendicular direction. Thus, although the aspect ratio is seen to increase with height for the experimental fluid depths used in the present study, we would expect it to be constant at the greater fluid depths relevant to geophysical situations. We estimate this constant aspect ratio from the upper panel of figure 4 as
\\[\\left(\\frac{\\mathrm{d}r_{1}}{\\mathrm{d}z}\\right)\\bigg{/}\\bigg{(}\\frac{ \\mathrm{d}r_{2}}{\\mathrm{d}z}\\bigg{)}\\approx 4. \\tag{24}\\]
We note that the ratio of spreading rates is very similar to that from numerical simulations of a buoyant plume next to a vertical wall (Ezhova et al., 2018).
## 5 Surface expression
The majority of the experiments were designed to examine the surface expression of the fountain. The initial volume flux and reduced gravity of the fountain, as well as the ambient fluid depth, were all varied over the experiments. These changes are equivalent to changing the properties of the plume penetrating into the upper layer of the fjord and changing the depth of the upper layer.
Burridge and Hunt (2017) considered a two-dimensional dense jet released horizontally at the free surface and found that the non-dimensionalised distance that the jet stays at the surface increases linearly with the source Froude number defined as \\(\\mathrm{Fr}=u(g^{\\prime}\\delta)^{-1/2}\\). Although the flow that we are considering is significantly different from a two-dimensional surface jet, we expect that the length and width of the surface expression in our experiments will have a similar dependence on \\(\\mathrm{Fr}\\).
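The Froude number itself is a one-line computation. In the sketch below the input values are illustrative only, since the experimental \\(u\\) and \\(g^{\\prime}\\) at the transition are outputs of the §2 model rather than directly measured quantities.

```python
import numpy as np

def froude(u, g_prime, delta):
    """Source Froude number Fr = u (g' delta)^(-1/2), evaluated at the
    transition to the horizontal negatively buoyant jet."""
    return u / np.sqrt(g_prime * delta)

# Illustrative values in cgs units (cm/s, cm/s^2, cm).
print(froude(u=5.0, g_prime=8.0, delta=1.5))  # ~1.4 for these inputs
```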
Values of the experimental parameters and calculated values of \\(\\delta\\) and \\(\\mathrm{Fr}\\) are provided for each of the experiments in table 1. The values of \\(\\delta\\), \\(u\\), and \\(g^{\\prime}\\) (and hence \\(\\mathrm{Fr}\\)) are calculated at the transition to the horizontal negatively buoyant jet based on the model presented in §2. Since the values of \\(\\delta\\) and \\(\\mathrm{Fr}\\) are different for each sector in the model we have shown the two extreme values: the wall-parallel direction \\(x\\), and the wall-perpendicular direction \\(y\\).
### Dimensions of the surface expression
Figure 5 shows processed images of the surface expression for three experiments. The edge of the surface expression, defined by the normalised 10% light intensity contour, is shown in black and the model prediction of the surface expression given by \\((s_{x},s_{y})\\) from (17)-(19) is superimposed in blue. A 10% threshold was used, different from the 52% threshold used to determine the top-hat fountain width (figure 4), because we are interested in the actual size of the surface expression rather than a
Figure 4: Experimental measurements of horizontal fountain spreading as a function of height above the source. The top panel shows the measured half widths in the wall-parallel and wall-perpendicular directions, the middle panel shows the top-hat radius of an equivalent semi-circular fountain and the bottom panel shows the ratio of half widths in the wall-parallel (\\(r_{1}\\)) to the wall-perpendicular (\\(r_{2}\\)) direction.
top-hat scale. As such, we selected the minimum threshold possible while remaining above the noise levels of the experimental observations. The model prediction shows reasonable agreement with the experiments but for large values of Fr tends to predict a more semi-circular surface expression than is observed in the experiments.
The reduction in observed light intensity is caused by a variety of processes and is not correlated directly with the concentration of dye within the surface expression. The relatively low sensitivity of the observed surface expression dimensions to changes in the intensity threshold defining the edge of the surface expression (described later) suggests that the primary process reducing the observed light intensity is the advective sinking of dyed fluid below the light sheet. However, there are a number of secondary processes that could also cause the observed reduction of the light intensity. The most important of these are the attenuation of the light sheet as it passes through the dyed fluid, averaging the temporal variability when producing the light intensity figures, and dilution of the surface expression dyed fluid due to mixing. Dilution of the surface expression is increasingly important for experiments with larger Froude numbers. Since the experiments were not designed to measure dye concentration (which would have required a camera with a larger dynamic range and careful calibration) we are unable to quantify the effects of dilution in the experiments. However, based on our model of the surface expression flow, we expect the dye to dilute to 85% and 43% of its maximum value across the surface expression for the top and bottom panel of figure 5, respectively. At high Fr numbers both the dilution and the attenuation of light are significant which could help to explain why the observed surface expression is smaller than the model predictions for large Fr.
Figure 6 shows the experimental and predicted length and half-width of the surface expression, non-dimensionalised by \\(\\delta\\), for each experiment given in table 1. The error bars for the model predictions are calculated by using a depth threshold for the negatively buoyant jet of 0.5 cm and 1 cm to reflect the uncertain position of the bottom of the light sheet during the experiments. The error bars for the experimental results are estimated to be 1 cm for all experiments. This is based on processing the experimental data with intensity thresholds of 5% and 20% when defining the edge of the surface expression for a selection of experiments. The relatively low sensitivity of the measured surface expression dimensions to the chosen intensity threshold suggests that the light intensity is decreasing due to the dyed surface expression fluid sinking
| \\(Q_{0}\\) (ml s\\({}^{-1}\\)) | \\(g^{\\prime}_{0}\\) (cm s\\({}^{-2}\\)) | \\(H\\) (cm) | \\(\\delta_{x}\\) (cm) | \\(\\delta_{y}\\) (cm) | Fr\\({}_{x}\\) | Fr\\({}_{y}\\) |
| --- | --- | --- | --- | --- | --- | --- |
| 2.71 | 3.54 | 10.0 | 1.23 | 1.52 | 1.17 | 1.44 |
| 5.89 | 12.10 | 11.2 | 1.24 | 1.50 | 1.43 | 1.81 |
| 4.30 | 7.00 | 10.0 | 1.08 | 1.33 | 1.69 | 2.08 |
| 5.89 | 10.61 | 11.0 | 1.10 | 1.38 | 1.82 | 2.25 |
| 3.42 | 3.32 | 10.0 | 0.98 | 1.22 | 2.11 | 2.86 |
| 5.89 | 9.18 | 11.0 | 1.05 | 1.31 | 2.13 | 2.66 |
| 4.30 | 3.73 | 10.0 | 0.96 | 1.19 | 2.59 | 3.09 |
| 4.71 | 2.80 | 13.1 | 1.14 | 1.32 | 3.05 | 4.08 |
| 5.09 | 4.06 | 10.0 | 0.89 | 1.10 | 3.83 | 4.72 |
| 5.89 | 4.57 | 11.0 | 0.93 | 1.16 | 3.87 | 4.82 |
| 5.89 | 4.44 | 10.0 | 0.88 | 1.08 | 4.36 | 5.42 |
| 5.89 | 3.32 | 10.0 | 0.86 | 1.06 | 5.28 | 6.52 |
| 7.48 | 4.56 | 10.0 | 0.85 | 1.06 | 5.85 | 7.13 |
| 6.39 | 3.61 | 9.0 | 0.78 | 0.99 | 6.23 | 7.44 |
| 6.68 | 2.92 | 10.0 | 0.85 | 1.05 | 6.60 | 8.13 |
| 6.72 | 3.68 | 8.5 | 0.67 | 0.95 | 7.28 | 8.18 |

Table 1: Source volume flux \\(Q_{0}\\), source reduced gravity \\(g^{\\prime}_{0}\\), free surface height above the source \\(H\\), and calculated values of \\(\\delta\\) and Fr for all of the surface expression experiments with a vertical wall. \\(Q_{0}\\), \\(g^{\\prime}_{0}\\), and \\(H\\) are external parameters that are measured based on the source and ambient properties while \\(\\delta\\) and Fr are calculated at the transition to the negatively buoyant jet based on the model presented in §2.
Figure 5: Normalised light intensity as measured from typical experiments focused on the surface expression. The particular experiments are those with Fr\\({}_{y}=1.44\\) (top), 4.82 (middle), and 8.18 (bottom) as given in table 1. The black line shows the normalised 10% light intensity threshold which was used to determine the edge of the surface expression and the blue line shows the predicted shape of the surface expression based on the model presented in §2.
below the light sheet rather than due to mixing or attenuation of the light sheet. Once the surface expression fluid starts sinking, the light intensity will quickly decrease as the dyed fluid sinks below the thin light sheet. In contrast, both mixing and light attenuation will reduce the observed light intensity continually, with an initially rapid decrease followed by a slower decay.
A linear fit is plotted through the model predictions shown in figure 6. The linear fit is applied to the model predictions rather than the experimental values to test the similarity of our negatively buoyant jet model to that of Burridge and Hunt (2017). The applicability of the model presented by Burridge and Hunt (2017) is not obvious _a priori_. Burridge and Hunt (2017) state that their model is applicable to flows where the Froude number is greater than 12, whereas the Froude numbers in the present study vary between 1 and 9. Additionally, the flows being considered are significantly different. Burridge and Hunt (2017) considered a two-dimensional jet that only spreads vertically due to entrainment from the base, while we consider a jet that also spreads azimuthally as it propagates. However, figure 6 shows that the model presented in §2 also results in the non-dimensionalised surface expression being linearly dependent on \\(\\mathrm{Fr}\\) (with a different constant of proportionality compared to the results of Burridge and Hunt (2017)). The dimensions of the surface expression are accurately predicted by the model presented in §2 across most of the parameter space with a root mean square error of 2.95 and 1.30 for \\(s_{x}/\\delta_{x}\\) and \\(s_{y}/\\delta_{y}\\), respectively (approximately 15% in each case).
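The quoted errors follow the standard root-mean-square definition; the arrays in the sketch below are placeholders, not the experimental values.

```python
import numpy as np

def rmse(model, observed):
    """Root-mean-square error between model predictions and measurements."""
    model, observed = np.asarray(model, float), np.asarray(observed, float)
    return np.sqrt(np.mean((model - observed) ** 2))

# Placeholder values; the paper reports rmse values of 2.95 for s_x/delta_x
# and 1.30 for s_y/delta_y, roughly 15% of the typical non-dimensional size.
print(rmse([10.0, 12.0, 14.0], [9.5, 12.5, 14.5]))
```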
### Aspect ratio of the surface expression
We define an aspect ratio of the surface expression as the width parallel to the wall (\\(s_{x}\\)) divided by the length perpendicular to the wall (\\(s_{y}\\)). Therefore, if the surface expression was semi-circular it would have an aspect ratio of 2. Figure 7 shows the experimental and predicted aspect ratio of the surface expression for the experiments given in table 1. The different symbols indicate different fluid depths. The predicted values are shown in red while the experimental values are shown in blue. The aspect ratio is plotted against \\(\\mathrm{Fr}\\) which is the average of \\(\\mathrm{Fr}_{x}\\) and \\(\\mathrm{Fr}_{y}\\).
Figure 7 shows that the aspect ratio decreases as \\(\\mathrm{Fr}\\) increases. For low values of \\(\\mathrm{Fr}\\) the surface expression is predominantly defined by the shape of the fountain below the surface (figure 3, bottom panel) and the aspect ratio is larger than 2. As \\(\\mathrm{Fr}\\) increases, the negatively buoyant jet travels further away from the wall before sinking below the light sheet and the initial asymmetry in fountain shape becomes less important. Instead, the lower radius of curvature at the middle of the fountain and the offset velocity and density profiles lead to the jet travelling further away from the wall than along the wall (§2c) and the aspect ratio decreases.
### The effect of a sloping wall
A supplementary set of experiments was undertaken to investigate the effect of a sloping wall on the surface
Figure 6: Experimental measurements of the fountain surface expression half-width (\\(s_{x}\\), top panel) and length (\\(s_{y}\\), lower panel) plotted as a function of \\(\\mathrm{Fr}_{x}\\) and \\(\\mathrm{Fr}_{y}\\), respectively. The length and half-width of the surface expression are non-dimensionalised by the jet thickness \\(\\delta\\) in the corresponding direction. Also shown are the model predictions for each experiment with a linear fit to the model predictions.
Figure 7: Experimental and predicted values of the surface expression aspect ratio as a function of \\(\\mathrm{Fr}\\). The value of \\(\\mathrm{Fr}\\) used is an average of \\(\\mathrm{Fr}_{x}\\) and \\(\\mathrm{Fr}_{y}\\). Experimental results are shown in blue while model predictions are shown in red. Experiments with a fluid depth of 10 cm are shown by crosses, a fluid depth of 11–11.2 cm by circles, and a fluid depth of 8.5–9 cm by squares.
expression. Experiments similar to those described in §4 showed that the entrainment coefficient and fountain asymmetry were not significantly affected by a sloping wall. Due to refraction of light from the Perspex wall, visualising the fountain beneath the surface was much more challenging for a sloping wall than for a vertical wall. As such, the results were not used to determine the entrainment coefficient, only to confirm that the value is not significantly different from the vertical case. Furthermore, we assume that the distance that the maximum velocity is offset from the wall is unaffected by the slope.
The model that was presented in §2 is slightly adapted to account for a sloping wall. In the vertical wall case, all of the fountain momentum that entered a sector was converted to horizontal momentum with a direction normal to the sector boundary. For a sloping wall, the same process is undertaken for the vertical component of the fountain momentum, but the horizontal momentum is retained in the wall-perpendicular direction. This adjustment results in the length of the surface expression (\\(s_{y}\\)) being increased and the width of the surface expression (\\(s_{x}\\)) being decreased. The effect on the length is small as the sector boundary in the centre of the surface expression is approximately parallel to the wall. As such, both the vertical and horizontal components of the fountain momentum leave the sector in the wall-perpendicular direction (\\(\\beta=90^{\\circ}\\) on figure 2). In contrast, for the sector next to the wall, the vertical component of the fountain momentum will be directed along the wall while the horizontal component will be directed in the wall-perpendicular direction. More significant changes to the length would require transfer of momentum between sectors and it is not clear how this should be done.
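The redistribution of momentum through the transition region can be made explicit with a short sketch. The function below is our schematic reading of the model, with the sector momentum components and angles taken as inputs; it is not a complete implementation of §2.

```python
import numpy as np

def sector_momentum(m_vert, m_horiz, beta_deg, vertical_wall=True):
    """Horizontal momentum flux (wall-parallel, wall-perpendicular) leaving
    one sector of the transition region.

    m_vert, m_horiz: vertical and wall-perpendicular components of the
        fountain momentum flux entering the sector.
    beta_deg: direction normal to the sector boundary (figure 2), with
        90 degrees pointing in the wall-perpendicular direction.
    """
    beta = np.radians(beta_deg)
    if vertical_wall:
        # Vertical wall: all momentum leaves normal to the sector boundary.
        total = m_vert + m_horiz
        return total * np.cos(beta), total * np.sin(beta)
    # Sloping wall: only the vertical component is redirected normal to the
    # sector boundary; the horizontal component is retained wall-perpendicular.
    return m_vert * np.cos(beta), m_vert * np.sin(beta) + m_horiz
```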
The experimental parameters for the sloping-wall experiments are given in table 2. Figure 8 shows the normalised light intensity field for two experiments with similar discharge characteristics but a vertical and a sloping wall. It can be seen that in the case of a sloping wall, the fountain fluid travels a shorter distance in the wall-parallel direction and a greater proportion of the fluid ends up far away from the wall.
Figure 9 gives the experimental and model results for experiments conducted with a sloping wall and shows that the model predictions do not deviate significantly from the linear dependence on \\(\\mathrm{Fr}\\) that was determined for a vertical wall (solid lines on figure 9). We note that although the model results shown on figure 9 suggest that the dependence of the surface expression size on \\(\\mathrm{Fr}\\) is not significantly affected by the presence of a sloping wall, the modifications to the theoretical model described at the beginning of this section can have a large impact on the value of \\(\\mathrm{Fr}\\). As an example, the experiment shown in table 1 for a vertical wall with \\(Q_{0}=5.89\\,\\mathrm{ml\\,s^{-1}}\\) and \\(g^{\\prime}_{0}=4.57\\,\\mathrm{cm\\,s^{-2}}\\) has Froude number values of \\(\\mathrm{Fr}_{x}=3.87\\) and \\(\\mathrm{Fr}_{y}=4.82\\). In contrast, a similar experiment with a \\(55^{\\circ}\\) slope angle (\\(Q_{0}=5.89\\,\\mathrm{ml\\,s^{-1}}\\) and \\(g^{\\prime}_{0}=4.98\\,\\mathrm{cm\\,s^{-2}}\\) on table 2) has Froude number values of \\(\\mathrm{Fr}_{x}=3.26\\) and \\(\\mathrm{Fr}_{y}=6.64\\). However, even with the modified treatment of the fountain momentum through the transition region, the model under
| \\(\\theta\\) (\\({}^{\\circ}\\)) | \\(Q_{0}\\) (ml s\\({}^{-1}\\)) | \\(g^{\\prime}_{0}\\) (cm s\\({}^{-2}\\)) | \\(H\\) (cm) | \\(\\delta_{x}\\) (cm) | \\(\\delta_{y}\\) (cm) | \\(\\mathrm{Fr}_{x}\\) | \\(\\mathrm{Fr}_{y}\\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 55 | 3.63 | 4.25 | 10.0 | 1.52 | 1.34 | 1.14 | 2.03 |
| 55 | 4.30 | 4.32 | 10.0 | 1.38 | 1.22 | 1.62 | 3.26 |
| 55 | 5.13 | 3.52 | 10.0 | 1.27 | 1.11 | 2.55 | 5.20 |
| 55 | 5.89 | 4.25 | 10.0 | 1.26 | 1.11 | 2.71 | 5.47 |
| 55 | 5.89 | 3.52 | 10.0 | 1.24 | 1.09 | 3.07 | 6.23 |
| 55 | 5.89 | 4.98 | 10.0 | 1.23 | 1.08 | 3.26 | 6.64 |
| 55 | 7.48 | 4.32 | 10.0 | 1.22 | 1.07 | 3.65 | 7.41 |
| 70 | 3.54 | 4.25 | 10.0 | 1.27 | 1.24 | 1.43 | 2.50 |
| 70 | 4.30 | 3.52 | 10.0 | 1.12 | 1.09 | 2.46 | 4.32 |
| 70 | 5.13 | 3.52 | 10.0 | 1.07 | 1.05 | 3.21 | 5.60 |
| 70 | 5.89 | 4.57 | 10.0 | 1.07 | 1.04 | 3.24 | 5.69 |
| 70 | 7.48 | 4.25 | 10.0 | 1.03 | 1.01 | 4.61 | 8.03 |

Table 2: Experimental slope angle \\(\\theta\\), source volume flux \\(Q_{0}\\), source reduced gravity \\(g^{\\prime}_{0}\\), free surface height above the source \\(H\\), and calculated values of \\(\\delta\\) and \\(\\mathrm{Fr}\\) for all of the surface expression experiments with a sloping wall. The angle \\(\\theta\\) is measured from the horizontal.
Figure 8: Normalised light intensity as measured from two typical experiments. The upper panel shows an experiment with a vertical wall and \\(Q_{0}=5.89\\,\\mathrm{ml\\,s^{-1}}\\), \\(g^{\\prime}_{0}=3.32\\,\\mathrm{cm\\,s^{-2}}\\) while the lower panel shows an experiment with \\(\\theta=55^{\\circ}\\) and \\(Q_{0}=5.89\\,\\mathrm{ml\\,s^{-1}}\\), \\(g^{\\prime}_{0}=3.52\\,\\mathrm{cm\\,s^{-2}}\\). The low light intensity around \\(y=5\\,\\mathrm{cm}\\) on the lower panel is an artefact of the top of the Perspex wall and is not physically meaningful. The blue line shows the model prediction and the black line shows the normalised 10% intensity threshold from the experiments.
predicts the length of the surface expression for both \\(55^{\\circ}\\) and \\(70^{\\circ}\\) angles.
Figure 10 shows the measured and predicted values of the surface expression aspect ratio for the sloping wall experiments. Similarly to figure 7, the value of \\(\\mathrm{Fr}\\) that is shown is the mean value of \\(\\mathrm{Fr}_{x}\\) and \\(\\mathrm{Fr}_{y}\\). Since the model systematically under predicts the length of the surface expression, the predicted aspect ratio is always too large. The discrepancy between the predicted and measured aspect ratio is larger for the \\(55^{\\circ}\\) experiments than for the \\(70^{\\circ}\\) experiments as the underestimation of the surface expression length is larger. The discrepancy is reduced for larger values of \\(\\mathrm{Fr}\\) as the surface expression becomes larger and the effect of an approximately constant error in the wall-perpendicular direction is reduced.
## 6 Application to observations
In this section we apply the model presented in §2 to observations of a subglacial discharge plume from Saqqarliup Fjord, Greenland (Mankoff et al., 2016). Photographs of the fjord surface show a triangular surface expression that extends approximately \\(300\\,\\mathrm{m}\\) along the front of the glacier and \\(300\\,\\mathrm{m}\\) into the fjord (see figure 3a of Mankoff et al., 2016). Accompanying these aerial photographs of the surface expression are observations of the water properties within the fjord -- both some distance downfjord and through the surface expression. The oceanographic observations show a significantly different density profile downstream than through the surface expression.
Downstream in the fjord, the \\(150\\,\\mathrm{m}\\) water column has an approximately two-layer density profile with warmer and fresher water overlying cooler and saltier water (see figure 7 of Mankoff et al., 2016). We characterise the overlying layer as \\(S=29.5\\,\\mathrm{g}\\,\\mathrm{kg}^{-1}\\), \\(T=2.0^{\\circ}\\mathrm{C}\\), \\(\\rho=1023.4\\,\\mathrm{kg}\\,\\mathrm{m}^{-3}\\) and the underlying layer as \\(S=33\\,\\mathrm{g}\\,\\mathrm{kg}^{-1}\\), \\(T=1.0^{\\circ}\\mathrm{C}\\), \\(\\rho=1026.3\\,\\mathrm{kg}\\,\\mathrm{m}^{-3}\\), where \\(S\\) and \\(T\\) are representative values of the salinity and temperature for each layer taken from figure 7 of Mankoff et al. (2016) and \\(\\rho\\) is the density at those conditions calculated based on the thermodynamic equation of seawater (IOC, SCOR and IAPSO, 2010). Throughout the remainder of this section we will refer to these two layers as the upper and lower layers, respectively.
Salinity profiles through the surface expression show that the density profile is significantly altered with a weak and more uniform stratification (see figure 5 of Mankoff et al., 2016). It is expected that during periods when the subglacial discharge is not present, as is generally assumed throughout winter, the two-layer stratification will exist throughout the entire fjord but that the strong subglacial discharge displaces the upper layer down the fjord.
In addition to the photographic observation of the surface expression and temperature-salinity profiles, Mankoff et al. (2016) attempted to infer the subglacial discharge flux based on observed water mass properties. They estimated the subglacial discharge flux at the base of the glacier to be \\(105-140\\,\\mathrm{m}^{3}\\,\\mathrm{s}^{-1}\\), which compares favorably with estimated runoff for the 5 days prior to observation from the RACMO model of \\(101\\,\\mathrm{m}^{3}\\,\\mathrm{s}^{-1}\\). In their attempts at modelling the flow, Mankoff et al. (2016) assumed that the radius of the subglacial discharge source was \\(5-15\\,\\mathrm{m}\\).
We will apply the model presented in §2 to the observations in two distinct ways. The first of these is most analogous to the laboratory experiments and corresponds to the expected conditions when the plume first develops at the start of the melt season. As mentioned above, without a strong subglacial discharge, we expect the two-layer stratification observed away from the surface expression in Mankoff et al. (2016) to exist across the entire fjord. The fluid immediately below the surface expression is taken to have properties equal to the upper layer. As such, the plume rises through this two-layer stratification until it hits the free surface and then sinks into the underlying fluid that is less dense than the surface expression.
The second way that we apply the model uses the observations in Mankoff et al. (2016) more directly. Based on the density profiles through the surface expression (figure 5 of Mankoff et al., 2016), two adjustments to the previously described conceptual framework are required. First, the plume will not entrain fluid from the two-layer stratification which is positioned far downstream but from the fluid that directly surrounds the plume. The fluid surrounding the plume is best approximated by the salinity and the temperature profiles through the surface expression (dark lines on figure 5 of Mankoff et al., 2016) -- recall that the surface expression is roughly 10 times as large as the expected plume diameter (approximately \\(20\\,\\mathrm{m}\\), figure 12a of Mankoff et al., 2016), so the profiles through the surface expression are still outside the rising plume. Using either the downstream density profile or that through the surface expression for entrainment into the vertical plume has a minimal impact on either the vertical plume or the density of the surface expression, but our choice is more consistent with the observations than using the downstream density profile. The more significant conceptual adaptation that needs to be made arises from the observation that the water column through the surface expression is _stably_ stratified in the vertical direction (again, refer to the dark lines on figure 5 of Mankoff et al., 2016). As a result, the surface expression fluid is unable to sink vertically beneath a less dense underlying fluid layer and an alternate mechanism for limiting the surface expression size needs to be considered.
We propose that instead of _sinking_ into a less dense underlying layer, the surface expression fluid is _subducted_ below the less dense upper layer that is positioned further down the fjord. A subduction mechanism is reminiscent of an atmospheric cold front where two layers of air with different densities flow into one another and the less dense layer is displaced upwards. However, in this case, instead of less dense air being pushed upwards, the denser fluid layer (the surface expression) is pushed downwards. The subduction mechanism is consistent with observational data (figure 7 of Mankoff et al., 2016), which shows strongly sloping isopycnals near the edge of the surface expression but no evidence of dense fluid on top of less dense fluid, as is required by the sinking mechanism explored in the experiments and is shown on the schematic in figure 1.
Under the subduction mechanism, the size of the surface expression is determined by the position of the density front -- i.e. how far down the fjord the upper layer is displaced. To estimate the displacement we consider that, in the absence of a subglacial discharge, the upper layer would occupy the full extent of the fjord. The relaxation back to this steady state would be forced by the buoyancy difference between the upper layer and the surface expression fluid. The velocity scale of such a flow is given by \\(u_{f}\\sim\\sqrt{g_{f}^{\\prime}\\delta_{f}}\\), where \\(g_{f}^{\\prime}\\) is the reduced gravity based on the upper layer and surface expression densities and \\(\\delta_{f}\\) is the thickness of the surface expression or the upper layer, which have approximately the same value in the observations of Mankoff et al. (2016). We take \\(\\delta_{f}=15\\,\\)m based on figure 7 of Mankoff et al. (2016). Values for \\(g_{f}^{\\prime}\\) depend on the source volume flux and radius of the plume as they will impact the density of the surface expression. However, as a representative case, using a radius of \\(10\\,\\)m and a source volume flux of \\(120\\,\\)m\\({}^{3}\\,\\)s\\({}^{-1}\\) gives \\(g_{f}^{\\prime}=5.8\\times 10^{-3}\\,\\)m\\(\\,\\)s\\({}^{-2}\\) and \\(u_{f}=0.30\\,\\)m\\(\\,\\)s\\({}^{-1}\\). The velocity \\(u_{f}\\) can be thought of as the velocity with which the upper layer would propagate towards the ice face if the subglacial discharge were to be suddenly stopped. However, the presence of the surface expression resists the flow of the upper layer towards the ice face and, at a steady state, the front between the upper layer and the surface expression will be stationary. It follows that the position of this front will be the location where the surface expression velocity is equal to \\(u_{f}\\). If the surface expression velocity was greater than \\(u_{f}\\) at the front then the upper layer would be pushed back downfjord. Alternatively, if the surface expression velocity was less than
Figure 10: Experimental and predicted values of the surface expression aspect ratio as a function of \\(\\mathrm{Fr}\\) for the experiments with a sloping wall. The value of \\(\\mathrm{Fr}\\) used is the average of \\(\\mathrm{Fr_{x}}\\) and \\(\\mathrm{Fr_{y}}\\).
Figure 9: Experimental measurements and model predictions for the fountain surface expression for experiments with a sloping wall. The top row shows the wall-parallel (\\(s_{x}\\)) direction and the bottom row shows the wall-perpendicular (\\(s_{y}\\)) direction. The left column shows experiments with a slope angle of \\(70^{\\circ}\\) and the right column shows experiments with a slope angle of \\(55^{\\circ}\\). The solid lines are the same linear fit shown in figure 6 for a vertical wall.
\\(u_{f}\\) at the front then the upper layer would be able to propagate towards the ice face until a balance is obtained.
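The representative values quoted above can be checked directly with a two-line calculation:

```python
import numpy as np

delta_f = 15.0       # m, surface-expression / upper-layer thickness
g_prime_f = 5.8e-3   # m s^-2, reduced gravity between the two water masses

u_f = np.sqrt(g_prime_f * delta_f)  # frontal velocity scale
print(u_f)                          # ~0.3 m/s, as quoted in the text
```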
The model presented in §2 provides the velocity in the surface expression as a function of position. As such we can use it to find where the surface expression velocity is equal to \\(u_{f}\\) and use this location as an estimate of the surface expression size. Figure 11 shows estimates of the surface expression length as a function of discharge volume flux and discharge radius using both the sinking mechanism (top) and the subducting mechanism (bottom). We note that although the sinking mechanism is not consistent with figures 5 and 7 of Mankoff et al. (2016), it is expected to be representative of the dynamics occurring when the subglacial discharge starts at the onset of the melting season.
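Locating the front is then a one-dimensional root-finding problem. The sketch below uses an illustrative power-law decay in place of the §2 velocity field, which must be substituted for a real calculation.

```python
from scipy.optimize import brentq

def u_model(s):
    """Illustrative stand-in for the section-2 surface-expression velocity
    (m/s) at a distance s (m) from the fountain; not the real model."""
    return 1.2 / (s + 1.0)

def frontal_position(u_f, s_min=1e-3, s_max=100.0):
    """Distance at which the surface-expression velocity equals u_f."""
    return brentq(lambda s: u_model(s) - u_f, s_min, s_max)

print(frontal_position(u_f=0.30))  # = 3.0 m for this illustrative profile
```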
Figure 11 shows that the subduction mechanism predicts the size of the surface expression with more accuracy than the sinking mechanism. When considering the sinking mechanism (top panel of figure 11), the surface expression is typically much shorter than that which was observed by Mankoff et al. (2016). For the predicted surface expression size to be comparable to that which was observed, the density of the surface expression needs to be very similar to that of the upper layer, which leads to unrealistic sensitivity of the surface expression size to small changes in the discharge flow rate. In contrast, the subduction mechanism predicts a surface expression size that is comparable to the observations, particularly for larger radii and lower source volume fluxes. The lower end of the source volume flux range is also more consistent with the estimate based on RACMO data of \\(101\\,\\mathrm{m}^{3}\\,\\mathrm{s}^{-1}\\).
Combined with the observational evidence that vertical density profiles are stably stratified at all locations in the fjord (figures 5 and 7 of Mankoff et al., 2016), figure 11 provides strong support for a subduction mechanism determining the surface expression length rather than a direct sinking mechanism. We stress that subduction of the surface expression still relies on the presence of fluid that is less dense than the surface expression. The difference is that for the surface expression to \"sink\" the less dense fluid must be directly underneath the surface expression whereas if the less dense fluid is horizontally adjacent, the surface expression will be \"subducted\".
It is worth considering why the difference between the laboratory experiments and observations exists. The observational evidence suggests that the entire upper layer is displaced downfjord by the surface expression which is then subducted beneath the upper layer. In contrast, the experiments clearly demonstrate sinking of the surface expression into a less dense underlying layer of fluid. The key difference between the two situations is that in the fjord observations, the thickness of the surface expression is comparable to that of the upper layer, whereas in the laboratory experiments the upper layer is approximately ten times as thick as the surface expression. When the surface expression and upper layer are of comparable thickness, the entire upper layer must be displaced downfjord and the interface below the surface expression remains stably stratified. However, when the upper layer is significantly thicker than the surface expression, some portion of the upper layer remains below the surface expression and the interface becomes vertically unstable.
## 7 Conclusion
We have considered the surface expression of a subglacial discharge plume in a stratified fjord. Several processes that could control the surface expression size and shape have been considered to attempt to understand the triangular surface expression observed at Saqqarliup Fjord (Mankoff et al., 2016). Our hypothesis is that if these processes can be better understood, subsurface properties of these plumes could be inferred from observations of the surface expression which are considerably simpler to make.
To this end, we have presented a model and experiments that examine the surface expression of a fountain released adjacent to a wall that is either vertical or sloping. The
Figure 11: Calculation of the length of the subglacial discharge plume surface expression for a variety of source discharge volume fluxes and source radii. The top panel shows results using a two-layer stratification throughout the entire fjord and the _sinking_ mechanism. The bottom panel shows results using the stratification observed in Mankoff et al. (2016) through the surface expression and the _subduction_ mechanism. Note the significantly different scales in both the \\(x\\) and the \\(y\\) axes for the two panels. Solid black lines show the observed length of the surface expression and dashed black lines show the estimated range of the subglacial discharge flux from Mankoff et al. (2016).
model separates the flow into vertical and horizontal regions. A transition region where all of the vertical momentum is converted to horizontal momentum through a free-surface pressure gradient connects the two regions. The model predicts that the dimensions of the surface expression of the fountain, non-dimensionalised by the thickness of the horizontal jet, will be linearly dependent on \\(\\mathrm{Fr}\\), which is consistent with previous studies of two-dimensional negatively buoyant surface jets (Burridge and Hunt, 2017).
Experiments are first used to examine the shape and spreading rate of the subsurface fountain. Similar to Ezhova et al. (2018), we find that the fountain spreads more rapidly in the wall-parallel direction than in the wall-perpendicular direction. Neither the rate of spreading nor the asymmetry is significantly affected by a sloping wall.
Experimental measurements of the dimensions of the surface expression generally agree with the model for a vertical wall. For a sloping wall, the model slightly under predicts the length of the surface expression. At least for a vertical wall, the experiments appear to confirm that the transition from vertical to horizontal flow was treated appropriately in the model. This provides some insight into how subglacial plumes transition to horizontal intrusions, which could help constrain the forcing in models of fjord-scale circulation. However, a more complicated treatment of the transition is probably required if the ice face is significantly overcut.
The surface expression shape tends to become less semi-circular and more triangular as the size increases, consistent with the observed triangular surface expression at Saqqarliup Fjord (Mankoff et al., 2016). However, the predicted surface expression shape based on our model remains semi-elliptical rather than becoming triangular (see, for example, figure 5). This inconsistency highlights that, although the model can predict the size and aspect ratio of the surface expression, it is not able to predict its shape at high values of \\(\\mathrm{Fr}\\) and requires further development.
Finally, the model is applied to observations of a subglacial discharge plume in front of Saqqarliup Sermia (Mankoff et al., 2016). We apply the model in two separate ways exploring possible mechanisms by which the surface expression fluid could move away from the surface. The first of these mechanisms involves _sinking_ into a less dense fluid body that underlies the surface expression. The second mechanism involves the surface expression fluid being _subducted_ below a less dense fluid body that is horizontally adjacent. Consistent with the oceanographic observations presented in Mankoff et al. (2016), predictions of the surface expression size are more accurate if the subduction mechanism is considered rather than the sinking mechanism. The similarity between the model predictions and observations of the surface expression size suggests that it may be possible to infer the subglacial discharge properties at the source from observations of the downfjord density profile and observations of the surface expression size. However, the experiments presented in this study were focused on the sinking mechanism and further experiments examining the subduction mechanism (i.e. with an upper layer of comparable depth to the surface expression) would help to demonstrate the applicability of the subduction mechanism. In addition, comparison with observations from other fjords is needed to assess the generality of these results. In particular, in a fjord where the surface expression is significantly less deep than the upper layer, the dynamics are expected to be different and, in this scenario, the sinking mechanism may be more appropriate.
Acknowledgments.We gratefully acknowledge technical assistance from Anders Jensen and thank anonymous reviewers for improving the clarity of the manuscript. CM thanks the Weston Howard Jr. Scholarship for funding. Support to CC was given by NSF project OCE-1434041 and OCE-1658079.
## Conversion of vertical momentum flux to horizontal momentum flux
This appendix justifies the assumption made in §2b that the transition from vertical to horizontal flow can be treated as the flow around a \\(90^{\\circ}\\) corner with no loss of momentum or mass fluxes. The following calculation assumes incompressible and two-dimensional flow. Within the vertical plane through the centreline, and in the vicinity of the corner, these assumptions are expected to be valid.
We note that within this appendix \\(x\\), \\(y\\), \\(z\\), and \\(w\\) are used differently from the rest of the paper. Instead, we use the standard complex-variable nomenclature of Lamb (1916). The nomenclature within the appendix should be considered self-contained and independent of the remainder of the manuscript.
The inner boundary of the fountain is in contact with the stationary ambient fluid and, after subtracting the hydrostatic pressure, at high Reynolds number it can be regarded as a free surface (i.e. the pressure is zero). There is a general method using complex variables due to Lamb (1916) (§**33**) which can be used for solving these problems in the inviscid, irrotational case. That is, it conserves momentum and mass, which should be a good approximation near the surface.
In our geometry there is a single right-angle bend corresponding to a power of \\(1/2\\); therefore, following Lamb (1916), we must have
\\[\\frac{\\mathrm{d}z}{\\mathrm{d}w}=\\sqrt{\\coth w},\\] (A1)

where \\(z=x+iy\\) is the complex coordinate, \\(w=\\phi+i\\psi\\), \\(\\phi\\) is the velocity potential and \\(\\psi\\) is the stream function. \\(\\psi=0\\) is the streamline along the wall and the horizontal free surface and \\(\\psi=\\pi/4\\) is the streamline along the inner boundary of the fountain. Integrating (A1) gives
\\[z=\\cot^{-1}\\sqrt{\\coth w}+\\coth^{-1}\\sqrt{\\coth w},\\] (A2)
an implicit equation for the stream function \\(\\psi\\). Applying Bernoulli's theorem, the pressure is given by
\\[p=C-\\frac{u^{2}+v^{2}}{2}=C-\\frac{1}{2}\\left|\\frac{\\mathrm{d}w}{\\mathrm{d}z} \\right|^{2}=\\frac{1}{2}-\\frac{1}{2}|\\tanh w|,\\] (A3)
where we have chosen the pressure to be zero at the inner boundary of the fountain. The pressure along the vertical wall and the horizontal free surface is given by taking \\(w=s\\), where \\(s\\) is real and \\(w=0\\) is the corner:
\\[p_{w}=\\frac{1}{1+e^{|2s|}}.\\] (A4)
The connection between \\(s\\) and world coordinates comes from (A2).
The equation for the inner boundary of the fountain is given implicitly by substituting \\(w=s+i\\pi/4\\) into (A2). In the vicinity of the bend we can write a parametric power series solution
\\[\\begin{array}{lll}x&=&\\frac{\\pi+\\log(3+2\\sqrt{2})}{4}\\\\ &&+\\frac{1}{\\sqrt{2}}\\left(+s+\\frac{1}{2}s^{2}-\\frac{1}{6}s^{3}-\\frac{5}{24}s^{4}+\\frac{17}{120}s^{5}\\right),\\\\ y&=&\\frac{\\pi+\\log(3+2\\sqrt{2})}{4}\\\\ &&+\\frac{1}{\\sqrt{2}}\\left(-s+\\frac{1}{2}s^{2}+\\frac{1}{6}s^{3}-\\frac{5}{24}s^{4}-\\frac{17}{120}s^{5}\\right).\\end{array}\\] (A5, A6)
If we define \\(R=\\left[\\pi+\\log(3+2\\sqrt{2})\\right]/2\\) we can write this solution to the same order fully implicitly as
\\[R^{2}=(x+y)^{2}+(x-y)^{2}/\\sqrt{2}+\\frac{1}{8}\\left(\\frac{R}{\\sqrt{18}}-1 \\right)(x-y)^{4}+\\cdots.\\] (A7)
We can also look far from the corner where we have
\\[y=\\frac{\\pi}{4}+\\exp(\\pi/2-2x),\\quad\\text{or}\\quad x=\\frac{\\pi}{4}+\\exp(\\pi/2 -2y).\\] (A8)
Thus the solution corresponds to a jet of thickness \\(\\pi/4\\) and uniform incoming and outgoing speed \\(1\\). The solution can trivially be scaled to any velocity and size.
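The solution is straightforward to evaluate numerically. The sketch below codes the near-corner series, the far-field asymptote (A8), and the wall pressure (A4) in the unit-speed, thickness-\\(\\pi/4\\) normalisation of this appendix; rescaling to experimental units is a multiplication.

```python
import numpy as np

R0 = (np.pi + np.log(3 + 2 * np.sqrt(2))) / 4  # constant offset in the series

def inner_boundary(s):
    """Parametric inner fountain boundary (x(s), y(s)) near the corner,
    from the power series above; valid for small |s|."""
    sx = s + s**2 / 2 - s**3 / 6 - 5 * s**4 / 24 + 17 * s**5 / 120
    sy = -s + s**2 / 2 + s**3 / 6 - 5 * s**4 / 24 - 17 * s**5 / 120
    return R0 + sx / np.sqrt(2), R0 + sy / np.sqrt(2)

def far_field_y(x):
    """Far-field inner boundary (A8): a jet of thickness pi/4."""
    return np.pi / 4 + np.exp(np.pi / 2 - 2 * x)

def wall_pressure(s):
    """Pressure along the wall and free surface, equation (A4)."""
    return 1.0 / (1.0 + np.exp(np.abs(2 * s)))

s = np.linspace(-0.5, 0.5, 5)
print(np.column_stack(inner_boundary(s)))
print(far_field_y(3.0), wall_pressure(0.0))  # boundary -> pi/4; p = 1/2 at corner
```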
Figure A1 shows a time-averaged normalised light intensity image of the fountain viewed from the side of the tank with the solution for flow around a corner superimposed. The edge of the fountain is identified as the location where the normalised light intensity falls to less than \\(20\\%\\) of the maximum value at each height. The edge of the fountain is only used for comparison with the solution for flow around a corner and as such the exact threshold is not significant.
It can be seen that the solution given by (A7) agrees well with the contour of light intensity near the free surface where the solution is expected to be valid, justifying the assumption of inviscid, irrotational flow in the vicinity of the corner.
## References
* Beaird et al. (2015) Beaird, N., F. Straneo, and W. Jenkins, 2015: Spreading of Greenland meltwaters in the ocean revealed by noble gases. _Geophys. Res. Lett._, **42**, 7705-7713.
* Bloomfield and Kerr (1998) Bloomfield, L. J., and R. C. Kerr, 1998: Turbulent fountains in a stratified fluid. _J. Fluid Mech._, **358**, 335-356.
* Burridge and Hunt (2016) Burridge, H. C., and G. R. Hunt, 2016: Entrainment by turbulent fountains. _J. Fluid Mech._, **790**, 407-418.
* Burridge and Hunt (2017) Burridge, H. C., and G. R. Hunt, 2017: From free jets to clinging wall jets: the influence of a horizontal boundary on a horizontally forced buoyant jet. _Phys. Rev. Fluids_, **2**, 023501.
* Carazzo et al. (2006) Carazzo, G., E. Kaminski, and S. Tait, 2006: The route to self-similarity in turbulent jets and plumes. _J. Fluid Mech._, **547**, 137-148.
* Carroll et al. (2015) Carroll, D., D. A. Sutherland, E. L. Shroyer, J. D. Nash, G. A. Catania, and L. A. Stearns, 2015: Modeling turbulent subglacial meltwater plumes: Implications for fjord-scale buoyancy-driven circulation. _J. Phys. Oceanogr._, **45**, 2169-2185.
* Cenedese and Gatto (2016) Cenedese, C., and V. M. Gatto, 2016: Impact of a localized source of subglacial discharge on the heat flux and submarine melting of a tidewater glacier: a laboratory study. _J. Phys. Oceanogr._, **46**, 3155-3163.
* Chambers et al. (2017) Chambers, D. P., A. Cazenave, N. Champollion, H. Dieng, W. Llovel, and R. Forsberg, 2017: Evaluation of the global mean sea level budget between 1993 and 2014. _Surveys in Geophys._, **38**, 309-327.
Figure A1: Time-averaged normalised light intensity of a side view of the fountain (i.e. \\(y\\) and \\(z\\) are the Cartesian coordinates). The blue solid line shows where the light intensity falls to \\(20\\%\\) of the maximum intensity at that height which we use as a measure of the inner boundary of the fountain. The black dashed line shows the solution for flow around a corner from (A7) using Lamb’s method. Dotted lines show the enlarged section on the right panel.
Cowton, T., D. Slater, A. Sole, D. Goldberg, and P. Nienow, 2015: Modeling the impact of glacial runoff on fjord circulation and submarine melt rate using a new subgrid-scale parameterization for glacial plumes. _J. Geophys. Res. Oceans_, **120**, 796-812.
* Ezhova _et al._ (2018) Ezhova, E., C. Cenedese, and L. Brandt, 2018: Dynamics of three-dimensional turbulent wall plumes and implications for estimates of submarine glacier melting. _J. Phys. Oceanogr._, **48**, 1941-1950.
* Fried and Coauthors (2015) Fried, M. J., and Coauthors, 2015: Distributed subglacial discharge drives significant submarine melt at a Greenland tidewater glacier. _Geophys. Res. Lett._, **42**, 9328-9336.
* Goldman and Jaluria (1986) Goldman, D., and Y. Jaluria, 1986: Effect of opposing buoyancy on the flow in free and wall jets. _J. Fluid Mech._, **166**, 41-56.
* How and Coauthors (2017) How, P., and Coauthors, 2017: Rapidly-changing subglacial hydrology pathways at a tidewater glacier revealed through simultaneous observations of water pressure, supraglacial lakes, meltwater plumes and surface velocities. _The Cryosphere_, **11**, 2691-2710.
* Hunt and Burridge (2015) Hunt, G. R., and H. C. Burridge, 2015: Fountains in industry and nature. _Annu. Rev. Fluid Mech._, **47**, 195-220.
* IOC, SCOR and IAPSO (2010) IOC, SCOR and IAPSO, 2010: _The international thermodynamic equation of seawater - 2010: Calculation and use of thermodynamic properties_. UNESCO.
* Jackson and Coauthors (2017) Jackson, R. H., and Coauthors, 2017: Near-glacier surveying of a subglacial discharge plume: implications for plume parameterizations. _Geophys. Res. Lett._, **44**, 6886-6894.
* Kapoor and Jaluria (1989) Kapoor, K., and Y. Jaluria, 1989: Heat transfer from a negatively buoyant wall jet. _Int. J. Heat Mass Transfer_, **32**, 697-709.
* Kaye and Hunt (2006) Kaye, N. B., and G. R. Hunt, 2006: Weak fountains. _J. Fluid Mech._, **558**, 319-328.
* Kaye and Hunt (2007) Kaye, N. B., and G. R. Hunt, 2007: Overturning in a filling box. _J. Fluid Mech._, **576**, 297-323.
* Kaye and Linden (2004) Kaye, N. B., and P. F. Linden, 2004: Coalescing axisymmetric turbulent plumes. _J. Fluid Mech._, **502**, 41-63.
* Lamb (1916) Lamb, H., 1916: _Hydrodynamics_. 4th ed., Cambridge University Press.
* measurements and modeling. _Ann. Rev. Fluid Mech._, **15**, 419-459.
* Mankoff _et al._ (2016) Mankoff, K. D., F. Straneo, C. Cenedese, S. B. Das, C. G. Richards, and H. Singh, 2016: Structure and dynamics of a subglacial discharge plume in a Greenlandic fjord. _J. Geophys. Res. Oceans_, **121**, 8670-8688.
* McConnochie and Kerr (2017) McConnochie, C. D., and R. C. Kerr, 2017: Enhanced ablation of a vertical ice face due to an external freshwater plume. _J. Fluid Mech._, **810**, 429-447.
* Morton _et al._ (1956) Morton, B. R., G. I. Taylor, and J. S. Turner, 1956: Turbulent gravitational convection from maintained and instantaneous sources. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_, Vol. 234, 1-23.
* Nokes (2014) Nokes, R., 2014: _Streams, version 2.03: system theory and design_. Department of Civil and Natural Resources Engineering, University of Canterbury, New Zealand.
* Sciascia _et al._ (2013) Sciascia, R., F. Straneo, C. Cenedese, and P. Heimbach, 2013: Seasonal variability of submarine melt rate and circulation in an East Greenland fjord. _J. Geophys. Res. Oceans_, **118**, 2492-2506.
* Slater _et al._ (2016) Slater, D. A., D. N. Goldberg, P. W. Nienow, and T. R. Cowton, 2016: Scalings for submarine melting at tidewater glaciers from buoyant plume theory. _J. Phys. Oceanogr._, **46**, 1839-1855.
* Straneo and Cenedese (2015) Straneo, F., and C. Cenedese, 2015: The dynamics of Greenland's glacial fjords and their role in climate. _Annu. Rev. Mar. Sci._, **7**, 89-112.
* Straneo _et al._ (2011) Straneo, F., R. G. Curry, D. A. Sutherland, G. S. Hamilton, C. Cenedese, K. Vage, and L. A. Stearns, 2011: Impact of fjord dynamics and glacial runoff on the circulation near Helheim Glacier. _Nat. Geosci._, **4**, 322-327.
* Straneo and Coauthors (2012) Straneo, F., and Coauthors, 2012: Characteristics of ocean waters reaching Greenland's glaciers. _Annals of Glaciol._, **53**, 202-210.
* Turner (1973) Turner, J. S., 1973: _Buoyancy effects in fluids_. Cambridge University Press.
* Wille and Fernholz (1965) Wille, R., and H. Fernholz, 1965: Report on the first European Mechanics Colloquium, on the Coanda effect. _J. Fluid Mech._, **23**, 801-819.
* Xu _et al._ (2013) Xu, Y., E. Rignot, I. Fenty, and D. Menemenlis, 2013: Subaqueous melting of Store Glacier, West Greenland from three-dimensional, high-resolution numerical modeling and ocean observations. _Geophys. Res. Lett._, **40**, 4648-4653.
* Zgheib _et al._ (2015) Zgheib, N., T. Bonometti, and S. Balachandar, 2015: Dynamics of non-circular finite-release gravity currents. _J. Fluid Mech._, **783**, 344-378. | We use laboratory experiments and theoretical modeling to investigate the surface expression of a subglacial discharge plume, as occurs at many fjords around Greenland. The experiments consider a fountain that is released vertically into a homogeneous fluid, adjacent either to a vertical or a sloping wall, that then spreads horizontally at the free surface before sinking back to the bottom. We present a model that separates the fountain into two separate regions: a vertical fountain and a horizontal, negatively buoyant jet. The model is compared to laboratory experiments that are conducted over a range of volume fluxes, density differences, and ambient fluid depths. It is shown that the non-dimensionalised length, width, and aspect ratio of the surface expression are dependent on the Froude number, calculated at the start of the negatively buoyant jet. The model is applied to observations of the surface expression from a Greenland subglacial discharge plume. In the case where the discharge plume reaches the surface with negative buoyancy the model can be used to estimate the discharge properties at the base of the glacier. | Give a concise overview of the text below. | 213 |
Sara Beery*\\({}^{\\dagger}\\), Elijah Cole*, Arvi Gjoka\\({}^{\\dagger}\\)
California Institute of Technology* Google\\({}^{\\dagger}\\)
## 1 Introduction
In order to understand the effects of pollution, exploitation, urbanization, global warming, and conservation policy on our planet's biodiversity, we need access to accurate, consistent biodiversity measurements. Researchers often use _camera traps_ - static, motion-triggered cameras placed in the wild - to study changes in species diversity, population density, and behavioral patterns. These cameras can take thousands of images per day, and the time it takes for human experts to identify species in the data is a major bottleneck. By automating this process, we can provide an important tool for scalable biodiversity assessment.
Camera trap images are taken automatically based on a triggered sensor, so there is no guarantee that the animal will be centered, focused, well-lit, or at an appropriate scale (they can be either very close or very far from the camera, each causing its own problems). See Fig. 2 for examples of these challenges. Further, up to 70% of the photos at any given location may be triggered by something other than an animal, such as wind in the trees. Automating camera trap labeling is not a new challenge for the computer vision community [12, 17, 18, 19, 20, 23, 24, 25, 26, 7, 2]. However, most of the proposed solutions have used the same camera locations for both training and testing the performance of an automated system. If we wish to build systems that are trained to detect and classify animals and then deployed to new locations without further training, we must measure the ability of machine learning and computer vision to _generalize to new environments_ [20, 6]. This is central to the 2018 [2019], 2019 [4], and 2020 iWildCam challenges.
The 2020 iWildCam challenge includes a new component: the use of multiple data modalities (see Fig. 1). An ecosystem can be monitored in a variety of ways (e.g. camera traps, citizen scientists, remote sensing) each of which has its own strengths and limitations. To facilitate the exploration of techniques for combining these complementary data streams, we provide a time series of remote sensing imagery for each camera trap location as well as curated subsets of the iNaturalist competition datasets matching the species seen in the camera trap data. It has been shown that species classification performance can be dramatically improved by using information beyond the image itself [15, 8, 10] so we expect that participants will find creative and effective uses for this data.
Figure 1: **The iWildCam 2020 dataset.** This year’s dataset includes data from multiple modalities: camera traps, citizen scientists, and remote sensing. Here we can see an example of data from a camera trap paired with a visualization of the infrared channel of the paired remote sensing imagery.
## 2 Data Preparation
The dataset consists of three primary components: (i) camera trap images, (ii) citizen science images, and (iii) multispectral imagery for each camera location.
### Camera Trap Data
The camera trap data (along with expert annotations) is provided by the Wildlife Conservation Society (WCS) [2]. We split the data by camera location, so no images from the test cameras are included in the training set to avoid overfitting to one set of backgrounds [7].
The training set contains \\(217,959\\) images from \\(441\\) locations, and the test set contains \\(62,894\\) images from \\(111\\) locations. These \\(552\\) locations are spread across \\(12\\) countries in different parts of the world. Each image is associated with a location ID so that images from the same location can be linked. As is typical for camera traps, approximately 50% of the total number of images are empty (this varies per location).
There are 276 species represented in the camera trap images. The class distribution is long-tailed, as shown in Fig. 3. Since we have split the data by location, some classes appear only in the training set. Any images with classes that appeared only in the test set were removed.
### iNaturalist Data
iNaturalist is an online community where citizen scientists post photos of plants and animals and collaboratively identify the species [1]. To facilitate the use of iNaturalist data, we provide a mapping from our classes into the iNaturalist taxonomy.1 We also provide the subsets of the iNaturalist 2017-2019 competition datasets [22] that correspond to species seen in the camera trap data. This data provides \\(13,051\\) additional images for training, covering \\(75\\) classes.
Footnote 1: Note that for the purposes of the competition, competitors may only use iNaturalist data from the iNaturalist competition datasets.
Though small relative to the camera trap data, the iNaturalist data has some unique characteristics. First, the class distribution is completely different (though it is still long tailed). Second, iNaturalist images are typically higher quality than the corresponding camera trap images, providing valuable examples for hard classes. See Fig. 4 for a comparison between iNaturalist images and camera trap images.
### Remote Sensing Data
For each camera location we provide multispectral imagery collected by the Landsat 8 satellite [21]. All data comes from the Landsat 8 Tier 1 Surface Reflectance dataset [13] provided by Google Earth Engine [14]. This data has been atmospherically corrected and meets certain radiometric and geometric quality standards.
**Data collection.** The precise location of a camera trap is generally considered to be sensitive information, so we first obfuscate the coordinates of the camera. For each time point when imagery is available (the Landsat 8 satellite images the Earth once every 16 days), we extract a square _patch_ centered at the obfuscated coordinates consisting of 9 bands of multispectral imagery and 2 bands of per-pixel metadata. Each patch covers an area of 6km \\(\\times\\) 6km. Since one Landsat 8 pixel covers an area of 30m \\(\\times\\) 30m, each patch is \\(200\\times 200\\times 11\\) pixels. Note that the bit depth of Landsat 8 data is 16.
The multispectral imagery consists of 9 different bands,
Figure 3: **Camera trap class distribution.** Per-class distribution of the camera trap data, which exhibits a long tail. We show examples of both a common class (the African giant pouched rat) and a rare class (the Indonesian mountain weasel). Within the plot we show images of each species, centered and focused, from iNaturalist. On the right we show images of each species within the frame of a camera trap, from WCS.
Figure 2: **Common data challenges in camera trap images.** (1) **Illumination**: Animals are not always well-lit. (2) **Motion blur**: common with poor illumination at night. (3) **Size of the region of interest** (ROI): Animals can be small or far from the camera. (4) **Occlusion**: e.g. by bushes or rocks. (5) **Camouflage**: decreases saliency in animals’ natural habitat. (6) **Perspective**: Animals can be close to the camera, resulting in partial views of the body.
ordered by descending frequency / ascending wavelength. Band 1 is ultra-blue. Bands 2, 3, and 4 are traditional blue, green, and red. Bands 5-9 are infrared. Note that bands 8 and 9 are from a different sensor than bands 1-7 and have been upsampled from 100m to 30m per pixel. Refer to [13] or [21] for more details.
Each patch of imagery has two corresponding _quality assessment_ (QA) bands which carry per-pixel metadata. The first QA band (pixelqa) contains automatically generated labels for classes like clear, water, cloud, or cloud shadow which can help to interpret the pixel values. The second QA band (radsatqa) labels the pixels in each band for which the sensor was saturated. Cloud cover and saturated pixels are common issues in remote sensing data, and the QA bands may provide some assistance. However, they are automatically generated and cannot be trusted completely. See [13] for more details.
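As an illustration, one possible way to use the pixelqa band to mask unusable pixels is sketched below. The bit layout is our reading of the Collection 1 specification and should be verified against [13]; the helper names and the 80% threshold are illustrative assumptions, not part of the dataset.

```
import numpy as np

# Assumed Collection 1 pixelqa bit layout (verify against [13]):
# bit 0: fill, bit 1: clear, bit 2: water, bit 3: cloud shadow, bit 5: cloud.
QA_BITS = {"fill": 0, "clear": 1, "water": 2, "cloud_shadow": 3, "cloud": 5}

def qa_mask(pixelqa, flag):
    """Boolean mask of pixels where the given QA flag is set; pixelqa is
    the (200, 200) uint16 QA band of a single patch."""
    return ((pixelqa >> QA_BITS[flag]) & 1) == 1

def usable_fraction(pixelqa):
    """Fraction of the patch that is neither cloud nor fill."""
    bad = qa_mask(pixelqa, "cloud") | qa_mask(pixelqa, "fill")
    return 1.0 - bad.mean()

# Hypothetical usage: keep time points where at least 80% of a patch is usable.
series = np.zeros((23, 200, 200), dtype=np.uint16)  # stand-in for a patch time series
kept = [t for t in range(series.shape[0]) if usable_fraction(series[t]) > 0.8]
```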
## 3 Baseline Results
We trained a basic image classifier as a baseline for comparison. The model is a randomly initialized Inception-v3 with input size \\(299\\times 299\\), which was trained using only camera trap images. During training, images were randomly cropped and perturbed in brightness, saturation, hue, and contrast. We used the rmsprop optimizer with an initial learning rate of 0.0045 and a decay factor of 0.94.
Let \\(C\\) be the number of classes. We trained using a class balanced loss from [11], given by
\\[\\mathcal{L}^{\\prime}(\\mathbf{p},y)=\\frac{1-\\beta}{1-\\beta^{n_{y}}}\\mathcal{L }(\\mathbf{p},y)\\]
where \\(\\mathbf{p}\\in\\mathbb{R}^{C}\\) is the vector of predicted class probabilities (after softmax), \\(y\\in\\{1,\\dots,C\\}\\) is the ground truth class, \\(\\mathcal{L}\\) is the categorical cross-entropy loss, \\(n_{y}\\) is the number of samples for class \\(y\\), and \\(\\beta\\) is a hyperparameter which we set to 0.9.
This baseline achieved a macro-averaged F1 score of \\(0.62\\) and an accuracy of \\(62\\%\\) on the iWildCam 2020 test set.
## 4 Conclusion
The iWildCam 2020 dataset provides a test bed for studying generalization to new locations at a larger geographic scale than previous iWildCam competitions [4, 6]. In addition, it facilitates exploration of multimodal approaches to camera trap image classification and pairs remote sensing imagery with camera trap imagery for the first time.
In subsequent years, we plan to extend the iWildCam challenge by adding additional data streams and tasks, such as detection and segmentation. We hope to use the knowledge we gain throughout these challenges to facilitate the development of systems that can accurately provide real-time species ID and counts in camera trap images at a global scale. Any forward progress made will have a direct impact on the scalability of biodiversity research geographically, temporally, and taxonomically.
## 5 Acknowledgements
We would like to thank Dan Morris and Siyu Yang (Microsoft AI for Earth) for their help curating the dataset, providing bounding boxes from the MegaDetector, and hosting the data on Azure. We also thank the Wildlife Conservation
Figure 4: **Camera trap data (left) vs iNaturalist data (right).** (1) Animal is large, so camera trap image does not fully capture it. (2) Animal is small, so it makes up a small part of the camera trap images. (3) Quality is equivalent, although iNaturalist images have more camera pose and animal pose variation.
Society for providing the camera trap data and annotations. We thank Kaggle for supporting the iWildCam competition for the past three years. Thanks also to the FGVC Workshop, Visipedia, and our advisor Pietro Perona for continued support. This work was supported in part by NSF GRFP Grant No. 1745301. The views are those of the authors and do not necessarily reflect the views of the NSF.
## References
* [1] iNaturalist. [https://www.inaturalist.org/](https://www.inaturalist.org/).
* [2] Wildlife Conservation Society Camera Traps Dataset. [http://lila.science/datasets/wcscameratraps](http://lila.science/datasets/wcscameratraps).
* [3] Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Neel Joshi, Markus Meister, and Pietro Perona. Synthetic examples improve generalization for rare classes. In _The IEEE Winter Conference on Applications of Computer Vision_, pages 863-873, 2020.
* [4] Sara Beery, Dan Morris, and Pietro Perona. The iwildcam 2019 challenge dataset. _ArXiv_, abs/1907.07617, 2019.
* [5] Sara Beery, Dan Morris, and Siyu Yang. Efficient pipeline for camera trap image review. _arXiv preprint arXiv:1907.06772_, 2019.
* [6] Sara Beery, Grant van Horn, Oisin MacAodha, and Pietro Perona. The iwildcam 2018 challenge dataset. _arXiv preprint arXiv:1904.05986_, 2019.
* [7] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 456-473, 2018.
* [8] Sara Beery, Guanhang Wu, Vivek Rathod, Ronny Votel, and Jonathan Huang. Context r-cnn: Long term temporal context for per-camera object detection. _arXiv preprint arXiv:1912.03538_, 2020.
* [9] Guobin Chen, Tony X Han, Zhihai He, Roland Kays, and Tavis Forrester. Deep convolutional neural network based species recognition for wild animal monitoring. In _Image Processing (ICIP), 2014 IEEE International Conference on_, pages 858-862. IEEE, 2014.
* [10] Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, and Hartwig Adam. Geo-aware networks for fine-grained recognition. _ICCV Workshop on Computer Vision for Wildlife Conservation_, 2019.
* [11] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge J. Belongie. Class-balanced loss based on effective number of samples. _CoRR_, abs/1901.05555, 2019.
* [12] Jhony-Heriberto Giraldo-Zuluaga, Augusto Salazar, Alexander Gomez, and Angelica Diaz-Pulido. Camera-trap images segmentation using multi-layer robust principal component analysis. _The Visual Computer_, pages 1-13, 2017.
* [13] Google Earth Engine. USGS Landsat 8 Surface Reflectance Tier 1. [https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR).
* [14] Noel Gorelick, Matt Hancher, Mike Dixon, Simon Ilyushchenko, David Thau, and Rebecca Moore. Google earth engine: Planetary-scale geospatial analysis for everyone. _Remote Sensing of Environment_, 2017.
* [15] Oisin Mac Aodha, Elijah Cole, and Pietro Perona. Presence-only geographical priors for fine-grained image classification. _ICCV_, 2019.
* [16] Agnieszka Miguel, Sara Beery, Erica Flores, Loren Klemesrud, and Rana Bayrakcismith. Finding areas of motion in camera trap images. In _Image Processing (ICIP), 2016 IEEE International Conference on_, pages 1334-1338. IEEE, 2016.
* [17] Mohammad Sadegh Norouzzadeh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, and Jeff Clune. A deep active learning system for species identification and counting in camera trap images. _arXiv preprint arXiv:1910.09716_, 2019.
* [18] Mohammad Sadegh Norouzzadeh, Anh Nguyen, Margaret Kosmala, Ali Swanson, Craig Packer, and Jeff Clune. Automatically identifying wild animals in camera trap images with deep learning. _arXiv preprint arXiv:1703.05830_, 2017.
* [19] Stefan Schneider, Graham W Taylor, and Stefan Kremer. Deep learning object detection methods for ecological camera trap data. In _2018 15th Conference on Computer and Robot Vision (CRV)_, pages 321-328. IEEE, 2018.
* [20] Michael A Tabak, Mohammad S Norouzzadeh, David W Wolfson, Eric J Newton, Raoul K Boughton, Jacob S Ivan, Eric A Odell, Eric S Newkirk, Reesa Y Conrey, Jennifer L Stenglein, et al. Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2. _bioRxiv_, 2020.
* [21] U.S. Geological Survey. Landsat 8 Imagery. [https://www.usgs.gov/land-resources/nli/landsat/landsat-8](https://www.usgs.gov/land-resources/nli/landsat/landsat-8).
* [22] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 8769-8778, 2018.
* [23] Alexander Gomez Villa, Augusto Salazar, and Francisco Vargas. Towards automatic wild animal monitoring: Identification of animal species in camera-trap images using very deep convolutional neural networks. _Ecological Informatics_, 41:24-32, 2017.
* [24] Michael J Wilber, Walter J Scheirer, Phil Leitner, Brian Heflin, James Zott, Daniel Reinke, David K Delaney, and Terrance E Boult. Animal recognition in the mojave desert: Vision tools for field biologists. In _Applications of Computer Vision (WACV), 2013 IEEE Workshop on_, pages 206-213. IEEE, 2013.
* [25] Hayder Yousif, Jianhe Yuan, Roland Kays, and Zhihai He. Fast human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. In _Circuits and Systems (ISCAS), 2017 IEEE International Symposium on_, pages 1-4. IEEE, 2017.
* [26] Zhi Zhang, Zhihai He, Guitao Cao, and Wenming Cao. Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification. _IEEE Transactions on Multimedia_, 18(10):2079-2092, 2016. | Camera traps enable the automatic collection of large quantities of image data. Biologists all over the world use camera traps to monitor animal populations. We have recently been making strides towards automatic species classification in camera trap images. However, as we try to expand the geographic scope of these models we are faced with an interesting question: how do we train models that perform well on new (unseen during training) camera trap locations? Can we leverage data from other modalities, such as citizen science data and remote sensing data? In order to tackle this problem, we have prepared a challenge where the training data and test data are from different cameras spread across the globe. For each camera, we provide a series of remote sensing imagery that is tied to the location of the camera. We also provide citizen science imagery from the set of species seen in our data. The challenge is to correctly classify species in the test camera traps. | Summarize the following text. | 182 |
arxiv-format/1902_05402v1.md | # Spectral-Spatial Diffusion Geometry for Hyperspectral Image Clustering
James M. Murphy Mauro Maggioni
J.M. Murphy is with the Department of Mathematics at Tufts University; email: [email protected]. Maggioni is with the Department of Mathematics and the Department of Applied Mathematics and Statistics at Johns Hopkins University; email: [email protected]
## I Introduction
As the volume of data captured by remote sensors grows unabated, human capacity for providing labeled training datasets is strained. State-of-the-art supervised learning algorithms (e.g. deep neural networks) require very large training sets to fit the massive number of parameters associated with their high-complexity architectures. In order to take advantage of the deluge of unlabeled remote sensing data available, new methods that are _unsupervised_--requiring no training data--are necessary.
This article proposes an efficient unsupervised clustering algorithm for hyperspectral imagery (HSI) that exploits not only low-dimensional geometry in the high-dimensional space of spectra, but also the spatial regularity in the 2-dimensional image structure of the pixels. This is achieved by considering _spectral-spatial diffusion geometry_ as captured by spatially regularized diffusion distances [1, 2]. These distances are integrated into the recently proposed _diffusion learning_ algorithm, which has achieved competitive performance versus benchmark and state-of-the-art unsupervised HSI clustering algorithms [3, 4].
The remainder of this article is organized as follows. Background on HSI clustering and diffusion geometry is presented in Sec. II. The proposed algorithm is described and evaluated in Sec. III and IV, respectively. Conclusions and future research directions are presented in Sec. V.
## II Background
The unsupervised clustering of HSI \\(X=\\{x_{n}\\}_{n=1}^{N}\\subset\\mathbb{R}^{D}\\)--understood as a point cloud--consists in providing labels \\(\\{y_{n}\\}_{n=1}^{N}\\) to each data point without access to labeled training data. The total number of pixels in the image is \\(N\\), and the number of spectral bands is \\(D\\). Typically \\(D\\) is quite large, causing traditional statistical methods to perform sub-optimally. However, the different clusters in the data--corresponding to regions with distinct material properties--often exhibit low-dimensional, though nonlinear, structure. In order to efficiently exploit this structure, methods for clustering HSI that learn the underlying nonlinear geometry have been developed, including methods based on non-negative matrix factorization [5], regularized graph Laplacians [6], angle distances [7], and deep neural networks [8].
Recently, the _diffusion geometry_[1, 2] of \\(X\\) has been proposed in order to infer the latent clusters. The _diffusion distance at time \\(t\\)_ between \\(x,y\\in X\\), denoted \\(d_{t}(x,y)\\), is a notion of distance determined by the underlying geometry of the point cloud \\(X\\). The computation of \\(d_{t}\\) involves constructing a weighted, undirected graph \\(\\mathcal{G}\\) with vertices corresponding to the \\(N\\) points in \\(X\\), and weighted edges stored in the \\(N\\times N\\) weight matrix with entries \\(W(x,y):=\\exp(-\\|x-y\\|_{2}^{2}/\\sigma^{2})\\) if \\(x\\in NN_{k}(y)\\) and \\(W(x,y)=0\\) otherwise, where \\(\\sigma\\) is a scaling parameter and \\(NN_{k}(y)\\) denotes the set of \\(k\\)-nearest neighbors of \\(y\\) in \\(X\\) with respect to Euclidean distance. Typically \\(\\sigma\\) is chosen adaptively, for example as some multiple of the average distance to nearest neighbors [9], and \\(k\\ll N\\) is chosen so that \\(W\\) is sparse; we set \\(k=100\\) in all experiments. Let \\(P(x,y)=W(x,y)/\\mathrm{deg}(x)\\) be a Markov diffusion matrix defined on \\(X\\), where \\(\\mathrm{deg}(x):=\\sum_{y\\in X}W(x,y)\\) is the degree of \\(x\\). The _diffusion distance at time \\(t\\)_ is
\\[d_{t}(x,y):=\\sqrt{\\sum\
olimits_{u\\in X}(P^{t}(x,u)-P^{t}(y,u))^{2}}. \\tag{1}\\]
The computation of \\(d_{t}(x,y)\\) involves summing over all paths of length \\(t\\) connecting \\(x\\) to \\(y\\), so \\(d_{t}(x,y)\\) is small if \\(x,y\\) are similar according to \\(P^{t}\\).
The eigendecomposition of \\(P\\) yields fast algorithms to compute \\(d_{t}\\): the matrix \\(P\\) admits a spectral decomposition (under mild conditions, see [2]) with eigenvectors \\(\\{\\Phi_{n}\\}_{n=1}^{N}\\) and eigenvalues \\(\\{\\lambda_{n}\\}_{n=1}^{N}\\), where \\(1=\\lambda_{1}\\geq|\\lambda_{2}|\\geq\\cdots\\geq|\\lambda_{N}|\\). The diffusion distance (1) can then be written as
\\[d_{t}(x,y)=\\sqrt{\\sum\
olimits_{n=1}^{N}\\lambda_{n}^{2t}(\\Phi_{n}(x)-\\Phi_{n}( y))^{2}}\\,. \\tag{2}\\]
The weighted eigenvectors \\(\\{\\lambda_{n}^{t}\\Phi_{n}\\}_{n=1}^{N}\\) are new data-dependent coordinates of \\(X\\), which are nearly geometrically intrinsic [1]. The parameter \\(t\\) measures how long the diffusion process on \\(\\mathcal{G}\\) has run: small values of \\(t\\) allow a small amount of diffusion, which may prevent the interesting geometry of \\(X\\) from being discovered. On the other hand, large \\(t\\) allow the diffusion process to run for so long that the fine geometry is homogenized. In all our experiments we set \\(t=30\\); see [4] and [10] for empirical and theoretical analyses of \\(t\\), respectively.
If the underlying graph \\(\\mathcal{G}\\) is connected, \\(|\\lambda_{n}|<1\\) for \\(n>1\\), so that the sum (2) may be approximated by its truncation at some suitable \\(2\\leq M\\ll N\\). In our experiments, \\(M\\) was set to be the value at which the decay of the eigenvalues \\(\\{\\lambda_{n}\\}_{n=1}^{N}\\) begins to taper [4]. The subset \\(\\{\\lambda_{n}^{t}\\Phi_{n}\\}_{n=1}^{M}\\) used in the computation of \\(d_{t}\\) is a dimension-reduced set of diffusion coordinates. Indeed, the mapping
\\[x\\mapsto(\\lambda_{1}^{t}\\Phi_{1}(x),\\lambda_{2}^{t}\\Phi_{2}(x),\\ldots,\\lambda_ {M}^{t}\\Phi_{M}(x)) \\tag{3}\\]
may be understood as a form of dimension reduction from the ambient space \\(\\mathbb{R}^{D}\\) to \\(\\mathbb{R}^{M}\\). The truncation also enables us to compute only the first \\(M=O(1)\\) eigenvectors, reducing computational complexity.
Diffusion maps consider the data \\(X\\) as a point cloud in \\(\\mathbb{R}^{D}\\). If the data has organizing structure beyond its \\(D\\)-dimensional coordinates, this information can be incorporated into the diffusion maps construction by modifying the underlying transition matrix \\(P\\). In the case of HSI, each point is not only a high dimensional spectrum, but also a pixel arranged in an image. In particular, the HSI enjoys _spatial regularity_, in the sense that points in a particular class are likely to have their nearest spatial neighbors in the same class. The incorporation of spatial information into supervised learning algorithms is known to improve empirical performance for a variety of data sets and methods [11, 12, 13]. In this article, we propose to extend the recently proposed _diffusion learning_ unsupervised clustering framework [4] by directly incorporating spatial information into the underlying diffusion matrix \\(P\\).
## III Description of Algorithm
The proposed algorithm first constructs a Markov diffusion matrix, \\(P\\), under the constraint that pixels may only be connected to other pixels that are within some spatial radius \\(r\\). Figure 1 shows how nearest neighbors with and without this spatial constraint differ. The construction of \\(P\\) and the corresponding eigenpairs used to compute the diffusion distances are described in Algorithm 1.
```
1. Input: \\(X,r\\).
2. Connect each element \\(x\\in X\\) to its \\(k=100\\) nearest neighbors \\(y\\) within spatial radius \\(r\\), with edge weight \\(\\exp(-\\|x-y\\|_{2}^{2}/\\sigma^{2})\\) stored in \\(W\\).
3. Let \\(D\\) be the diagonal degree matrix such that \\(D_{ii}=\\sum_{j=1}^{N}W_{ij}\\), and set \\(P=D^{-1}W\\).
4. Compute the top \\(M\\) eigenpairs of \\(P\\), \\(\\{(\\lambda_{n},\\Phi_{n})\\}_{n=1}^{M}\\).
5. Output: \\(\\{(\\lambda_{n},\\Phi_{n})\\}_{n=1}^{M}\\).
```
**Algorithm 1** Spectral-Spatial Diffusion Maps
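A minimal Python sketch of Algorithm 1 follows, assuming the spectra \\(X\\) and their two-dimensional pixel coordinates are given as arrays. The adaptive choice of \\(\\sigma\\), the symmetrization of \\(W\\), and taking real parts of the eigenpairs are implementation choices of ours rather than steps prescribed above.

```
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs
from sklearn.neighbors import NearestNeighbors

def spectral_spatial_diffusion_maps(X, coords, r, k=100, M=10):
    """Sketch of Algorithm 1: k nearest spectral neighbors restricted to a
    spatial radius r, followed by the top M eigenpairs of P = D^{-1} W."""
    N = X.shape[0]
    spatial = NearestNeighbors(radius=r).fit(coords)
    rows, cols, vals = [], [], []
    for i in range(N):
        # Candidate neighbors: all pixels within spatial radius r of pixel i.
        cand = spatial.radius_neighbors(coords[i:i + 1], return_distance=False)[0]
        d = np.linalg.norm(X[cand] - X[i], axis=1)
        order = np.argsort(d)[:k]            # k nearest in spectral space
        sigma = d[order].mean() + 1e-12      # adaptive scale (our choice, cf. [9])
        rows += [i] * len(order)
        cols += list(cand[order])
        vals += list(np.exp(-d[order] ** 2 / sigma ** 2))
    W = csr_matrix((vals, (rows, cols)), shape=(N, N))
    W = 0.5 * (W + W.T)                      # symmetrize (our choice)
    deg = np.asarray(W.sum(axis=1)).ravel()
    P = csr_matrix(W.multiply(1.0 / deg[:, None]))   # row-stochastic P = D^{-1} W
    lam, Phi = eigs(P, k=M, which="LM")      # top M eigenpairs
    idx = np.argsort(-np.abs(lam))
    # P is nonsymmetric, so eigs may return small spurious imaginary parts.
    return np.real(lam[idx]), np.real(Phi[:, idx])
```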
Once the eigenpairs are computed, pairwise diffusion distances are simply Euclidean distances in the new coordinate system (3). Cluster modes are computed as points maximizing \\(\\mathcal{D}_{t}=p(x)\\rho_{t}(x)\\), where \\(p(x)\\) is a kernel density estimator and \\(\\rho_{t}(x)\\) is the diffusion distance of a point to its nearest neighbor of higher density. The mode detection algorithm is summarized in Algorithm 2; see [4] for details.
```
1. Input: \\(X,K,t\\).
2. Compute a kernel density estimate \\(p(x_{n})\\) for each \\(x_{n}\\in X\\).
3. Compute diffusion distances using Algorithm 1 and (2).
4. Compute \\(\\{\\rho_{t}(x_{n})\\}_{n=1}^{N}\\), the diffusion distance from each point to its nearest neighbor in diffusion distance of higher empirical density.
5. Compute the learned modes \\(\\{x_{i}^{*}\\}_{i=1}^{K}\\) as the \\(K\\) maximizers of \\(\\mathcal{D}_{t}(x_{n})=p(x_{n})\\rho_{t}(x_{n})\\).
6. Output: \\(\\{x_{i}^{*}\\}_{i=1}^{K},\\{p(x_{n})\\}_{n=1}^{N},\\{\\rho_{t}(x_{n})\\}_{n=1}^{N}\\).
```
**Algorithm 2** Geometric Mode Detection Algorithm
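A minimal sketch of Algorithm 2 follows, operating directly on the truncated diffusion coordinates of (3) so that Euclidean distances equal \\(d_{t}\\). The dense \\(N\\times N\\) distance matrix and the handling of the global density maximum are simplifications of ours.

```
import numpy as np
from scipy.spatial.distance import cdist

def geometric_mode_detection(diff_coords, density, K):
    """Sketch of Algorithm 2. diff_coords: (N, M) array of coordinates
    lambda_n^t Phi_n(x); density: (N,) kernel density estimates p(x_n)."""
    Dt = cdist(diff_coords, diff_coords)   # pairwise diffusion distances
    N = len(density)
    rho = np.empty(N)
    for i in range(N):
        higher = np.where(density > density[i])[0]
        # rho_t: distance to the nearest point of higher density; the global
        # density maximum is assigned the largest distance by convention.
        rho[i] = Dt[i, higher].min() if higher.size else Dt[i].max()
    score = density * rho                  # D_t(x_n) = p(x_n) rho_t(x_n)
    modes = np.argsort(score)[::-1][:K]    # the K maximizers
    return modes, rho
```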
From the modes, points are labeled iteratively--from highest density to lowest density--according to their nearest spectral neighbor of higher density that has already been labeled, unless it is the case that such a labeling would strongly violate spatial regularity. In that case, points are labeled according to their spatial nearest neighbors. The spectral-spatial labeling scheme is summarized in Algorithm 3, and its crucial parameters and the role of the labeling spatial regularization are discussed at length in [4].
Fig. 1: _The Salinas A data set is \\(86\\times 83\\) and contains 6 classes, all of which are well-localized spatially. The dataset was captured over Salinas Valley, CA, by the AVIRIS sensor. The spatial resolution is 3.7 m per pixel. The image contains 224 spectral bands._ Top left: projection of the HSI onto its first principal component. Top right: _ground truth (GT)._ Bottom left: _100 nearest neighbors (yellow) of a pixel (red) without spatial regularization._ Bottom right: _100 nearest neighbors (yellow) of a pixel (red) with spatial regularization._
## IV Experimental Results
We evaluate the SRDL algorithm on \\(3\\) real HSI datasets. The Indian Pines, Salinas A, and Kennedy Space Center datasets considered are standard, have ground truth, and are publicly available1. In order to quantitatively evaluate the unsupervised results in the presence of ground truth (GT), we consider three metrics. _Overall accuracy (OA)_ is the total number of correctly labeled pixels divided by the total number of pixels, which values large classes more than small classes. _Average accuracy (AA)_ is the average, over classes, of the OA of each class, which values small classes and large classes equally. _Cohen's \\(\\kappa\\)-statistic (\\(\\kappa\\))_ measures agreement across two labelings adjusted for random chance [14].
Footnote 1: [http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes](http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes)
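For reference, all three metrics can be computed from a single confusion matrix, as in the sketch below. Matching cluster labels to ground-truth classes with the Hungarian algorithm is our convention, since unsupervised labels carry no intrinsic class identity; pixels without ground truth are assumed to have been removed.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_metrics(y_true, y_pred, K):
    """OA, AA, and Cohen's kappa for K classes, after optimally matching
    predicted cluster labels to ground-truth class labels."""
    C = np.zeros((K, K))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    _, col = linear_sum_assignment(-C)     # best cluster-to-class matching
    C = C[:, col]
    n = C.sum()
    oa = np.trace(C) / n
    aa = np.mean(np.diag(C) / np.maximum(C.sum(axis=1), 1))
    pe = np.sum(C.sum(axis=0) * C.sum(axis=1)) / n ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```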
We note that the Indian Pines and Kennedy Space Center datasets are restricted to subsets in the spatial domain, due to well-documented challenges of unsupervised methods for data containing a large number of classes [15]. These datasets are restricted to reduce the number of classes and achieve meaningful clusters. The Salinas A dataset is considered in its entirety. We remark that results on small subsets can be patched together [4]; such results are not shown here for reasons of space. While the proposed method automatically estimates the number of clusters based on the decay of \\(\\mathcal{D}_{t}\\), the number of class labels in the ground truth images were used as parameter \\(K\\) for all clustering algorithms to make a fair comparison with methods that cannot reliably estimate the number of clusters.
Since the proposed and comparison methods are unsupervised, experiments are performed on the entire dataset, including points without ground truth labels. The labels for pixels without ground truth are not accounted for in the quantitative evaluation of the algorithms tested.
### _Comparison Methods_
We consider 13 benchmark and state-of-the-art methods of HSI clustering for comparison. The benchmark methods are: _\\(K\\)-means_[16] run directly on \\(X\\); _principal component analysis (PCA)_ followed by \\(K\\)-means; _independent component analysis (ICA)_ followed by \\(K\\)-means[17, 18, 19]2; _Gaussian random projections_ followed by \\(K\\)-means [20]; _DBSCAN_[21]; _spectral clustering_ (SC) [22]; and _Gaussian mixture models_ (GMM) [23], with parameters determined by expectation maximization.
Footnote 2: [https://www.cs.helsinki.fi/u/hyvarin/papers/fastica.shtml](https://www.cs.helsinki.fi/u/hyvarin/papers/fastica.shtml)
Footnote 3: [http://vision.jhu.edu/code/](http://vision.jhu.edu/code/)
The recent, state-of-the-art clustering methods considered are: _sparse manifold clustering and embedding (SMCE)_[24, 25]3, which fits the data to low-dimensional, sparse structures, and then applies spectral clustering; _hierarchical clustering with non-negative matrix factorization (HNMF)_[5]4, which has shown excellent performance for HSI clustering when the clusters are generated from a single endmember; a graph-based method based on the Mumford-Shah segmentation [26, 6], related to spectral clustering, and called _fast Mumford-Shah (FMS)_ in this article5; the _fast search and find of density peaks clustering_ (FSFDPC) algorithm [27], which has been shown effective in clustering a variety of data sets; and two variants of the recently proposed _diffusion learning_ algorithm, in which the labeling process considers only spectral information (DL) or spectral and spatial information (DLSS) [4].
Footnote 4: [https://sites.google.com/site/nicolasgillis/code](https://sites.google.com/site/nicolasgillis/code)
Footnote 5: [http://www.ipol.im/pub/art/2017/204?utm_source=doi](http://www.ipol.im/pub/art/2017/204?utm_source=doi)
Among the comparison methods, the proposed method bears closest resemblance to the DL and DLSS methods. SRDL and these two methods differ primarily in how the underlying geometry for clustering is learned. In DL and DLSS, the diffusion geometry is computed though \\(P\\) by considering the HSI only as a spectral point cloud. SRDL regularizes the construction of \\(P\\) by incorporating spatial information into the nearest neighbors construction. The proposed SRDL method also bears similarity to SC, SMCE, and FMS since all these methods use data-driven graphs. The FSFDPC algorithm uses a mode detection scheme similar to SRDL, but without diffusion geometry or spatial information.
### _Indian Pines Data_
The Indian Pines dataset used for experiments is a subset of the full Indian Pines dataset, consisting of three classes that are difficult to distinguish visually; see Figure 2. Clustering results for Indian Pines are in Figure 3 and Table I. SRDL was run with \\(r=8\\). The spatial regularization in the construction of \\(P\\) is beneficial in terms of labeling: the proposed method improves over DLSS. As expected, SRDL leads to a spatially smoother labeling. However, a mistake is still made in the labeling of the proposed method, indicating that this is a challenging dataset to cluster without supervision.
### _Salinas A Data_
The Salinas A dataset (see Figure 1) consists of 6 classes in diagonal rows. Certain pixels in the HSI have the same values; in order to distinguish these pixels, small Gaussian noise (variance \\(<10^{-3}\\)) was added as a preprocessing step. SRDL was run with \\(r=20\\). Visual results for Salinas A appear in Figure 4 and quantitative results appear in Table I. The proposed method yields the best results, and moreover the labels recovered by the proposed method are quite spatially regular.
### _Kennedy Space Center Data_
The Kennedy Space Center dataset used for experiments consists of a subset of the original dataset, and contains four classes. Figure 5 shows the data along with ground truth consisting of examples of the four vegetation types which dominate the scene. SRDL was run with \\(r=20\\). Results appear in Table I; for reasons of space, visual results are not shown.
### _Parameter Analysis_
The crucial parameter in the proposed method is the spatial radius \\(r\\), which determines how near the nearest neighbors in the underlying diffusion process must be. The impact of this parameter in terms of overall accuracy, average accuracy, and \\(\\kappa\\) appears in Figure 6. The plots exhibit the tradeoff typical of regularization in machine learning: insufficient or excessive regularization are both detrimental. It is critical to find a good range of regularization parameters, and the flat regions near the maxima in Figure 6 suggest the proposed method is relatively robust to the choice of \\(r\\). We remark that if the spatial smoothness of the image is known or can be estimated a priori, then \\(r\\) can be estimated.
## V Conclusions and Future Directions
The incorporation of spatial regularity into the construction of the underlying diffusion geometry improves empirical performance of diffusion learning for HSI clustering in all three datasets considered. For images whose labels are sufficiently smooth, our results suggest there will be a regime for choices of spatial window in which incorporating spatial proximity improves the underlying mode detection and consequent labeling of HSI.
In terms of _computational complexity_, it suffices to note that the bottleneck is in the construction of \\(P\\). Since \\(k\\) nearest
Fig. 4: _Clustering results for the Salinas A dataset. The proposed method is the optimal performer, with the DLSS, DL, and spectral clustering methods also performing strongly. The spatial regularization incorporated into the diffusion distances used for the proposed method keeps the diagonal stripes relatively far apart from each other, leading to accurate mode estimates and good subsequent labeling._
Fig. 5: The Kennedy Space Center data is a \\(250\\times 100\\) subset of the full Kennedy Space Center dataset. The scene was captured with the NASA AVIRIS instrument over the Kennedy Space Center (KSC), Florida, USA and has 18m/pixel spatial resolution. It consists of 4 classes, some of which have poor spatial localization. The dataset consists of 176 bands after removing two signal-to-noise-ratio and water-absorption bands. Left: projection of the data onto its first principal component. Right: ground truth (GT).
Fig. 3: Clustering results for Indian Pines dataset. The SRDL method leads to quite smooth spatial labels, whose accuracy is optimal among all methods. However, in this case, the ground truth indicates that the triangular region on the lower right is labeled incorrectly by the proposed method. The smoothing imposed by SRDL—though beneficial overall—washes that region out. This weakness could be resolved in a variety of ways, most easily perhaps by oversegmenting the HSI, then querying the oversegmented class modes to determine which classes ought to be merged a posteriori.
Table I: Quantitative results (OA, AA, and \\(\\kappa\\)) for each clustering method on the Indian Pines, Salinas A, and Kennedy Space Center datasets.
neighbors are sought and neighbors are constrained to lie within a spatial radius \\(r\\), as long as \\(r,k=O(1)\\) with respect to \\(n\\), the nearest neighbor searches for all points can be done in \\(O(n)\\). This gives an overall complexity for the algorithm that is essentially linear in \\(n\\). When the spatial radius is large enough that the full scene is considered in the nearest neighbor search, indexing structures (e.g. cover trees) allow for fast nearest neighbor searches, giving a quasilinear algorithm.
The proposed method is likely to be of substantial benefit for scenes that are in some sense smooth with respect to the underlying class labels. For images with many classes that are rapidly varying in space--for example, urban HSI--alternative approaches for incorporating spatial features may be necessary. One approach would be to consider as underlying data points not individual pixels, but higher order semantic features, for example image patches [28]. This would allow for information about fine features such as edges and textures to be incorporated into the diffusion process, allowing for fine-scale clusters to be learned. On the other hand, using patches instead of distinct pixels as the underlying dataset will increase the dimensionality of the data from \\(D\\) to \\(D\\cdot(\\text{Patch Size})\\), which may lead to slower computational time. However, it is expected that the patch space is low-dimensional [29], making the statistical learning problem tractable.
## Acknowledgments
This research was partially supported by NSF-ATD-1737984, AFOSR FA9550-17-1-0280, and NSF-IIS-1546392.
## References
* [1] R.R. Coifman, S. Lafon, A.B. Lee, M. Maggioni, B. Nadler, F. Warner, and S.W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. _Proceedings of the National Academy of Sciences of the United States of America_, 102(21):7426-7431, 2005.
* [2] R.R. Coifman and S. Lafon. Diffusion maps. _Applied and Computational Harmonic Analysis_, 21(1):5-30, 2006.
* [3] J.M. Murphy and M. Maggioni. Diffusion geometric methods for fusion of remotely sensed data. In _Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV_, volume 10644, page 106440I. International Society for Optics and Photonics, 2018.
* [4] J. Murphy and M. Maggioni. Nonlinear unsupervised clustering and active learning of hyperspectral images. _IEEE Transactions on Geoscience and Remote Sensing_, 2018. To Appear.
* [5] N. Gillis, D. Kuang, and H. Park. Hierarchical clustering of hyperspectral images using rank-two nonnegative matrix factorization. _IEEE Transactions on Geoscience and Remote Sensing_, 53(4):2066-2078, 2015.
* [6] Z. Meng, E. Merkurjev, A. Koniges, and A. Bertozzi. Hyperspectral video analysis using graph clustering methods. _Image Processing On Line_, 7:218-245, 2017.
* [7] A. Erturk and S. Erturk. Unsupervised segmentation of hyperspectral images using modified phase correlation. _IEEE Geoscience and Remote Sensing Letters_, 3(4):527-531, 2006.
* [8] C. Tao, H. Pan, Y. Li, and Z. Zou. Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. _IEEE Geoscience and Remote Sensing Letters_, 12(12):2438-2442, 2015.
* [9] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In _Advances in neural information processing systems_, pages 1601-1608, 2005.
* [10] M. Maggioni and J.M. Murphy. Clustering by unsupervised geometric learning of modes. _ArXiv:1810.06702_, 2018.
* [11] M. Fauvel, J. Chanussot, and J.A. Benediktsson. A spatial-spectral kernel-based approach for the classification of remote-sensing images. _Pattern Recognition_, 45(1):381-392, 2012.
* [12] N.D. Cahill, W. Czaja, and D.W. Messinger. Schroedinger eigenmaps with nondiagonal potentials for spatial-spectral clustering of hyperspectral imagery. In _Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX_, volume 9088, page 908804. International Society for Optics and Photonics, 2014.
* [13] H. Zhang, H. Zhai, L. Zhang, and P. Li. Spectral-spatial sparse subspace clustering for hyperspectral remote sensing images. _IEEE Transactions on Geoscience and Remote Sensing_, 54(6):3672-3684, 2016.
* [14] M. Banerjee, M. Capozzolli, L. McSweeney, and D. Sinha. Beyond kappa: A review of interrater agreement measures. _Canadian journal of statistics_, 27(1):3-23, 1999.
* [15] W. Zhu, V. Chayes, A. Tiard, S. Sanchez, D. Dahlberg, A. Bertozzi, S. Osher, D. Zosso, and D. Kuang. Unsupervised classification in hyperspectral imagery with nonlocal total variation and primal-dual hybrid gradient algorithm. _IEEE Transactions on Geoscience and Remote Sensing_, 55(5):2786-2798, 2017.
* [16] J. Friedman, T. Hastie, and R. Tibshirani. _The Elements of Statistical Learning_, volume 1. Springer series in statistics Springer, Berlin, 2001.
* [17] P. Comon. Independent component analysis, a new concept? _Signal Processing_, 36(3):287-314, 1994.
* [18] A. Hyvarinen. Fast and robust fixed-point algorithms for independent component analysis. _IEEE Transactions on Neural Networks_, 10(3):626-634, 1999.
* [19] A. Hyvarinen and E. Oja. Independent component analysis: algorithms and applications. _Neural Networks_, 13(4):411-430, 2000.
* [20] S. Dasgupta. Experiments with random projection. In _Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence_, pages 143-151. Morgan Kaufmann Publishers Inc., 2000.
* [21] M. Ester, H.P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In _Kdd_, volume 96, pages 226-231, 1996.
* [22] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In _NIPS_, volume 14, pages 849-856, 2001.
* [23] N. Acito, G. Corsini, and M. Diani. An unsupervised algorithm for hyperspectral image segmentation based on the gaussian mixture model. In _IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, volume 6, pages 3745-3747, 2003.
* [24] E. Elhamifar and R. Vidal. Sparse manifold clustering and embedding. In _Advances in Neural Information Processing Systems_, pages 55-63, 2011.
* [25] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 35(11):2765-2781, 2013.
* [26] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. _Communications on pure and applied mathematics_, 42(5):577-685, 1989.
* [27] A. Rodriguez and A. Laio. Clustering by fast search and find of density peaks. _Science_, 344(6191):1492-1496, 2014.
* [28] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In _Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on_, volume 2, pages 60-65. IEEE, 2005.
* [29] A.D. Szlam, M. Maggioni, and R.R. Coifman. Regularization on graphs with function-adapted diffusion processes. _Journal of Machine Learning Research_, 9(Aug):1711-1739, 2008.
Fig. 6: Impact of spatial radius parameter. Number of nearest neighbors is fixed at 100. As the spatial window increases, empirical results improve, before decreasing. This illustrates that optimal spatial regularization in the construction of diffusion maps is to consider a not too small (which leads to underregularization) and not too large (which leads to overregularization) spatial radius. We note that once a good radius has been found, results are relatively robust. | An unsupervised learning algorithm to cluster hyperspectral image (HSI) data is proposed that exploits spatially-regularized random walks. Markov diffusions are defined on the space of HSI spectra with transitions constrained to near spatial neighbors. The explicit incorporation of spatial regularity into the diffusion construction leads to smoother random processes that are more adapted for unsupervised machine learning than those based on spectra alone. The regularized diffusion process is subsequently used to embed the high-dimensional HSI into a lower dimensional space through diffusion distances. Cluster modes are computed using density estimation and diffusion distances, and all other points are labeled according to these modes. The proposed method has low computational complexity and performs competitively against state-of-the-art HSI clustering algorithms on real data. In particular, the proposed spatial regularization confers an empirical advantage over non-regularized methods. | Provide a brief summary of the text. | 172 |
arxiv-format/1902_07980v1.md | # Non-Markovian memory in IBMQX4
Joshua Morris
[email protected]
Felix A. Pollock
[email protected]
Kavan Modi
[email protected] School of Physics & Astronomy, Monash University, Clayton, Victoria 3800, Australia
November 6, 2021
## I Introduction
Quantum computers promise significant speedups over their classical counterparts by manipulating and exploiting effects only seen in the quantum realm [1, 2, 3]. The price of using phenomena that only become significant at the smallest scales is incredible sensitivity to noise and inherent fragility [4, 5]. Quantifying this fragility, in general, necessitates sophisticated methods for understanding and predicting the behaviour of the fundamental constituents of computing, i.e., quantum gates [6, 7, 8].
Imperfections in quantum gates arise, in part, from interactions with an uncontrollable environment. Any realistic quantum computer is an open system, but the nature of the interactions with the environment are not always clear. In the broadest sense, the resulting dynamics can be partitioned into one of two categories, Markovian and non-Markovian [9, 10], depending on whether memory effects play a role. In the former case, information leaks out of the system over time, eventually reaching an equilibrium point, beyond which further computation becomes impossible or meaningless. In the non-Markovian regime, the information transfer between the system and its environment becomes bidirectional. At some later time information lost to the environment may return to the system, resulting in behaviour that depends on the system's previous state. While, in some cases, these temporal correlations could be beneficial [11], without knowledge of their specific structure, they manifest as unwanted noise. This noise furthermore violates many common assumptions made in characterizing and controlling it [12].
In a circuit-based quantum computer, where sequential gates are applied to realize the computational steps of a specific algorithm, if errors were to non-trivially depend on previous choices of gates, meaningful computation would become impossible. While quantum error correction could certainly be employed to minimize these effects, most error correction techniques almost always assume that the errors are Markovian and thus correct sub-optimally for correlated errors; see [13, 14, 15] for exceptions. Moreover, techniques for gauging the performance of a quantum computer, such as randomized benchmarking [5] and gate set tomography [7], fail to be reliable in the presence of non-Markovian errors [8, 16].
Recently, several quantum computers have been made available to researchers to run experiments remotely, with IBM's Q Experience a prominent example. Researchers have used IBM's quantum computers to prepare highly entangled states [17], discriminate between unitary operations [18], implement quantum stochastic differential equations [19], test fault-tolerant protocols [20, 21] and demonstrate dynamical decoupling [22]. We add to this growing list here.
In this article, we show evidence for the existence of temporal correlations between sequential gates implemented on the IBMQX4, a five-qubit superconducting transmon quantum computer (dubbed Tenerife) [23]. To demonstrate non-Markovian effects in the IBMQX4, we develop techniques for quantifying the conditional dependence of noise in quantum gates on the history of past operations and identifying the approximate time scales of the corresponding correlations, all without specific information about the underlying system-environment interactions. Specifically, we look at the behaviour of a quantum gate conditioned on the gate that precedes it, finding noise that depends on past choices and hence a strong indication of non-Markovianity. Finally, we estimate the size of the memory by measuring the correlations in a sequence of controlled-not gates. Our findings indicate that the IBMQX4 is non-trivially coupled to its environment, and suffers from non-Markovian effects that cannot be ignored. On the other hand, our results could, in turn, be used to inform the design of better pulse sequences conditioned on previous pulses to either sidestep or mitigate the issue [24], yielding cleaner and more faithful computation.
We begin with an overview of our theoretical ideas, followed by presentation of data found by running remote experiments on the IBMQX4.
## II Testing for non-Markovianity
Consider the three scenarios outlined in Fig. 1 for the sequential application of two quantum gates. The first panel represents the desired evolution; that of a closed quantum system undergoing unitary transformations. In reality, the implementation of a quantum gate \\(U\\) is unlikely to be perfect; instead, a noisy operation, described by a completely-positive, trace-preserving (CPTP) map or quantum channel \\(\\Phi_{U}\\)[25], will be applied to the system. We may think of \\(\\Phi_{U}\\) as stemming from a unitary transformation \\(\\tilde{U}\\) applied to the joint system-environment. When applying an operator \\(U\\) followed by another operator \\(V\\), it is often assumed that this leads to application of \\(\\Phi_{U}\\) followed by \\(\\Phi_{V}\\). That is, the two noisy maps are independent, as depicted in Fig. 1b.
However, this assumption of independence between applications of quantum gates requires the information carried by the environment to dissipate from one application to the next. This Markovian behaviour is in contradistinction to the circuit in Fig. 1c, where the environment evolves _without_ complete information loss, leading to a noisy process
\\[\\Phi_{VU}\\neq\\Phi_{V}\\circ\\Phi_{U}. \\tag{1}\\]
The resulting non-Markovian dynamics leads to correlations between gate errors, which should be accounted for in a rigorous analysis of fault tolerance.
To quantify these correlations, we exploit the fact that in the Markovian case, with no temporal correlations between gates, \\(\\Phi_{VU}\\) would be equivalent to the composition of individual maps \\(\\Phi_{V}\\circ\\Phi_{U}\\). Let us define the conditional map
\\[\\Phi_{V|U}:=\\Phi_{VU}\\circ\\Phi_{U}^{-1},\\quad\\text{where}\\quad\\Phi_{U}\\circ\\Phi_{U}^{-1}=\\mathcal{I}, \\tag{2}\\]
with \\(\\mathcal{I}\\) being the identity map (\\(\\mathcal{I}[\\rho]=\\rho\\;\\;\\forall\\rho\\)). If \\(\\Phi_{V|U}\\) is completely positive, then the dynamics is what is called CP-divisible [26], and this is will be our first test for non-Markovian memory. Even when the conditional map is CP, this alone does not guarantee a Markov process, since the conditional map can still depend on its conditioning argument, and a broader analysis technique is required.
For our second check we ask how different \\(\\Phi_{V|U}\\) is from \\(\\Phi_{V}\\). If the two are not the same, this implies non-Markovianity. However, for a skeptic this failure could imply only a very mild type of non-Markovianity. That is, suppose the performance of the second gate is always worse than that of the first gate, i.e.,
\\[\\Phi_{VU}=\\Phi_{V}\\circ\\Phi_{2}\\circ\\Phi_{U} \\tag{3}\\]
for some fixed decohering dynamics \\(\\Phi_{2}\\) associated with application of any second gate. This is a simple non-Markovian process, where the memory is merely a clock keeping track of the number of gates that have been applied. To overcome this we further implement a more sophisticated check; a less trivial form of non-Markovianity would manifest as explicit dependence on the first gate, i.e.,
\\[\\Phi_{V|U_{1}}\\neq\\Phi_{V|U_{2}}\\quad\\text{for all}\\quad U_{1}\\neq U_{2}. \\tag{4}\\]
We now construct all three of these tests for all gates in \\(\\mathcal{G}\\), a universal set of gates.
## III Conditional performance
To clearly demonstrate the presence of temporal correlations, we implement single and sequential pairs of gates
\\[U,V\\in\\mathcal{G}=\\{H,S,T,X,Y,Z,C_{X}\\} \\tag{5}\\]
on the IBMQX4. Here \\(H\\) is the Hadamard gate, \\(S\\) is the phase gate, \\(T\\) is the \\(\\pi/8\\) gate, \\(X,Y,Z\\) are the Pauli gates, and \\(C_{X}\\) is the CNOT gate.
icefrac{{\\pi}}{{8}}\\) gate, \\(X,Y,Z\\) are the Pauli gates, and \\(C_{X}\\) is the Cnot gate. We reconstruct, via quantum process tomography (QPT), the maps \\(\\{\\Phi_{U},\\Phi_{VU}\\}\\) corresponding to each one- and two-gate sequence, i.e., \\(\\{\\Phi_{U},\\Phi_{VU}\\}\\). The details of QPT are given in the Appendix A, along with an analysis of the associated statistical and systematic errors in Appendix B.
With the relevant maps reconstructed, we perform the three tests described above for the presence of temporal correlations. The first is a simple check of complete positivity of each conditional map \\(\\Phi_{V|U}\\), computed from the corresponding \\(\\Phi_{VU}\\) and \\(\\Phi_{U}\\). All of the gate combinations tested fail to
Figure 1: (a) Two unitary operators act on closed system \\(S\\), evolving it in time. (b) If the system evolves alongside an inaccessible bath or environment \\(E\\), but does not affect it sufficiently to impact on future interactions, then it is Markovian (or can be approximated as such) and the dynamics on \\(S\\) alone will be described by a CPTP map. (c) Otherwise, the system is non-Markovian, and the environment can be thought of as an uncontrollable quantum memory attached to the system. While the collective dynamics on \\(SE\\) will be a sequence of unitaries, the local dynamics of \\(S\\) cannot be straightforwardly decomposed into two independent stages. Note that each of the boxes in these panels should be thought of as a quantum map (sometimes labelled by a unitary operator) acting on a density operator.
Figure 2: As a measure of the non-positivity of the reconstructed inverse maps \\(\\Phi_{V|U}\\), we present the difference from unity of the summed absolute values of the maps’ Choi states (operator representations of the maps [25]): \\(\\mathrm{tr}|\\Phi_{V|U}|-1\\) for each of the gate combinations \\(U,V\\in\\mathcal{G}\\). For completely positive quantum maps this should be vanishing. The deviation of this measure from zero then gives the degree to which the reconstructed map is not CP, indicating non-Markovian dynamics.
satisfy this condition, as seen in Fig. 2; however, as mentioned above, even if they were completely positive, this would not rule out non-Markovianity. We therefore move on to the more sophisticated second and third tests, which involve direct comparison of each \\(\\Phi_{V}\\) with various \\(\\Phi_{V|U}\\), and of pairs \\(\\Phi_{V|U_{1}}\\) and \\(\\Phi_{V|U_{2}}\\), respectively.
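Concretely, the quantity \\(\\mathrm{tr}|\\Phi_{V|U}|-1\\) of Fig. 2 can be computed from the trace-normalized Choi state of the reconstructed map; it vanishes exactly when that state has no negative eigenvalues. A sketch, again assuming the column-stacking transfer-matrix convention:

```
import numpy as np

def choi_state(Phi, d):
    """Trace-normalized Choi state J = (1/d) sum_{ij} |i><j| (x) E[|i><j|]
    of a map given as a d^2 x d^2 transfer matrix (column-stacking)."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            out = (Phi @ Eij.reshape(-1, order="F")).reshape(d, d, order="F")
            J += np.kron(Eij, out)
    return J / d

def cp_violation(Phi, d):
    """tr|J| - 1: zero for a CPTP map, positive when negative Choi
    eigenvalues signal a non-CP (hence non-Markovian) conditional map."""
    J = choi_state(Phi, d)
    evals = np.linalg.eigvalsh(0.5 * (J + J.conj().T))  # enforce Hermiticity
    return float(np.sum(np.abs(evals)) - 1.0)
```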
There are many ways to compare two quantum maps, with the natural metric being the diamond distance [27, 28]
\\[\\|\\Phi_{A}-\\Phi_{B}\\|_{\\diamond}:=\\max_{\\xi}D(\\Phi_{A}\\otimes\\mathcal{I}_{d}[ \\xi],\\Phi_{B}\\otimes\\mathcal{I}_{d}[\\xi]), \\tag{6}\\]
where
\\[D(\\rho,\\sigma)=\\tfrac{1}{2}\\mathrm{tr}|\\rho-\\sigma| \\tag{7}\\]
is the trace distance, \\(\\mathcal{I}_{d}\\) is the identity map on an ancillary \\(d\\)-dimensional space, and \\(\\xi\\) is a density operator on the joint system-ancilla space. This distance can be interpreted as the maximum ability to statistically distinguish between two maps in a single shot with an entangled input.
\\[E(\\Phi_{A},\\Phi_{B}):=\\frac{1}{M}\\sum_{l=1}^{M}D(\\Phi_{A}[\\rho_{l}],\\Phi_{B}[ \\rho_{l}]), \\tag{8}\\]
where \\(\\{\\rho_{l}\\}_{l=1}^{M}\\) forms a set of \\(M\\) random input states drawn from the Haar distribution [29].
In Fig. 3a, we show the distribution of trace distances, prior to averaging, for the gate sequence \\(U=X,V=Z\\). The figure shows this distribution for both experimentally determined maps, as well as maps constructed using the IBMQX4 simulator. The latter only accounts for statistical errors, while the former is equipped to account for memory effects. The figure shows that IBMQX4 suffers from systematic errors that go beyond statistical fluctuations. The average of this distribution indicates size of error, on the average, due to the non-Markovianity.
In Figs. 3b and c, we show the distances, according to Eq. (3) and Eq. (6) respectively, between conditional and unconditional maps corresponding to gates in the set \\(\\mathcal{G}\\) applied on IBMQX4. To maintain consistency, each single qubit gate always acts on qubit 1 (with numbering from 0-4) and each controlled-not (\\(C_{X}\\)) acts on qubit 0 with the control on qubit 1. This analysis clearly shows the conditional realization of the gates to be different from that of unconditional ones. This might be expected, as the coherence of the qubit will diminish more after two operations in comparison to just one. In this case, the magnitude of each column of the matrix entries in Figs. 3b and c would be the same. But they are not: for instance, performing a \\(Z\\) gate after an \\(X\\) gate is very different than after another \\(Z\\). This significant variation as the initial operation \\(U\\) is varied, for a particular \\(V\\), is precisely the dependence on the past discussed above. Beyond simply witnessing the existence of non-Markovian errors in the IBMQX4, Figs. 3b and c tell us which _specific_ gates lead to the strongest (detectable) interaction with the environment.
Though this result demonstrates some kind of correlation in sequential processes, it allows for the comparison of the effect of past gate choices only indirectly; two different choices of \\(U\\) might lead to \\(\\Phi_{V|U}\\)s that are similarly distinguishable from \\(\\Phi_{V}\\), but which are also significantly distinct from each other. For a more direct comparison, we compute the distinguishability between maps corresponding to a fixed gate conditioned on different preceding gates: \\(E(\\Phi_{V|U_{1}},\\Phi_{V|U_{2}})\\). This is shown in Fig. 4 for \\(V,\\,U_{1},\\,U_{2}\\in\\mathcal{G}\\). It can be seen that the difference between conditional maps is far from uniform. In other words, a gate's deviation from its ideal implementation is significantly perturbed by past actions, and not with others, when implemented on the IBMQX4. For instance, \\(Y\\) is strongly affected by \\(S\\), but less so by other gates. Similarly, \\(Z\\) seems to be most affected only by \\(X\\), while \\(X\\) is sensitive to any gate that precedes it. That is, the behaviour of \\(X\\) is drastically different conditionally on what came before it. These structures strengthen the confidence in our analysis.
As important as their presence is, the lifetime of these correlations may be relatively short. Even strong correlations
Figure 3: (a) The two maps \\(\\Phi_{Z|X}\\) and \\(\\Phi_{Z}\\) are found via process tomography on the IBMQX4. Both maps are applied to one hundred thousand randomly generated input states, and their output states are compared against one another (Green) using Eq. (7). The same quantities computed using a simulator provided by IBM are also presented (Blue). (b) The average trace distance between outputs of \\(\\Phi_{V|U}\\) and \\(\\Phi_{V}\\) for each combination of \\(U,V\\in\\mathcal{G}\\), as given by Eq. (8). (c) The diamond distance, given in Eq. (6), between the same pairs of maps, but scaled by \\(\\frac{1}{2}\\), with \\(d=4\\) when \\(U\\) or \\(V\\) is \\(C_{X}\\) and \\(d=2\\) otherwise. This may be thought of (up to this scaling) as calculating the supremum of the distribution in (a), whereas (b) gives its mean. The uncertainty in these values is approximately \\(4.5\\times 10^{-3}\\). Note that for a constant operator \\(V\\), both metrics fluctuate as \\(U\\) is varied, indicating a dependency on the previous gate.
between sequential actions can be accounted for if their temporal range is small; one need only wait them out before application of the next gate. It is with this in mind that we proceed to an investigation of the lifetime of this newly detected quantum memory.
## IV Memory length
Having shown evidence for temporal correlations between pairs of sequential operations on the IBMQX4 platform, we now ask how long-lived they are. In addition to waiting between gate applications, if the length of the memory is not too long, then a hidden Markov model could be reconstructed and conditional pulse sequences applied to correct for errors [30, 31]. If, however, these correlations extend far into the system's future, then correcting or modelling them may be challenging. Additionally, having a better grasp of the structure of memory would also enable more informed decoupling techniques.
To estimate the length of the memory in IBMQX4, we apply a sequence of \(n\) \(C_{X}\) gates and construct the corresponding CPTP maps \(\{\Phi_{C_{X}}^{(n)}\}\) using process tomography for each \(n\in[1,15]\). In the Markovian case, where the errors in each implementation are independent, we would expect \(E(\Phi_{C_{X}}^{(n)},\Phi_{C_{X}}^{(m)}\circ\Phi_{C_{X}}^{(n-m)})\) to vanish for integers \(m<n\). Conversely, as before, a non-vanishing distinguishability, by either measure we consider, is a signature of temporal correlations [26]. Furthermore, the behaviour as a function of \(m\), for fixed \(n\), indicates whether and how the memory decays. Rather than the average distinguishability of two maps that we consider above, the diamond distance considers the absolute worst case scenario by maximising the trace distance between map outputs over all inputs, including those that include entangled ancillas. This maximisation problem can be posed as a semi-definite program [32], described and solved using an appropriate software package [33, 34].
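This divisibility test is straightforward to phrase numerically once the maps are represented as superoperators, since concatenation becomes matrix multiplication. In the illustrative sketch below, a depolarizing channel of arbitrary strength \(p\) stands in for the reconstructed \(\Phi_{C_{X}}^{(1)}\); for such a memoryless, time-independent model the causal break leaves the map unchanged, so any non-vanishing distinguishability in Fig. 5 signals memory.

```python
import numpy as np

def depolarizing_superop(d, p):
    """Superoperator of rho -> (1 - p) rho + p I/d, acting on row-major vec(rho)."""
    identity = np.eye(d * d)
    vec_id = np.eye(d).reshape(-1)
    replace = np.outer(vec_id, vec_id) / d    # gives vec(I/d) tr(rho) for trace-one rho
    return (1 - p) * identity + p * replace

S1 = depolarizing_superop(4, 0.02)            # d = 4 for the two-qubit C_X maps
n, m = 6, 3
S_n = np.linalg.matrix_power(S1, n)
S_break = np.linalg.matrix_power(S1, m) @ np.linalg.matrix_power(S1, n - m)

# For this Markovian model the causal break changes nothing, so the
# distinguishability between S_n and S_break vanishes
print(np.linalg.norm(S_n - S_break))          # ~1e-16
```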
Unsurprisingly, given our previous results, we see significant fluctuations in the distinguishability as \(m,n\) are varied, shown in Fig. 5. Though we are now considering the absolute error of \(\Phi^{(n)}\) and the causally broken process \(\Phi^{(m)}\circ\Phi^{(n-m)}\), as opposed to the relative error of two processes as in Fig. 3b, we are still comparing the two maps as they actually are in the IBMQX4, rather than the ideal case. This is to say that Fig. 5 is not intended as a commentary on the fidelity of the \(C_{X}\) gate in the IBMQX4, but rather its operational dependence on the environmental state after repeated applications of the controlled-not gate.
From Fig. 5, it is clear that, beyond the lack of divisibility, there is some structure present in the distinguishability as the temporal length of the two maps is increased. Considering the upper triangular matrix of the figure, where we compute the diamond distance between the two channels, we see a high
Figure 4: The distinguishability between \\(\\Phi_{V|U_{1}}\\) and \\(\\Phi_{V|U_{2}}\\) for all \\(V,U_{1},U_{2}\\in\\mathcal{G}\\) using the diamond distance (upper triangles) and average trace distance (lower triangles). The former is scaled by a factor of \\(\\frac{1}{4}\\) (\\(d=4\\) for \\(C_{X}\\), \\(d=2\\) otherwise), and both measures are scaled by a factor of two for \\(V=C_{X}\\).
Figure 5: The distinguishability of two maps; one constructed from the application of \(n\) sequential \(C_{X}\) gates, \(\Phi_{C_{X}}^{(n)}\), and the other a concatenation of maps corresponding to \(m\) and \(n-m\) sequential \(C_{X}\) gates, \(\Phi_{C_{X}}^{(m)}\circ\Phi_{C_{X}}^{(n-m)}\). In the upper figure the distinguishability is calculated using the diamond distance in Eq. (6), while in the lower one it is computed as in Eq. (8).
channel distinguishability, corresponding to strong short term correlations, before a decrease with increasing duration. This decrease is presumably due to the decoherence of both maps as they converge on a noisy equilibrating channel. At \\(m=8\\), however, the divergence once again increases, indicating that some kind of long range temporal correlation has been lost due to the imposed causal break. Given that the \\(C_{X}\\), as a coupling operation, acts on a larger physical space than local operations, it is not surprising that we see evidence for temporal correlations. Curiously, after \\(m=10\\), the distinguishability once again drops quite sharply, where we might instead expect a gradual decrease. While this behaviour for \\(m=8,9,10\\) could indicate a different calibration of the device during their reconstruction, all tomography experiments for related maps in Fig. 5 were performed at similar times on the IBMQX4.
Our results suggest that the non-Markovian memory lasts for several gates. Though it is not clear what physical mechanism corresponds to these long-term correlations, we find it encouraging that we can identify them with no knowledge of the corresponding gate implementations on the IBMQX4. Combining our results with a physical model will open up the potential for a hidden Markov model of IBMQX4. We now move to discuss broader implications of our results.
## V Full quantification of non-Markovianity
Though we have presented measures for correlations between errors in gate applications at different points in time, this falls short of a full characterization of the corresponding non-Markovian process. Ideally, we would reconstruct the _process tensor_ [10, 35], which is a generalization of the CPTP map to multi-step processes. The process tensor is a complete descriptor of a stochastic quantum process [36, 37], including temporally correlated noise, which could be of purely quantum nature [38, 39]. However, reconstructing the process tensor requires sequential measurements, which are currently not possible in the IBMQX4 system. A potential way around this is to construct a restricted process tensor [40], but it too can be cumbersome. Another alternative is to map the process to the large entangled quantum state realized by the circuit depicted in Fig. 6, which physically implements a generalized Choi-Jamiolkowski isomorphism. This could then be determined through state tomography. Though more resource intensive in terms of qubits required, this procedure does not require sequential measurements.
Unfortunately, even this cannot be achieved, since implementing it requires the clean implementation of \(H\), \(C_{X}\) and swap (a sequence of three \(C_{X}\)) gates, which we have seen are themselves noisy and temporally correlated. These correlations and the imperfect nature of the gates mean that we construct not the Choi state of the process tensor but rather some proxy for it. Determining this proxy experimentally however can still yield meaningful results. We have run the circuit in Fig. 6, for the choices of \(U=S\) and \(V=T\), on the older IBMQX2 and performed quantum state tomography on the relevant four qubits. We then compared this with a Markovian simulator of the same circuit. This is realized by multiplying the reconstructed map for each element of the circuit numerically. The relative entropy distance between the two states represents an estimate of the amount of non-Markovianity [9, 10], which for IBMQX2 we find to be \(0.68\) (with \(2\) being the maximum achievable [41]). Though this value cannot be taken as a measure of the correlation between the maps corresponding to gates \(U\) and \(V\) in Fig. 6, it can serve as an indicator and measure of non-Markovianity, albeit with little insight into the source. It is worth noting that IBMQX2 appears to be much noisier than IBMQX4 [42]. Nevertheless, our findings suggest that a great deal of quantum information may be trapped in the non-Markovian correlations in both devices. Being able to recover this information would make the quantum computation cleaner.
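For completeness, the comparison underlying the quoted value of 0.68 is a relative entropy between two reconstructed density operators. A minimal sketch of that computation follows; the small two-level states used here are arbitrary placeholders for the tomographically reconstructed Choi-state proxy and its Markovian simulation, and a small regularisation is added because the relative entropy diverges on rank-deficient states.

```python
import numpy as np
from scipy.linalg import logm

def relative_entropy(rho, sigma, eps=1e-12):
    """S(rho || sigma) = tr[rho (log2 rho - log2 sigma)], in bits."""
    rho = rho + eps * np.eye(len(rho))        # regularize zero eigenvalues
    sigma = sigma + eps * np.eye(len(sigma))
    val = np.trace(rho @ (logm(rho) - logm(sigma))) / np.log(2)
    return val.real

# Toy check: a slightly mixed qubit state against the maximally mixed state
rho = np.diag([0.7, 0.3])
sigma = np.eye(2) / 2
print(relative_entropy(rho, sigma))           # 1 - H(0.7) ~ 0.119 bits
```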
Our findings are supported by previous work, such as Ref. [22]. There the authors focus on constructing quantum dynamics that are resistant to noise through dynamical decoupling. They experimentally implement a dynamical decoupling protocol on both the IBMQX4 and the Rigetti Acorn (a 19 superconducting qubit computer). They demonstrate enhancement in the fidelity of operations in the IBMQX4 due to their decoupling protocol, which is purely a non-Markovian phenomenon [43, 44].
## VI Discussions
Although we cannot fully quantify non-Markovianity in IBMQX4, we have presented two techniques for identifying and measuring the properties of a non-Markovian system. These techniques form a useful tool-set for identifying difficult-to-detect correlations in specific, sequential control operations for the IBMQX4 and for general quantum systems where process tomography (or some analogue of it) is possible. Our methods require no prior knowledge of the system Hamiltonian to infer the temporal correlations, and as such we need not have an in-depth understanding of the experimental details of the IBMQX4 to comment on its physical behaviour. On the other hand, combining our methods with a physical model would allow for better estimates of the physical parameters, e.g., the coupling strengths, using machine learning tools [45, 46]. Moreover, through an iterative procedure our methods could be used to build a better physical model for IBMQX4,
Figure 6: The quantum circuit that would create the Choi state of the process tensor corresponding to sequential application of gates \\(U\\) and \\(V\\), assuming all other gates are applied cleanly. For Markovian noise in \\(U\\) and \\(V\\), this would just be the product of Choi states of their corresponding maps \\(\\Phi_{U}\\) and \\(\\Phi_{V}\\).
and consequently adjust the pulse sequence to yield cleaner gates.
The hurdle that we face in reconstructing the process tensor is also a main drawback in reconstructing the conditional dynamics. The root cause is the gate infidelity in what are essentially the preparation and measurement phases of tomography, i.e., state preparation and measurement (SPAM) errors. In performing process tomography we require the preparation of specific input states to the gate being reconstructed and specific measurements performed on the output state. A convenient assumption for most tomography [47] is that these preparations and measurements are performed flawlessly, with the only error being statistical in nature (stemming from finite sampling of the outcome distribution). If this is not the case, however, then the determination of maps corresponding to different gates, and their comparison, can become unreliable. The error rates, based on randomized benchmarking, for IBMQX4 [48] suggest that SPAM errors by themselves are small. Moreover, the SPAM errors alone are not sufficient to explain the high level of structure and the asymmetry in Figs. 3, 4, and 5. For the complete treatment of these errors see Appendix B.
Putting aside the IBMQX4, the ability to detect gate specific correlations, as we have demonstrated above, is a useful diagnostic tool when designing quantum devices. Though a single quantum gate may be well-implemented in isolation, the act of applying it may cause unwanted and traditionally difficult to detect perturbations in the next gate. Our techniques are designed to address this exact situation. In addition to this, though we have considered only temporal dependence mediated by some external environment here, this technique could easily be extended to a search for spatio-temporal correlations, e.g. find the correlations between the third gate applied to qubit one and the fifth gate applied to qubit six. Besides measuring the performance of the device, in a way that is not biased by its particular architecture, this confers a number of advantages. In particular, we note the possibility of the results shown here being used to identify where efforts to improve the implementation of gates might be best spent. At the very least, our results highlight where non-Markovian correlations play a role, allowing for improved constraints on meaningful quantum circuit compilation, something that is, and will continue to be, a problem for large scale quantum computation [49, 24].
## Appendix A Process tomography in the IBMQX4
Due to our need to determine and compare the transformations that are actually being performed in the IBMQX4, it was required that we perform process tomography on a number of one and two step gate sequences. In the IBM system, only Pauli \(Z\) measurements are possible, thus a rotation operation needs to be performed for measurement in a complete tomographic basis (the minimal Pauli basis set in this instance). Since the computer is an open system, Markovian or otherwise, we are limited to determining the reduced dynamics of the full unitary evolution of the system-environment. This is described by the quantum map \(\Phi[\rho_{S}]=\mathrm{tr}_{E}(U\rho_{SE}U^{\dagger})\) that acts on \(\rho_{S}\) and may be expressed in tomographic form, as presented in [25]:
\\[\\Phi[\\rho]=\\sum_{i}\\rho_{i}^{\\prime}\\mathrm{tr}\\left(D_{i}^{\\dagger}\\rho \\right), \\tag{1}\\]
where \\(\\rho_{i}^{\\prime}=\\Phi[\\rho_{i}]\\) is the output state of the map and \\(\\mathcal{S}=\\left\\{\\rho_{i}\\right\\}_{i=1}^{d^{2}}\\) spans the space of bounded linear operators \\(\\mathcal{B}(\\mathcal{H}_{d})\\) acting on the system Hilbert space (with dimension \\(d\\)). \\(\\{D_{j}\\}_{j=1}^{d^{2}}\\) are the duals of \\(\\mathcal{S}\\) such that \\(\\mathrm{tr}(D_{i}^{\\dagger}\\rho_{j})=\\delta_{ij}\\). The map itself can be represented as a matrix that acts on vectorized density operators as \\(\\Phi=\\sum_{i}\\rho_{i}^{\\prime}\\times D_{i}^{\\dagger}\\), where \\(\\times\\) indicates an outer product between vectorized quantities.
For any circuit implemented on the IBMQX4, we can perform state tomography on the output and reconstruct the output state via maximum likelihood estimation [33, 34, 50]. Knowing the input states in \(\mathcal{S}\), and hence the duals, gives us sufficient information to compute the map through Eq. (1), performing process tomography. Specifically, the full reconstruction signal chain is as follows:
1. Define the \\(4^{N}\\times 3^{N}\\) unique circuits required given preparation and measurement bases for full process tomography of a desired unitary operation \\(U\\) acting on \\(N\\) qubits.
2. Using the measurement outcome probability distributions, perform maximum likelihood estimation to determine the output state of each circuit.
3. Reconstruct the most likely process given the defined input and reconstructed output states for each circuit, retrieving the map \\(\\Phi_{U}\\).
As an explicit example of this process for a single qubit operator, one of the \\(4\\times 3=12\\) unique measurement configurations we would run on the IBMQX4 would be to first create one of the spanning input states; \\(|1\\rangle\\!\\langle 1|\\) in this example. Each qubit in the IBMQX4 is always initialised in the \\(|0\\rangle\\!\\langle 0|\\) state so we first apply a Pauli \\(X\\) to the qubit then apply the operator \\(U\\) to this prepared state. We now wish to know what the output state of \\(U\\) is, given an input of \\(|1\\rangle\\!\\langle 1|\\). Single qubit state tomography requires, at a minimum, measurement of three expectation values - \\(\\langle X\\rangle,\\langle Y\\rangle,\\langle Z\\rangle\\). The IBMQX4 can only perform projective measurements in the \\(Z\\) basis however and so to determine the others we must first rotate the output state of \\(U\\) before measuring \\(\\langle Z\\rangle\\). Continuing with our single configuration example, we choose to measure \\(\\langle X\\rangle\\), requiring a Hadamard \\(H\\) rotation prior to measurement of \\(Z\\). Repeating this many times allows computation of \\(\\langle X\\rangle\\). Thus we are ultimately measuring \\(\\mathrm{tr}\\left(ZHUX\\,|0\\rangle\\!\\langle 0|\\,XU^{\\dagger}H\\right)\\) as one of 12 measurements to compute the operator \\(U\\), or rather the map \\(\\Phi_{U}\\) when the desired operation \\(U\\) is performed on the IBMQX4.
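The single-configuration example above is easy to emulate numerically, which also makes the statistical error discussed in Appendix B concrete. In the sketch below the shot numbers are arbitrary and an ideal \(X\) stands in for the gate under reconstruction; the estimate of \(\langle X\rangle\) fluctuates around its true value of zero with a standard deviation of order \(1/\sqrt{N}\).

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

rho = X @ np.diag([1.0, 0.0]) @ X        # prepare |1><1| from the initial |0><0|
U = X                                     # ideal X stands in for the gate under test
rho_out = H @ (U @ rho @ U.conj().T) @ H.conj().T   # rotate so Z reads out <X>

p0 = rho_out[0, 0].real                   # probability of the Z = +1 outcome
for N in (1024, 8192):                    # arbitrary shot numbers
    counts = rng.binomial(N, p0)
    exp_X = 2 * counts / N - 1            # <X> of the gate output (true value: 0)
    print(N, round(exp_X, 3), "+/-", round(1 / np.sqrt(N), 3))
```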
Since IBM regularly re-calibrates the device, some care does need to be taken when performing the tomography experiments, so as not to mistake fluctuations in system parameters for environmental noise. While different gate combinations were run at different times, the tomography experiments for a particular gate combination \(U,V\) are always performed in immediate succession. Additionally, the experiments for each column of Fig. 3b and 3c were run sequentially, so that the differences in conditional behaviour for fixed \(V\) could be attributed solely to the variation in the preceding choice of gate \(U\).
## Appendix B Error contribution analysis
For the numerical data presented in Figs. 3 and 5, one can think of the uncertainty in these results as stemming from two independent sources. The first is the usual statistical uncertainty of a finite sampling method, while the other is due to state preparation and measurement (SPAM) errors. We begin with consideration of the statistical errors.
The output of the IBMQX4 is a probability distribution over the measured states, built from a user-defined number \(N\) of repeated experiments - the shot number. Assuming independent experiments, the standard deviation of the probabilities for each measured outcome scales as \(\delta=\frac{1}{\sqrt{N}}\). Since we assume no measurement error bias, this uncertainty defines a sphere (truncated by the space of valid quantum states) in \(\mathcal{B}(\mathcal{H})\) with the true state being the center and the radius being the standard deviation \(\delta\). Our analysis methods from this point involve taking this outcome distribution and estimating the most probable quantum state, then using this state tomography in process tomography, which is then used for comparing the temporal dependence of different quantum processes as in Figs. 3, 4, and 5. When using a maximum likelihood estimator it is not immediately clear how to propagate the statistical uncertainty. We can however simulate this process numerically and gain, at the very least, a lower bound on the uncertainty. By sampling Gaussian distributed states, with mean centered on the true state, we can propagate a large number of measured states through our analysis process and compute a distribution of values for the entries in Figs. 3, 4, and 5. The standard deviation of this distribution then indicates the uncertainty in our output values due to a finite number of experiments, giving a bound on the standard deviation of the values in Figs. 3, 4 and 5.
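A minimal version of this bootstrapping procedure is sketched below; the states, the comparison target, and the number of resamples are arbitrary placeholders, and the nearest-physical-state projection is a crude stand-in for the maximum likelihood step.

```python
import numpy as np

rng = np.random.default_rng(2)

def nearest_physical_state(m):
    """Crude MLE stand-in: Hermitize, clip negative eigenvalues, renormalize."""
    m = (m + m.conj().T) / 2
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0, None)
    return (v * w) @ v.conj().T / w.sum()

def trace_distance(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

rho_true = np.array([[0.9, 0.1], [0.1, 0.1]])   # placeholder "true" state
target = np.diag([1.0, 0.0])                    # placeholder comparison state
delta = 1 / np.sqrt(8192)                       # statistical scale for N = 8192 shots

samples = []
for _ in range(2000):
    noise = rng.normal(scale=delta, size=(2, 2)) \
        + 1j * rng.normal(scale=delta, size=(2, 2))
    samples.append(trace_distance(nearest_physical_state(rho_true + noise), target))

print(np.mean(samples), np.std(samples))        # spread bounds the quoted uncertainty
```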
Suppose for the moment that the preparation and measurement errors are in fact zero and there is only a difference between the map we think we are applying (the ideal unitary) and the actual one. The true process run in the noisy IBMQX4 then might be expressed as
\\[\\Phi_{U}=\\Phi_{U}^{\\mathrm{id}}+\\alpha_{U}, \\tag{10}\\]
with \\(\\Phi_{U}^{\\mathrm{id}}\\) being the ideal process we wish to apply and \\(\\alpha_{U}\\) being an error channel that introduces unwanted perturbations in the output states of \\(\\Phi_{U}\\). It is the behaviour of \\(\\alpha_{U}\\) and its dependence on the history of the system-environment that we have been concerned with in this paper. Our hope and assumption up until now is that the channel found through tomography
\\[\\Phi_{U}^{\\prime}=\\Phi_{U}^{\\mathrm{id}}+\\beta_{U}, \\tag{11}\\]
is exactly (10). For perfect measurement/preparation channels the tomography error channel \\(\\beta_{U}\\) is equivalent to \\(\\alpha_{U}\\) and the reconstructed channel and the actual are identical. Since in any real experiment this will not be true, we wish to know the error on the error itself
\\[e(U)=\\Phi_{U}-\\Phi_{U}^{\\prime}=\\alpha_{U}-\\beta_{U}. \\tag{12}\\]
We shall argue that even with the measurement errors accounted for, our results remain significant.
Process tomography is a three step process: state preparation, evolution and measurement. The preparation stage involves constructing an input set of states \(\rho_{i}\) that span the space of bounded linear operators \(\mathcal{B}(\mathcal{H}_{d})\) that \(\Phi_{U}\) acts on. If one knows how a map acts on a complete spanning set, the full process for an arbitrary input state may be derived. The IBMQX4 is always well initialized as \(\rho=\left|0\rangle\!\langle 0\right|^{\otimes 5}\), thus the preparation stage is a set of operations that map this initial state to an element of the spanning set. Since these maps will invariably have some error associated with them, the actual prepared states are \(\{\rho_{i}\}=\{\Phi_{i}[\left|0\rangle\!\langle 0\right|]+\gamma_{i}[\left|0\rangle\!\langle 0\right|]\}\). If the actual input states to the process \(\Phi_{U}\) have some error, then so too do the duals \(\{D_{i}^{\prime}\}=\{D_{i}+\delta_{i}\}\), as they are constructed based on an assumed spanning set, with \(\{D_{i}\}\) being the actual duals to \(\{\rho_{i}\}\).
When the input state drawn from \\(\\{\\rho_{i}\\}\\) is passed through the process \\(\\Phi_{U}\\) the channel can be thought of as acting on the desired input state and the error introduced by the preparation stage, with the output states of the channel becoming
\\[\\rho_{i}^{\\prime}=\\Phi_{U}\\circ\\Phi_{i}[\\left|0\\rangle\\!\\langle 0|\\right]+ \\Phi_{U}\\circ\\gamma_{i}[\\left|0\\rangle\\!\\langle 0|\\right]. \\tag{13}\\]
The set \\(\\{\\rho_{i}^{\\prime}\\}\\) is determined through state tomography, but, since we are restricted to purely \\(Z\\) projective measurements in the IBM system, measurements in the remaining elements of the Pauli basis (or any other measurement basis for that matter) first require the application of a rotation operator on the output states of \\(\\{\\rho_{i}^{\\prime}\\}\\) before projective measurement. This rotation in the measurement stage introduces another source of errors beyond simple statistical uncertainty. This is to say that we measure not \\(\\{\\rho_{i}^{\\prime}\\}\\) but another output state that is the output of \\(\\Phi_{U}^{\\prime}\\) plus the error \\(\\eta_{i}\\) introduced by imperfect rotation operators
\\[\\rho_{i}^{\\prime\\prime}=\\rho_{i}^{\\prime}+\\eta_{i}[\\rho_{i}^{\\prime}]. \\tag{14}\\]
If we then compute the channel defined by this new set of output states and the duals of the input states using Eq. (1), we find an expression for the reconstructed channel in terms of the actual channel and the error terms:
\[\Phi_{U}^{\prime}=\sum_{i}\rho_{i}^{\prime\prime}\times D_{i}^{\prime\dagger}=\sum_{i}\left(\rho_{i}^{\prime}+\eta_{i}[\rho_{i}^{\prime}]\right)\times\left(D_{i}^{\dagger}+\delta_{i}^{\dagger}\right). \tag{15}\]
If we then substitute this into Eq. (12), express the result in terms of compositions of the channels and simplify, we arrive at an expression for the error \(e(U)\):
\[e(U)=\sum_{i}\left(\Phi_{U}\circ\gamma_{i}\left[|0\rangle\!\langle 0|\right]+\eta_{i}\circ\Phi_{U}\circ\Phi_{i}\left[|0\rangle\!\langle 0|\right]\right)\times D_{i}^{\dagger}+\Phi_{U}\circ\Phi_{i}\left[|0\rangle\!\langle 0|\right]\times\delta_{i}^{\dagger}+\mathcal{O}(\eta_{i}\circ\gamma_{i},\;\eta_{i}\circ\delta_{i},\;\gamma_{i}\circ\delta_{i}).\]

Considering only terms up to first order in the errors, we see that the SPAM errors at each stage of the tomography each contribute an independent error term. Ignoring the second order errors, the first order preparation/measurement errors are due solely to the imperfection of the preparation/measurement channels, with the finite sampling contributing to the latter as well. Previously we argued that the finite sampling is a small contribution, and by IBM's own gate analysis (using randomised benchmarking, available as part of the IBMQX4 interface) we know that the errors in the gates used for preparation and projective measurement are small as well. This lets us conclude that each term in the above equation is small at first order, and thus so too is \(e(U)\), indicating that the results presented in the main text are due to non-Markovian behaviour rather than a simple infidelity.
###### Acknowledgements.
We thank Nathan Langford and Chris Wood for insightful conversations. KM is supported through Australian Research Council Future Fellowship FT160100073.
## References
* [1] I. M. Georgescu, S. Ashhab, and F. Nori, Rev. Mod. Phys. **86**, 153 (2014).
* [2] J. T. Barreiro, M. Müller, P. Schindler, D. Nigg, T. Monz, M. Chwalla, M. Hennrich, C. F. Roos, P. Zoller, and R. Blatt, Nature **470**, 486 (2011).
* [3] R. P. Feynman, Int. J. Theor. Phys. **21**, 467 (1982).
* [4] R. Blümel, Phys. Rev. Lett. **73**, 428 (1994).
* [5] E. Knill, D. Leibfried, R. Reichle, J. Britton, R. B. Blakestad, J. D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D. J. Wineland, Phys. Rev. A **77**, 012307 (2008).
* [6] T. Proctor, K. Rudinger, K. Young, M. Sarovar, and R. Blume-Kohout, Phys. Rev. Lett. **119**, 130502 (2017).
* [7] S. T. Merkel, J. M. Gambetta, J. A. Smolin, S. Poletto, A. D. Córcoles, B. R. Johnson, C. A. Ryan, and M. Steffen, Phys. Rev. A **87**, 062119 (2013).
* [8] J. J. Wallman, Quantum **2**, 47 (2018).
* [9] F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Phys. Rev. Lett. **120**, 040405 (2018).
* [10] F. Costa and S. Shrapnel, New J. Phys. **18**, 063032 (2016).
* [11] Z.-X. Man, Y.-J. Xia, and R. Lo Franco, Phys. Rev. A **92**, 012315 (2015).
* [12] O. Oreshkov and T. A. Brun, Phys. Rev. A **76**, 022318 (2007).
* [13] D. Aharonov, A. Kitaev, and J. Preskill, Phys. Rev. Lett. **96**, 050504 (2006).
* [14] J. Preskill, arXiv:1207.6131 (2012).
* [15] G. Kalai, arXiv:0904.3265 (2009).
* [16] H. Ball, T. M. Stace, S. T. Flammia, and M. J. Biercuk, Phys. Rev. A **93**, 022303 (2016).
* [17] Y. Wang, Y. Li, Z.-Q. Yin, and B. Zeng, arXiv:1801.03782 (2018).
* [18] S. Liu, Y. Li, and R. Duan, arXiv:1807.00429 (2018).
* [19] G. Vissers and L. Bouten, arXiv:1811.09657 (2018).
* [20] R. Harper and S. Flammia, arXiv:1806.02359 (2018).
* [21] R. K. Singh, B. Panda, B. K. Behera, and P. K. Panigrahi, arXiv:1807.02883 (2018).
* [22] B. Pokharel, N. Anand, B. Fortman, and D. A. Lidar, Phys. Rev. Lett. **121**, 220502 (2018).
* [23] J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A **76**, 042319 (2007).
* [24] T. Häner, D. S. Steiger, K. Svore, and M. Troyer, arXiv:1604.01401 (2016).
* [25] S. Milz, F. A. Pollock, and K. Modi, Open Syst. Inf. Dyn. **24**, 1740016 (2017).
* [26] S. Milz, M. S. Kim, F. A. Pollock, and K. Modi, arXiv:1901.05223 (2019).
* [27] A. Gilchrist, N. K. Langford, and M. A. Nielsen, Phys. Rev. A **71**, 062310 (2005).
* [28] G. Benenti and G. Strini, J. Phys. B **43**, 215508 (2010).
* [29] J. Maziero, Braz. J. Phys. **45**, 575 (2015).
* [30] P. Taranto, F. A. Pollock, S. Milz, M. Tomamichel, and K. Modi, arXiv:1805.11341 (2018).
* [31] P. Taranto, S. Milz, F. A. Pollock, and K. Modi, arXiv:1810.10809 (2018).
* [32] J. Watrous, arXiv:0901.4709 (2009).
* [33] S. Diamond and S. Boyd, J. Mach. Learn. Res. **17**, 1 (2016).
* [34] A. Agrawal, R. Verschueren, S. Diamond, and S. Boyd, Journal of Control and Decision **5**, 42 (2018).
* [35] F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Phys. Rev. A **97**, 012127 (2018).
* [36] S. Milz, F. Sakuldee, F. A. Pollock, and K. Modi, arXiv:1712.02589 (2017).
* [37] S. Shrapnel, F. Costa, and G. Milburn, New J. Phys. **20**, 053010 (2018).
* [38] F. Costa, M. Ringbauer, M. E. Goggin, A. G. White, and A. Fedrizzi, Phys. Rev. A **98**, 012328 (2018).
* [39] C. Giarmatzi and F. Costa, arXiv:1811.03722 (2018).
* [40] S. Milz, F. A. Pollock, and K. Modi, Phys. Rev. A **98**, 012108 (2018).
* [41] Relative entropy for four qubits can be as large as 4, but due to causality conditions this is not achievable for the process tensor.
* [42] A. Shukla, M. Sisodia, and A. Pathak, arXiv:1805.07185 (2018).
* [43] F. Sakuldee, S. Milz, F. A. Pollock, and K. Modi, J. Phys. A **51**, 414014 (2018).
* [44] L. Li, M. J. W. Hall, and H. M. Wiseman, Phys. Rep. **759**, 1 (2018).
* [45] C. Giarmatzi and F. Costa, npj Quantum Information **4**, 17 (2018).
* [46] S. Shrapnel, F. Costa, and G. Milburn, Int. J. Quant. Inf. **16**, 1840010 (2018).
* [47] D. Greenbaum, arXiv:1509.02921 (2015).
* [48] "The IBM Quantum Experience," http://www.research.ibm.com/quantum/.
* [49] L. Heyfron and E. T. Campbell, arXiv:1712.01557 (2017).
* [50] K. Banaszek, G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Phys. Rev. A **61**, 010304 (1999).
M. V. Jabir
1*N. Apurv Chaitanya
1*Manoj Mathew
2*G. K. Samanta1
1*Photonic Sciences Lab., Physical Research Laboratory, Navarangpura, Ahmedabad 38ooog, Gujarat, India.
2National Centre for Biological Sciences, Bengaluru, India.
*Corresponding author: [email protected]_
## 1 Introduction
Entanglement, the quintessential strong non-classical correlation in the joint measurement of at least two separate quantum systems, plays a critical role in many important applications in quantum information processing, including quantum communication [1], quantum computation [2], quantum cryptography [3], dense coding [4] and teleportation [5]. Typically, in photonic quantum optics, spontaneous parametric down-conversion (SPDC) is used to produce correlated photon pairs [6, 7, 8, 9] with many accessible degrees-of-freedom (DoF) that can be exploited for the production of entanglement. Since the first demonstration of entanglement in the polarization DoF [10, 11], recent advancements in quantum optics have provided intrinsic entanglement (entanglement in a variety of DoFs such as orbital angular momentum (OAM) [12], energy time [13], time bin [14], and many more [15]), hyperentanglement [16] (entanglement in every DoF) and hybrid entanglement (entanglement between different DoFs of a pair of particles). While these entangled states have various applications, the hybrid entangled states encoded with polarization and OAM, in particular, allow the generation of qubit-qudit entangled states [17] for quantum information, macroscopic entanglement with very high quanta of OAM [18], important for quantum information science, and supersensitive measurement of angular displacement in remote sensing [19].
Typically, hybrid entangled states encoded with polarization and OAM are generated by imprinting a chosen amount of OAM onto a high-fidelity polarization entangled state using mode converters, such as spatial light modulators (SLM) [20] and q-plates [21], in complicated experimental schemes. Compared to other mode converters, the SLM has many advantages in terms of dynamic variation of the OAM, accessibility to very high OAM, and flexibility in imprinting two particles with an arbitrarily high difference in OAM [18]. However, the diffraction-based OAM imprinting process of the SLM reduces the overall number of photons, thus limiting the use of hybrid states for practical applications requiring entangled states with high brightness. To circumvent this problem, it is imperative to devise alternative techniques to produce hybrid entangled states in simple experimental schemes.
Entanglement properties of the paired photons generated through the SPDC process are highly influenced by different crystal parameters, including birefringence and length, and by the spatial structure of the pump beam [22, 23]. Recent studies have shown the transfer of pump properties such as non-diffraction [24], intensity distribution and phase structure [25, 26] into the transverse amplitude of heralded single photons. Therefore, one can in principle manipulate the pump beam to directly generate hybrid entangled states through the SPDC process.
On the other hand, light beams with non-separable states in polarization and OAM [27, 28] have attracted a great deal of interest due to their violation of Bell-like inequalities [29, 30]. Here we propose, for the first time to the best of our knowledge, the direct transfer of a non-separable laser beam into a hybrid entangled two-photon state in a simple experimental scheme. As a proof of principle, by pumping contiguous nonlinear crystals with a classical non-separable pump beam of OAM mode \(l\)=1 and 3, and characterizing the quantum state through the violation of Bell's inequalities and the measurement of the entanglement witness operator (W) for the twin photons, we show that the generated two photons are entangled in both polarization and OAM. The concept is generic and can be used for hybrid entanglement with higher OAM, and for photons with an arbitrarily high difference in OAM, through proper choice of the non-separable state of the pump beam. The concern of the rapidly decreasing efficiency of the down-conversion process for the direct generation of entanglement of higher OAM can be overcome by making the beam size of the non-separable state independent of the OAM order, using the scheme of Refs. [23, 31].
## 2 Experimental setup
The schematic of the experimental setup is shown in Fig. 1(a). A continuous-wave, single-frequency (<12 MHz) UV laser providing 70 mW of output power at 405 nm in a Gaussian spatial profile is used as the pump laser. The laser power to the experiment is controlled using a half-wave plate (\(\lambda/2\)) and a polarizing beam splitter (PBS1) cube. A second \(\lambda/2\) plate placed after PBS1 converts the linearly polarized Gaussian beam represented by the state \(|H\ 0\rangle\) (here, the first and second terms of the ket represent the polarization and OAM of the beam respectively) into \(\psi 1=1/\sqrt{2}(|H\ 0\rangle+|V\ 0\rangle)\). Here, \(H\) and \(V\) represent horizontal and vertical polarization respectively, and the Gaussian beam has OAM mode \(l\)=0. To prepare the classical non-separable state, the pump beam is passed through a polarization Sagnac interferometer comprising PBS2, three mirrors (M) and a polarization independent spiral phase plate, SPP. The PBS2 splits the pump state \(\psi 1\) into two counter-propagating beams in the Sagnac interferometer, with the \(|H\ 0\rangle\) and \(|V\ 0\rangle\) beams propagating in the counter-clockwise (CCW) and clockwise (CW) directions respectively. After a round trip both beams recombine in the PBS2 and would produce an output state the same as the input state, \(\psi 1\). However, due to the presence of the SPP, which converts the Gaussian beam (\(l\)=0) into an optical vortex of OAM mode \(\pm l\), the output state of the Sagnac interferometer is different from the input state. Depending on the direction of thickness variation of the SPP, if the CCW beam, \(|H\ 0\rangle\), while passing through the SPP in the Sagnac interferometer acquires a spiral phase corresponding to an optical vortex of order \(+l\) (say), then the CW beam, \(|V\ 0\rangle\), will acquire an optical vortex of order \(-l\), and vice versa. As a result, the output of the Sagnac interferometer can be represented by the classical non-separable state, \(\psi 2=1/\sqrt{2}(|H\ l\rangle+e^{-i\phi}|V-l\rangle)\). The phase factor, \(\phi\), arises due to the asymmetric positioning of the SPP inside the Sagnac interferometer. Two contiguous BIBO (bismuth borate) crystals, each 0.6 mm thick and 10 x 10 mm\({}^{2}\) in aperture, with optic axes aligned in perpendicular planes, are used as the nonlinear crystal for the SPDC process. Both crystals are cut with \(\theta\)=151.7\({}^{\circ}\) (\(\phi\)=90\({}^{\circ}\)) in the optical yz-plane for perfect phase-matching of non-collinear type-I (\(e\rightarrow o+o\)) degenerate down-converted photons at 810 nm in a cone of half-opening angle \(\sim 3^{\circ}\) for normal incidence of the pump. The orthogonal positioning of the optic axes of the BIBO crystals allows the pump photons in both the \(|H\rangle\) and \(|V\rangle\) polarization states to produce their respective non-collinear SPDC photons in concentric cones around the direction of the pump beam. For entanglement studies, we select two diametrically opposite points of the SPDC ring [red circle in Fig. 1(a)] in the horizontal plane, named Arm-1 and Arm-2. In Arm-1, we conditioned one of the down-converted photons (say the idler photon) using a hard aperture of diameter \(\sim\)460 \(\mu\)m and a multimode fiber [25].
To herald its partner photon (here the signal) we collected the photons in Arm-2 using a collimator with an opening diameter of \(\sim\)460 \(\mu\)m and a multi-mode fiber assembly placed on an \(xy\) scanning stage. The photons of each arm are analyzed through coincidence counting using a combination of single photon counting modules (SPCM) and a time-to-digital converter (TDC). A time window of 2.5 ns was used to measure the coincidence counts. A quarter-wave plate, \(\lambda\)/4, and analyzers, A, [each comprising a \(\lambda\)/2 plate and a PBS, as shown in Fig. 1(b)] are used to analyze the photons in the polarization basis. An interference filter (IF) with a transmission bandwidth of \(\sim\)10 nm centered at 810 nm is used to extract degenerate photons from the broad spectrum of the SPDC process. Figures 1(c)-(e) show the spatial distribution of the non-separable state of the pump beam, the conditioned idler and the heralded signal photons respectively.
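It is worth noting that the non-separable pump state prepared above lives in a simple two-qubit (polarization \(\otimes\) OAM) space, so its non-separability can be checked numerically. The following minimal Python sketch (the phase \(\phi\) is arbitrary) computes the Schmidt coefficients of \(\psi 2\); a Schmidt number \(K=2\) corresponds to a maximally non-separable state.

```python
import numpy as np

phi = 0.4                       # relative phase set by the SPP position (arbitrary)
# Coefficient matrix c[pol, oam]: rows (H, V); columns (+l, -l)
c = np.array([[1, 0],
              [0, np.exp(-1j * phi)]]) / np.sqrt(2)

s = np.linalg.svd(c, compute_uv=False)      # Schmidt coefficients
K = 1 / np.sum(s**4)                         # Schmidt number (K = 1: separable)
print(s, K)                                  # [0.707, 0.707], K = 2 -> maximally
                                             # non-separable in pol. and OAM
```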
## 3 Results and discussions
In the non-collinear SPDC process, where a high-energy photon, owing to energy conservation, splits into two low-energy photons known as the signal and idler, the generated entangled photons are distributed in a ring with signal and idler photons lying at diametrically opposite points [see Fig. 1(a)]. To study the entanglement quality of the down-converted photons in the current experimental scheme we pumped the dual-BIBO crystal with the input state \(\psi 1=1/\sqrt{2}(|H\ 0\rangle+|V\ 0\rangle)\) in a Gaussian spatial intensity distribution by removing the SPP from the Sagnac interferometer. Due to the orthogonal positioning of the optic axes of the crystals, if the first crystal converts pump photons of \(|H\rangle\) polarization into down-converted photons in the \(|VV\rangle\) state owing to type-I phase-matching, the second crystal converts pump photons of \(|V\rangle\) polarization into down-converted photons in the \(|HH\rangle\) state, and vice versa. Since the crystal thickness is small and the coherence length of the pump laser is very high (\(\sim\)25 m), the photons generated from both crystals are highly indistinguishable in space and time. Therefore, one can write the output state, \(\psi\), of the down-converted photons as a superposition of the individual states \(|HH\rangle\) and \(|VV\rangle\). However, one needs to determine the state and the quality of the polarization entanglement of the two-photon state. In doing so we used the standard coincidence measurement technique and recorded the two-photon interference in terms of photon coincidences between the twin photons distributed in Arm-1 and Arm-2 under two
Figure 1: **Direct generation of hybrid entangled two photon state.** (a) Schematic of the experimental setup. Laser 405 nm, continuous-wave single-frequency diode laser at 405 nm providing 70 mW of output power; \(\lambda\)/2, half-wave plate; PBS1-2, polarizing beam splitter cubes; SPP, spiral phase plate; M, mirrors; the part marked in yellow represents the polarization Sagnac interferometer; C, dual-BIBO crystal having optic axes orthogonal to each other for the generation of entangled photons; A, analyser; \(\lambda\)/4, quarter-wave plate; IF, interference filter; D1-2, single photon counting modules (SPCM); TDC, time-to-digital converter. (b) The analyser comprises a PBS and a \(\lambda\)/2 plate. Spatial distribution of the (c) non-separable pump beam, (d) conditioned idler photon and (e) heralded signal photons.
non-orthogonal projection bases, H/V (horizontal/vertical) and D/A (diagonal/anti-diagonal), using two polarization analyzers as the quantum state analyzer, with the results shown in Fig. 2(a). As expected, the normalized coincidence rate measured over 10 s shows the expected sinusoidal variation with the angle of the quantum state analyzer, with fringe visibilities 99.7\(\pm\)0.03% and 96.9\(\pm\)0.04% for the H (red dots) and D (black dots) bases respectively. The measured visibilities in both bases are higher than 71% [32], large enough to violate Bell's inequality. Using the coincidence rates we measured the Bell parameter to be \(\mathrm{S}\) = 2.73\(\pm\)0.04, indicating the polarization entanglement of the generated two-photon state. We also constructed the density matrix of the state using the linear tomographic technique [33]. Figure 2(b) shows the graphical representation of the density matrix of the generated state. From this analysis we determine the state to be \(\psi 3=1/\sqrt{2}(|HH\rangle-|VV\rangle)\) with a fidelity of 0.992.
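The measured fringe visibilities and the Bell parameter are mutually consistent, as the short sketch below illustrates: for a Bell state degraded by symmetric (visibility-reducing) noise, the CHSH value at the optimal analyzer settings is approximately \(S=2\sqrt{2}\,\bar{V}\), with \(\bar{V}\) the mean visibility. This is only a rough consistency check under that noise assumption, not the estimator used for the quoted \(S\).

```python
import numpy as np

# Fringe visibilities measured in the two mutually unbiased bases
V_H, V_D = 0.997, 0.969
V_avg = (V_H + V_D) / 2

# CHSH parameter at optimal settings under a symmetric-noise assumption
S = 2 * np.sqrt(2) * V_avg
print(S)          # ~2.78, consistent with the measured S = 2.73 +/- 0.04
```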
For hybrid entanglement, we prepared the classical pump beam in a non-separable state [28] in the OAM and polarization DoFs by placing the SPP inside the polarization Sagnac interferometer, with phase variation corresponding to OAM modes of \(l\)=1, 2 and 3. The non-separable state of the pump beam incident on the nonlinear crystal can be expressed as \(\psi 2=1/\sqrt{2}(|H\ l\rangle+e^{-i\phi}|V-l\rangle)\), with the intensity distribution shown in Fig. 1(c). To verify the non-separability, i.e., that measurement in one DoF influences the outcome of the measurement in the other DoF, we projected the pump states having OAM modes of \(l\)=1 and 3 onto different polarization states, horizontal (\(|H\rangle\)), vertical (\(|V\rangle\)), anti-diagonal (\(|A\rangle\)), diagonal (\(|D\rangle\)), left circular (\(|L\rangle\)) and right circular (\(|R\rangle\)) on the Poincaré sphere, and recorded the intensity of the beam, with the results shown in Fig. 3. As evident from Fig. 3, the projection of the pump state onto the \(|H\rangle\) and \(|V\rangle\) states results in vortex intensity profiles with OAM order \(l\) and \(-l\) respectively. However, the projection onto the \(|A\rangle\), \(|D\rangle\), \(|L\rangle\) and \(|R\rangle\) states produces intensity distributions corresponding to the superposition of two opposite helical wavefronts of OAM order \(l\), resulting in a ring lattice structure containing \(2l\) petals at different orientations. All these projected intensity distributions can be represented by different points on the LG-Bloch (Poincaré) sphere. The change in the images of Fig. 3(a) and Fig. 3(b), representing the projection of the pump states corresponding to OAM modes of \(l\)=1 and 3 respectively, with the change in polarization projection verifies the generation of non-separable states. The inset images of Fig. 3(a) and 3(b) show the intensity profiles of the pump states without any projection.
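The petal structure and its orientation follow directly from the interference of the two helical components. A minimal numerical sketch is given below; it evaluates the azimuthal intensity \(|e^{il\theta}+e^{i\varphi}e^{-il\theta}|^{2}=4\cos^{2}(l\theta-\varphi/2)\) for a few projection phases \(\varphi\) (chosen by hand here to mimic the D, A and L projections) and returns the angle of the first petal maximum.

```python
import numpy as np

def petal_intensity(l, proj_phase, theta):
    """|exp(i l theta) + exp(i phase) exp(-i l theta)|^2 = 4 cos^2(l theta - phase/2)."""
    amp = np.exp(1j * l * theta) + np.exp(1j * proj_phase - 1j * l * theta)
    return np.abs(amp)**2

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # 1 degree steps
l = 3                                                    # six petals
for label, ph in (("D", 0.0), ("A", np.pi), ("L", np.pi / 2)):
    I = petal_intensity(l, ph, theta)
    print(label, np.degrees(theta[np.argmax(I)]))        # D: 0, A: 30, L: 15 deg
```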
With the successful generation and verification of the non-separable state, we pumped the dual-BIBO crystal with the state \(\psi 2=1/\sqrt{2}(|H\ l\rangle+e^{-i\phi}|V-l\rangle)\) for the direct transfer of the classical non-separable state into hybrid entangled two-photon states. According to OAM conservation in nonlinear processes [34], the OAM of the pump photon should be equal to the sum of the OAMs of the generated signal and idler photons, \(l_{p}=l_{s}+l_{i}\) [35, 36]. Here, \(l_{p}\), \(l_{s}\) and \(l_{i}\) are the OAM of the pump, signal and idler photons respectively. Since the OAM conservation law does not impose any selection rule on the OAMs of the signal and idler photons, the OAMs of the signal and idler can take arbitrary values varying randomly subject to the conservation law. However, if we force the signal or idler to carry a fixed OAM value, then its partner photon will have a particular OAM with certainty. For example, if we project either the signal or the idler photons into the Gaussian mode (\(l\)=0), then the OAM of the idler or signal photon will be equal to that of the pump photon, indicating the possibility of direct transfer of the non-separable state in the OAM and polarization DoFs of the pump into the down-converted photons. Therefore, in the present experiment, the state of the paired photons (considering idler photons in the Gaussian mode) generated from the dual-BIBO crystal when pumped with the non-separable state \(\psi 2\) can be written as \(\psi 4=\alpha|V\ l\rangle+e^{-i\phi}\beta|H-l\rangle\) (first entry: idler polarization; second entry: signal OAM) when the signal photon is projected by the analyzer onto the D polarization. However, if the idler photon is projected onto the D polarization (for example), the paired photon state is transformed to \(\psi 5=|D\rangle\otimes[\alpha|l\rangle+e^{-i\phi}\beta|-l\rangle]\), where \(\alpha\), \(\beta\) and \(\phi\) are real constants and \(\alpha^{2}+\beta^{2}=1\). As a proof of principle, in the present experimental scheme, we pumped the crystal with the non-separable state \(\psi 2\) for two different values of OAM (\(l\)=1, 3) as shown in Fig. 3, projected the photon (idler) of Arm-1 into the Gaussian mode (\(l\)=0) using a lens and multi-mode fiber, similar to Refs. [24, 25], and measured the spatial distribution of the heralded photon (signal) in the form of coincidence counts per 10 s in the transverse [\(x\)-\(y\), see Fig. 1(a)] plane of Arm-2, with the results shown in Fig. 4. As evident from the gallery of images of Fig. 4(a) and Fig. 4(b), representing the spatial distribution of the heralded signal photons for the non-separable states corresponding to OAM modes \(l\)=1 and \(l\)=3 respectively, the projection of the idler photon in Arm-1 onto different states, \(|H\rangle\), \(|A\rangle\), \(|R\rangle\), \(|V\rangle\), \(|D\rangle\) and \(|L\rangle\) (as marked by the white letters in the images), on the polarization Poincaré sphere, with the help of a \(\lambda/4\) plate and analyzer, A, directly projects the probability distribution of the heralded signal photons in Arm-2, a ring lattice spatial structure with \(2l\) petals, onto different points of the LG-Bloch sphere. Such an observation intuitively gives the impression
Fig. 3: **Intensity profiles of the non-separable states of the pump beam for two different OAM modes**. Depending on the projection of the beam onto different polarization states, \(|H\rangle\), \(|A\rangle\), \(|R\rangle\), \(|V\rangle\), \(|D\rangle\) and \(|L\rangle\) (as shown by the white letters on the images), on the Poincaré sphere, the mode pattern of the beam recorded by the CCD camera changes to different points on the LG-Bloch sphere for (**a**) \(1/\sqrt{2}(|H\ 1\rangle+e^{-i\phi}|V-1\rangle)\) and (**b**) \(1/\sqrt{2}(|H\ 3\rangle+e^{-i\phi}|V-3\rangle)\). Inset images are the intensity distributions of the non-separable states for OAM modes \(l\)=1 and \(l\)=3 without any polarization projection.
Fig. 2: **Study of two-photon polarization entanglement and identification of the state.** (a) Visibility graph for polarization entangled state at H (red dots) and D (black dots) bases. (b) Graphical representation of the density matrix obtained from linear tomographic technique for polarization entangled state.
of the generation of a hybrid entangled two-photon state through the direct transfer of the non-separable state of the pump.
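This heralding logic is again simple two-qubit linear algebra and can be sketched explicitly. In the toy code below (phases and basis conventions chosen for illustration only), projecting the idler polarization onto A, D or L leaves the signal in a superposition of \(|{+}l\rangle\) and \(|{-}l\rangle\) whose relative phase, and hence petal orientation, depends on the chosen projection, which is what Fig. 4 displays.

```python
import numpy as np

phi = 0.0                                  # SPP-position phase, arbitrary here
# Two-photon amplitudes c[idler_pol, signal_oam]; rows (V, H), cols (+l, -l)
c = np.array([[1.0, 0.0],
              [0.0, np.exp(-1j * phi)]]) / np.sqrt(2)

def herald(c, bra_pol):
    """Signal OAM state conditioned on projecting the idler polarization."""
    out = bra_pol.conj() @ c
    return out / np.linalg.norm(out)

projections = {"A": np.array([1, -1]) / np.sqrt(2),
               "D": np.array([1, 1]) / np.sqrt(2),
               "L": np.array([1, 1j]) / np.sqrt(2)}

for name, ket in projections.items():
    a_plus, a_minus = herald(c, ket)
    # The relative phase between |+l> and |-l> sets the petal orientation,
    # rotating the lattice by (relative phase)/(2l)
    print(name, np.degrees(np.angle(a_minus / a_plus)))
```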
However, for confirmation and a quantitative study of the entanglement we exploit the features of the LG modes. It is well known that the superposition of two equal OAM modes with opposite helicities, \(|\mathrm{LG}_{\pm l}\rangle=|l\rangle+e^{-i\phi}|-l\rangle\), results in a radially symmetric ring lattice with \(2l\) petals. The relative phase \(\phi\) between the two OAM modes results in a spatial rotation of \(\frac{\phi}{2\pi}\times\frac{360^{\circ}}{2l}\) [5], which can in principle be used to identify and discriminate between different superpositions of the OAM modes. To distinguish the spatial rotation, and therefore the different superpositions, for the verification of entanglement in the OAM and polarization DoFs, we have evaluated the coincidence counts of the heralded signal photons per angular region, \(\theta\), from the coincidence images (see insets of Fig. 5) for the idler polarization projected onto the mutually unbiased bases \(|A\rangle\), \(|D\rangle\) and \(|R\rangle\), \(|L\rangle\) for the pump OAM modes \(l\)=1 and \(l\)=3, with the results shown in Fig. 5(a) and Fig. 5(b) respectively. Let us assume that at an angle \(\theta\) we have the maximum coincidence for the anti-diagonal (A) projection. The \(n^{th}\) maximum in the same projection then appears at an angle \(\theta+n\frac{360^{\circ}}{2l}\), where \(n\) is an integer in the range \(1\leq n\leq 2l\). The angular separation between two consecutive maxima is \(\frac{360^{\circ}}{2l}\). On the other hand, in the mutually unbiased (say, L) and orthogonal (D) bases the maxima shift to the angles \(\theta+\frac{45^{\circ}}{l}\) and \(\theta+\frac{90^{\circ}}{l}\) respectively. As evident from Fig. 5(a), for OAM mode \(l\)=1, two consecutive maxima in the A-projection (black dots) occur at \(\theta\)=0\({}^{\circ}\) and \(\theta+\frac{360^{\circ}}{2l}\)=180\({}^{\circ}\), and the maxima in the L- (blue dots) and D- (red dots) projections have angular shifts of \(\theta+\frac{45^{\circ}}{l}\)=45\({}^{\circ}\) and \(\theta+\frac{90^{\circ}}{l}\)=90\({}^{\circ}\) respectively with respect to the A-projection. Similarly, for \(l\)=3, as evident from Fig. 5(b), we observe two consecutive maxima in the same projection basis to have an angular separation of 60\({}^{\circ}\), and the maxima in the L- (blue dots) and D- (red dots) projections have angular shifts of 15\({}^{\circ}\) and 30\({}^{\circ}\) respectively with respect to the maxima in the A-projection. Such spatial rotation of the OAM mode of the heralded signal photon upon projection of the idler onto different points of the polarization Poincaré sphere confirms the entanglement in both the OAM and polarization DoFs. However, to estimate the entanglement quality we have calculated the entanglement witness operator, \(W=V_{R/L}+V_{D/A}\), using the quantum interference visibilities \(V_{R/L}\) and \(V_{D/A}\) in the two mutually unbiased bases R/L and D/A respectively. For all separable states the entanglement witness operator satisfies the inequality \(W\leq 1\), and exceeding this limit verifies entanglement [20, 37]. Using the values of \(V_{R/L}\) and \(V_{D/A}\) we estimate the entanglement witness operator for OAM modes \(l\)=1, 2 and 3 to have the values \(W\)= 1.56\(\pm\)0.04, 1.40\(\pm\)0.04 and 1.25\(\pm\)0.03, clearly violating the inequality by more than 14, 10 and 8 standard deviations respectively.
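The witness evaluation itself is elementary once the visibilities are extracted from the angular fringes; the sketch below shows the arithmetic together with standard error propagation. The individual visibilities used here are illustrative placeholders chosen to reproduce the quoted \(W\) for \(l\)=1, since only the combined witness values are reported in the text.

```python
import numpy as np

# Quantum-interference visibilities in the two mutually unbiased bases.
# Illustrative placeholders: only W itself is quoted in the text for l = 1.
V_RL, dV_RL = 0.79, 0.03
V_DA, dV_DA = 0.77, 0.03

W = V_RL + V_DA                       # entanglement witness, W <= 1 if separable
dW = np.hypot(dV_RL, dV_DA)           # uncorrelated-error propagation
print(f"W = {W:.2f} +/- {dW:.2f}")
print(f"{(W - 1) / dW:.1f} standard deviations above the separable bound")
```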
It is to be noted that we did not apply any background correction to the experimental results. The slightly lower violation of the inequality for \(l\)=3 with respect to that of \(l\)=1 can be attributed to errors in the visibility data due to the low signal-to-noise ratio in the spatial distribution of the smaller number of down-converted photons. A stronger violation would require an increase in the number of down-converted photons. Nonetheless, the present study confirms that the generated twin photons are entangled in both the OAM and polarization DoFs.
## 4 Conclusions
In conclusion, we have successfully demonstrated a novel scheme for generating hybrid entangled two-photon states. Pumping contiguous nonlinear crystals with a non-separable pump beam of OAM mode \(l\)=1 and 3, we have generated two-photon hybrid entangled states. Characterization of the generated quantum state through tomography and the violation of the Bell inequality shows high-quality polarization entanglement, whereas the measurement of the entanglement witness parameter verifies the generation of two-photon states entangled in both the OAM and polarization DoFs. The concept is generic and can be used to produce hybrid entangled states with higher quanta of OAM, and photons with an arbitrarily high difference in OAM, through the proper selection of the non-separable state of the pump beam.
## References
* [1] N. Gisin and R. Thew, "Quantum communication," Nat. Photonics 1, 165-171 (2007).
* [2] David DiVincenzo, Emmanuel Knill, Raymond Laflamme, and W. Zurek, "Special issue on quantum coherence and decoherence," Proc. R. Soc. London, Ser. A 1969 (1998).
* [3] Artur K. Ekert, \"Quantum cryptography based on Bell's theorem\", Phys. Rev. Lett. 67, 661 (1991).
* [4] Charles H. Bennett, and Stephen I. Wiesner, \"Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states,\" Phys. Rev. Lett. 69, 2881(1992).
* [5] Charles H. Bennett, Gilles Brassard, Claude Crepeau, Richard Jozsa, Asher Peres, and William K. Wootters, \"Teleporting an unknown quantum
Figure 4: **Gallery of images representing the probability distribution of the heralded signal photons**. Depending on the polarization of the idler photon projected (white letters in the images) at different points in the polarization Poincaré sphere, the spatial distribution of the heralded signal changes into different mode patterns in the LG-Bloch sphere. The sequence of coincidence images of the heralded signal due to a particular polarization of its partner (idler) photon, while pumping with the non-separable state of (a) first order, and (b) third order LG modes, around the meridian (vertical circle) and the equator (horizontal circle) in the LG-Bloch sphere confirms the generation of hybrid entangled two-photon states encoded in polarization and OAM. In the absence of any polarizer in the path of the idler photon, the spatial distribution of the heralded signal photon (inset) shows a statistical mixture of all states of the LG-Bloch sphere.
Figure 5: **Distribution of heralded signal photon number per angular region**. Coincidence per angular region for A (black dots), D (red dots), R (pink dots) and L (blue dots) projections of the idler photon for OAM (a) \\(l\\)=1, (b) \\(l\\)=3. The lines are theoretical fits to the experimental data. (Inset) Image of the probability distribution of the heralded signal photons. Errors are estimated from iteration.
state via dual classical and Einstein-Podolsky-Rosen channels,\" Phys. Rev. Lett. 70, 1895 (1993).
* (6) D. Burnham, and D. Weinberg, "Observation of Simultaneity in Parametric Production of Optical Photon Pairs," Phys. Rev. Lett. 25, 84-87 (1970).
* (7) C. Hong, and L. Mandel, "Theory of parametric frequency down conversion of light," Phys. Rev. A 31, 2409-2418 (1985).
* (8) R. Ghosh, and L. Mandel, "Observation of nonclassical effects in the interference of two photons," Phys. Rev. Lett. 59, 1903 (1987).
* (9) S. P. Walborn, C. H. Monken, S. Padua, and P. H. Souto Ribeiro, "Spatial correlations in parametric down-conversion," Phys. Rep. 495, 87-139 (2010).
* (10) Z. Y. Ou, and L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment," Phys. Rev. Lett. 61, 50 (1988).
* (11) Paul G. Kwiat, Klaus Mattle, Harald Weinfurter, Anton Zeilinger, Alexander V. Sergienko, and Yanhua Shih, "New High-Intensity Source of Polarization-Entangled Photon Pairs," Phys. Rev. Lett. 75, 4337 (1995).
* (12) A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, "Entanglement of the orbital angular momentum states of photons," Nature 412, 313-316 (2001).
* (13) J. Brendel, N. Gisin, W. Tittel, and H. Zbinden, "Pulsed Energy-Time Entangled Twin-Photon Source for Quantum Communication," Phys. Rev. Lett. 82, 2594 (1999).
* (14) J. D. Franson, "Bell inequality for position and time," Phys. Rev. Lett. 62, 2205 (1989).
* (15) Robert Fickler, Radek Lapkiewicz, Sven Ramelow, and Anton Zeilinger, "Quantum entanglement of complex photon polarization patterns in vector beams," Phys. Rev. A 89, 060301(R) (2014).
* (16) Julio T. Barreiro, Nathan K. Langford, Nicholas A. Peters, and Paul G. Kwiat, "Generation of Hyperentangled Photon Pairs," Phys. Rev. Lett. 95, 260501 (2005).
* (17) Leonardo Neves, Gustavo Lima, Aldo Delgado, and Carlos Saavedra, \"Hybrid photonic entanglement: Realization, characterization, and applications,\" Phys. Rev. A 80, 042322 (2009).
* (18) Robert Fickler, Radek Lapkiewicz, William N. Plick, Mario Krenn, Christoph Schaeff, Sven Ramelow, and Anton Zeilinger, "Quantum Entanglement of High Angular Momenta," Science 338, 640-644 (2012).
* (19) Anand Kumar Jha, Girish S. Agarwal, and Robert W. Boyd, "Supersensitive measurement of angular displacements using entangled photons," Phys. Rev. A 83, 053829 (2011).
* (20) Robert Fickler, Mario Krenn, Radek Lapkiewicz, Sven Ramelow, and Anton Zeilinger, "Real-Time Imaging of Quantum Entanglement," Sci. Rep. 3, 1914 (2013).
* (21) Eleonora Nagali, and Fabio Sciarrino, "Generation of hybrid polarization-orbital angular momentum entangled states," Opt. Express 18, 18243-18248 (2010).
* (22) R. Ramirez-Alarcon, H. Cruz-Ramirez, and A. B. U'Ren, "Crystal length effects on the angular spectrum of spontaneous parametric down conversion photon pairs," Laser Phys. 23(5), 055204 (2013).
* (23) M. V. Jabir, N. Apurv Chaitanya, A. Aadhi, and G. K. Samanta, "Generation of "perfect" vortex of variable size and its effect in angular spectrum of the down-converted photons," Sci. Rep. 6, 21877 (2016).
* (24) V. Vicuna-Hernandez, H. Cruz-Ramirez, R. Ramirez-Alarcon, and A. B. U'Ren, "Observation of non-diffracting behavior at the single-photon level," Opt. Express 20, 29761 (2012).
* (25) V. Vicuna-Hernandez, H. Cruz-Ramirez, R. Ramirez-Alarcon, and A. B. U'Ren, "Classical to quantum transfer of optical vortices," Opt. Express 22, 20027 (2014).
* (26) C. H. Monken, P. H. Souto Ribeiro, and S. Padua, "Transfer of angular spectrum and image formation in spontaneous parametric down-conversion," Phys. Rev. A 57, 3123 (1998).
* (27) Q. Zhan, "Cylindrical vector beams: from mathematical concepts to applications," Adv. Opt. Photon. 1, 1 (2009).
* (28) Chithrabhanu Perumangatt, Gangi Reddy Salla, Ali Anwar, A. Aadhi, Shashi Prabhakar, and R. P. Singh, "Scattering of non-separable states of light," Opt. Commun. 335, 301-305 (2015).
* (29) H. Di Lorenzo Pires, H. C. B. Florijn, and M. P. van Exter, "Measurement of the Spiral Spectrum of Entangled Two-Photon States," Phys. Rev. Lett. 104, 020505 (2010).
* (30) C. V. S. Borges, M. Hor-Meyll, J. A. O. Huguenin, and A. Z. Khoury, "Bell-like inequality for the spin-orbit separability of a laser beam," Phys. Rev. A 82, 033833 (2010).
* (31) N. Apurv Chaitanya, M. V. Jabir, and G. K. Samanta, "Efficient nonlinear generation of high power, higher order, ultrafast "perfect" vortices in green," Opt. Lett. 41, 1348-1351 (2016).
* (32) Hiroki Takesue, Kyo Inoue, Osamu Tadanaga, Yoshiki Nishida, and Masaki Asobe, "Generation of pulsed polarization-entangled photon pairs in a 1.55-um band with a periodically poled lithium niobate waveguide and an orthogonal polarization delay circuit," Opt. Lett. 30, 293-295 (2005).
* (33) Daniel F. V. James, Paul G. Kwiat, William J. Munro, and Andrew G. White, "Measurement of qubits," Phys. Rev. A 64, 052312 (2001).
* (34) Apurv Chaitanya N., A. Aadhi, M. V. Jabir, and G. K. Samanta, "Frequency-doubling characteristics of high-power, ultrafast vortex beams," Opt. Lett. 40, 2614-2617 (2015).
* (35) Y. Jeronimo-Moreno and R. Jauregui, \"On-demand generation of propagation-invariant photons with orbital angular momentum,\" Phys. Rev. A 90, 013833 (2014).
* (36) Clara I. Osorio, Gabriel Molina-Terriza, and Juan P. Torres, "Correlations in orbital angular momentum of spatially entangled paired photons generated in parametric down-conversion," Phys. Rev. A 77, 015810 (2008).
* (37) Zhi-Yuan Zhou, Shi-Long Liu, Yan Li, Dong-Sheng Ding, Wei Zhang, Shuai Shi, Ming-Xin Dong, Bao-Sen Shi, and Guang-Can Guo, \"Orbital Angular Momentum-Entanglement Frequency Transducer,\" Phys. Rev. Lett. 117, 103601 (2016). | Hybrid entangled states, having entanglement between different degrees-of-freedom (DoF) of a particle pair, are of great interest for quantum information science and communication protocols. Among different DoFs, the hybrid entangled states encoded with polarization and orbital angular momentum (OAM) allow the generation of qubit-qudit entangled states, macroscopic entanglement with very high quanta of 0AM and improvement in angular resolution in remote sensing. Till date, such hybrid entangled states are generated by using a high-fidelity polarization entangled state and subsequent imprinting of chosen amount of OAM using suitable mode converters such as spatial light modulator in complicated experimental schemes. Given that the entangled sources have feeble number of photons, loss of photons during imprinting of OAM using diffractive optical elements limits the use of such hybrid state for practical applications. Here we report, on a simple experimental scheme to generate hybrid entangled state in polarization and OAM through direct transfer of classical non-separable state of the pump beam in parametric down conversion process. As a proof of principle, using local non-separable pump state of OAM mode _l_=3, we have produced quantum hybrid entangled state with entanglement witness parameter of \\(\\mathbf{\\hat{W}}\\)= 1.25\\(\\pm\\)0.03 violating by 8 standard deviation. The generic scheme can be used to produce hybrid entangled state between two photons differing by any quantum number through proper choice of non-separable state of the pump beam. | Give a concise overview of the text below. | 322 |
# Locus: LiDAR-based Place Recognition using Spatiotemporal Higher-Order Pooling
Kavisha Vidanapathirana\\({}^{1,2}\\), Peyman Moghadam\\({}^{1,2}\\), Ben Harwood\\({}^{1}\\), Muming Zhao\\({}^{1}\\),
Sridha Sridharan\\({}^{2}\\), Clinton Fookes\\({}^{2}\\)
\\({}^{1}\\) Kavisha Vidanapathirana, Peyman Moghadam, Ben Harwood, Muming Zhao are with the Robotics and Autonomous Systems Group, Data61, CSIRO, Brisbane, QLD 4069, Australia. E-mails: _firstname.lastname_@data61.csiro.au. \\({}^{2}\\) Kavisha Vidanapathirana, Peyman Moghadam, Sridha Sridharan, Clinton Fookes are with the School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Brisbane, Australia. E-mails: {kavisha.vidanapathirana, peyman.moghadam, s.sridharan, c.fookes}@qut.edu.au
## I Introduction
Place Recognition (PR) is an essential capability required by autonomous robots and driverless cars, which enables the recognition of previously visited places under changing viewpoints and environmental conditions. Place recognition is crucial for various applications such as loop closure detection for large-scale, global data association in Simultaneous Localization and Mapping (SLAM) or metric localization within known maps [1, 2].
In this paper, we consider the problem of place recognition based on 3D LiDAR point clouds. The majority of the current state-of-the-art LiDAR-based place recognition methods extract representations of 3D point clouds based on either local or global descriptors. Local descriptors such as keypoint features [3] often suffer from low repeatability under noise and minor changes in the environment. The demand for extracting discriminative and generalizable global descriptors from point clouds has led to the development of several different techniques [4, 5, 6]. However, the majority of global descriptors are not robust to viewpoint variations, occlusions and dynamic scenes [7]. A potential reason for this fragility is that these methods are inherently not capable of capturing topological and temporal interactions between different components in the scene. Such auxiliary information is vital for the construction of robust and discriminative representations since a higher-level understanding is needed to distinguish between different scenes of similar structure.
To this end, we propose a novel place recognition method (named _Locus_) which effectively exploits multi-level features for global place description using 3D LiDAR data. We define multiple levels of feature representations for each point cloud frame. A point cloud frame is defined as the points accumulated from a sweep of the LiDAR [8]. First, segment features are extracted to encode structural information of each point cloud frame. The segment-based representation leverages the advantages of both local and global representations while not suffering from their individual drawbacks. Compared to local keypoint-based representations, segments are more descriptive and more robust to noise and changing environments. Next, spatial feature pooling encodes topological relationships between segments within a point cloud frame. Capturing topological relationships between components of a scene enables discrimination between scenes of similar composition but different arrangement.
Finally, temporal feature pooling encodes co-occurrences of segments in the past \\(k_{t}\\) point cloud frames providing robustness to sensor motion and dynamic objects. Once multi-level features are extracted, second-order pooling aggregates information of local features over a point cloud frame to form a holistic representation for place recognition. Second-order pooling captures the multiplicative interactions between the multi-level features and outputs a fixed-length descriptor which is invariant to the number and permutation of the input features. Furthermore, the fixed dimension of the global descriptor enables maintaining the computational complexity.
Our main contributions are summarised as follows:
* We introduce multi-level features which encode structural appearance, topological relationships and temporal correspondences related to components in a scene.
* We formulate the generation of a global descriptor which encodes these multi-level features into a single viewpoint invariant representation using second-order pooling and demonstrate how these multi-level features contribute to place recognition performance.
* Our proposed method (_Locus_) outperforms the state-of-the-art place recognition methods on the KITTI dataset.
* We quantitatively evaluate the robustness of our _Locus_ method on a variety of challenging scenarios such as viewpoint changes and occlusion.
## II Related Work
### _Point cloud representation and descriptor generation_
To address the challenge of place recognition, the scene-level point cloud is often encoded in three different ways; a set of local descriptors, a single global descriptor or a set of object/segment descriptors. Local descriptor based methods first detect a set of keypoints in the point cloud and then form local descriptors by encoding information in each keypoint neighbourhood [3, 9]. Local descriptors and keypoint detection suffer from low repeatability in noisy point clouds and changing environments.
Global descriptors aim to describe the entire point cloud by a single vector representation. M2DP [4] generates a global descriptor by projecting the points into multiple 2D planes and calculating density signatures. PointNetVLAD [5] extracts a global descriptor using an end-to-end process based on PointNet and NetVLAD [10]. PointNetVLAD ignores the spatial distribution of features and hence lacks descriptive power. LPD-Net [11] addresses the limitation of PointNetVLAD by using adaptive local feature extraction along with a graph-based neighborhood aggregation module. Recently, DH3D [12] proposed a network which learns both the keypoint detection and local description and generates a global descriptor using NetVLAD. The aforementioned global descriptors do not demonstrate rotational-invariance and often fail upon reverse revisits.
The current state-of-the-art consists of several rotation-invariant global descriptors. ScanContext [6] records the maximum height of points in a 2D grid-map of the scene and computes pairwise descriptor distances by comparing distances to all column shifted variants of the query descriptor to achieve rotational-invariance. Intensity ScanContext [13] extends ScanContext by including the intensity return of the LiDAR sensor. The ScanContext and its variants are not capable of capturing scene composition or topological relationships and rely on expensive distance calculation to ensure rotational-invariance.
Recently, Kong _et al._[14] represented point clouds as a semantic graph and recognize places using a graph similarity network. Their network is capable of capturing topological and semantic information from the point cloud and also achieves rotational-invariance along with state-of-the-art performance. However, due to the use of semantic segmentation, their method suffers from the following two bottlenecks. First, it is dependent on the existence of pre-defined semantic classes in the test dataset (_i.e._, the domain gap problem). Second, since each segment is only represented by its class label, the method is not capable of differentiating between two segments of the same class and thus loses valuable information related to intra-class variations.
Euclidean segment-based representations [7] are less affected by the aforementioned drawbacks while still being able to capture topological and semantic information of the components in a scene. SegMatch [7] and SegMap [15] address the challenges of local and global descriptors by constructing a global map along the trajectory and performing recognition via segment-wise kNN retrieval on this map followed by a geometric consistency check. In this paper, we also leverage segment descriptors for point cloud encoding, but we avoid the construction of a global map and instead treat place recognition as a retrieval problem similar to recent LiDAR-based place recognition research [5, 6, 13, 14]. In this sense, we avoid segment-wise kNN search and construct a global descriptor through the aggregation of multi-level segment features.
### _Feature aggregation_
Bag-of-Words (BoW) [16] and its higher-order variants (Vector of Locally Aggregated Descriptors (VLAD) [17], NetVLAD [10], Fisher Vector (FV) [18]) are popular methods in many retrieval tasks which construct global descriptors by aggregating local descriptors using zeroth, first or second order statistics respectively. These codebook-based aggregation methods require a preprocessing step to learn the codebook (a standard BoW, cluster centers in VLAD
Fig. 1: The overall framework of the proposed _Locus_ method. Segments extracted from a LiDAR point cloud frame are described using two complementary sets of features. One feature describes the structural appearance of a segment while the other encodes topological and temporal information related to a segment. The two sets of features are aggregated using second-order pooling (O2P) followed by a Power-Euclidean transform (PE) to obtain a global descriptor of the point cloud.
and Gaussian Mixture Models (GMM) in FV) and do not transfer well to unseen environments.
Other higher-order aggregation methods such as Second Order Pooling (O2P) [19] do not require such pre-processing steps and hence can be used for online place recognition in unseen environments. They also tend to be more efficient at run-time since the description stage does not require finding the nearest codebook word for each feature. Additionally, O2P methods allow the aggregation of complementary features [20] while maintaining a fixed-length descriptor without violating permutation invariance. In this paper, we propose to use second-order pooling to aggregate multiple levels of feature representations (structural, topological and temporal) into a fixed-length global descriptor for place recognition.
### _Incorporating temporal information_
In place recognition, temporal information has been used almost exclusively in the retrieval stage. In visual place recognition (VPR), SeqSLAM [21] incorporated sequential information by comparing feature similarity over time and demonstrated dramatic performance improvement when dealing with changing appearance. Lynen _et al._[22] modeled sequential retrieval as a probability density estimation in votes versus travel distance space. More recently, SeqLPD [23] extended LPD-Net with SeqSLAM for LiDAR-based place recognition. Our method differs from these sequential retrieval methods by incorporating temporal information in the place description stage instead. Recently, in VPR, Delta descriptors [24] outlined how visual sequential-representation provides inherent robustness to variations in camera motion and environmental changes.
## III Method
This section describes our proposed method (named _Locus_) for LiDAR-based place description including the segment based representation of the point cloud, the generation of the multi-level features and their second-order aggregation (see Fig. 1).
### _Segment-based representation_
Segments are defined as Euclidean clusters of points in the point cloud representation. Our segment extraction is performed similarly to the SegMatch implementation [7], by first removing the ground plane and then extracting Euclidean clusters of points. The maximum Euclidean distance between two neighbouring points in the same segment is set to \\(0.2m\\), and the segments contain a minimum of 100 and a maximum of 15000 points. Thus, each input point cloud \\(P\\in\\mathbb{R}^{N\\times 3}\\) is represented by a set of segments \\(S=\\{s_{1},..,s_{m}\\}\\), where \\(s_{i}\\in\\mathbb{R}^{N_{i}\\times 3}\\), and the number of segments \\(m\\) varies depending on the environment and range of the point cloud.
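For illustration, a minimal Python sketch of this Euclidean clustering step is given below. It is not the SegMatch implementation itself; the function name and the policy of discarding clusters outside the size range are our assumptions, and ground removal is assumed to have been performed already.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_segments(points, max_dist=0.2, min_pts=100, max_pts=15000):
    """Region-growing Euclidean clustering: two points share a segment if
    they are connected by a chain of neighbours closer than max_dist.
    Clusters outside [min_pts, max_pts] are discarded (our reading of the
    size limits quoted above)."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    segments = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        frontier, cluster = [seed], []
        unvisited[seed] = False
        while frontier:                       # grow the cluster from the seed
            idx = frontier.pop()
            cluster.append(idx)
            for nb in tree.query_ball_point(points[idx], r=max_dist):
                if unvisited[nb]:
                    unvisited[nb] = False
                    frontier.append(nb)
        if min_pts <= len(cluster) <= max_pts:
            segments.append(points[np.asarray(cluster)])
    return segments
```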
For each segment, a compact deep feature which encodes the structural appearance of the segment is obtained using the _SegMap-CNN_ network proposed in [15]. The network represents each segment in a voxel grid of 32x32x16 of which the voxel sizes (0.1m by default) are scaled in order to fit larger segments. The description network consists of three 3D convolutional layers plus max-pool layers followed by two fully connected layers. For a given set of \\(m\\) segments \\(S\\), it outputs a set of compact features \\(F_{a}=\\{f_{a_{1}},..,f_{a_{m}}\\}\\) (where \\(f_{a_{i}}=f_{a}(s_{i})\\in\\mathbb{R}^{d},d=64\\)) which discriminates segments based on structural appearance.
### _Spatiotemporal Pooling_
Incorporating topological and temporal information in place description has many advantages when dealing with changing environments and varying sensor motion. With the aim of encoding this information, we compute a complementary set of features \\(F_{b}\\in\\mathbb{R}^{m\\times d}\\) for each point cloud frame \\(P_{n}\\), in addition to the structural appearance features \\(F_{a}\\in\\mathbb{R}^{m\\times d}\\) described in section III-A. This is achieved via two stages of feature pooling: spatial and temporal.
Feature pooling computes a new feature for a segment by taking the weighted average of the features of all related segments. Encoding topological information is achieved via feature pooling based on spatial relationships within a point cloud frame. Temporal information is encoded via pooling features based on temporal correspondences across multiple point cloud frames. For a query segment \\(s_{i}^{P_{n}}\\) from the current point cloud frame \\(P_{n}\\), we find topological relationships and temporal correspondences with all other segments \\(s_{j}^{P_{l}}\\in\\mathbb{S}_{n,k_{t}}\\) from the current frame and \\(k_{t}\\) frames into the immediate past,
\\[\\mathbb{S}_{n,k_{t}}=\\{S^{P_{n}},..,S^{P_{n-k_{t}}}\\},\\quad S^{P_{l}}=\\{s_{1}^{P_{l}},..,s_{m_{l}}^{P_{l}}\\}, \\tag{1}\\]
If a segment \\(s_{j}^{P_{l}}\\) is spatially or temporally related to the query segment \\(s_{i}^{P_{n}}\\), its structural appearance feature \\(f_{a}(s_{j}^{P_{l}})\\) is included in the feature pooling of the query segment.
#### Iii-B1 **Spatial Feature Pooling**
We define spatial relationships between segments through a directed graph \\(\\mathcal{G}=(\\mathcal{V},\\mathcal{E})\\). Vertices \\(\\mathcal{V}=\\{1,..,m\\}\\) represent the segments \\(S=\\{s_{1},..,s_{m}\\}\\) in the current point cloud frame and edges \\(\\mathcal{E}\\subseteq\\mathcal{V}\\times\\mathcal{V}\\) represent which segments relate to which other segments. \\(\\mathcal{G}\\) is constructed as a kNN graph where each segment is connected to its \\(k_{s}\\) nearest neighbours (\\(k_{s}=5\\)) and the distance between segments is calculated using the minimum translational distance (MTD) [25] between the convex hulls of the segments. We use the QuickHull algorithm [26] to compute convex hulls. Since the segment extraction process in Section III-A guarantees that segments do not overlap, the MTD (\\(\\mathcal{D}\\)) can be computed as follows,
\\[\\mathcal{D}(s_{i},s_{j})=\\min\\{\\|p_{i_{x}}-p_{j_{y}}\\|:\\forall\\ p_{i_{x}}\\in\\hat{s}_{i},p_{j_{y}}\\in\\hat{s}_{j}\\} \\tag{2}\\]
where \\(\\|p_{i_{x}}-p_{j_{y}}\\|\\) represents the Euclidean distance between points \\(p_{i_{x}},p_{j_{y}}\\in\\mathbb{R}^{3}\\) and \\(\\hat{s}=\\texttt{QuickHull}(s)\\). The spatial feature pooling for segment \\(s_{i}\\) is then carried out as,
\\[\\Phi(s_{i})=\\sum_{j:(i,j)\\in\\mathcal{E}}\\phi(i,j)f_{a}(s_{j}), \\tag{3}\\]
\\[\\phi(i,j)=\\texttt{softmax}\\{\\mathcal{D}(s_{i},s_{j})\\}=\\frac{exp(-\\beta\\cdot\\mathcal{D}(s_{i},s_{j}))}{\\sum_{k:(i,k)\\in\\mathcal{E}}exp(-\\beta\\cdot\\mathcal{D}(s_{i},s_{k}))}, \\tag{4}\\]
where \\(\\beta=0.1\\) is a smoothing factor. The spatially-pooled feature \\(\\Phi(s_{i})\\) is essentially a weighted average of the structural-appearance features (\\(f_{a}\\)) of the 5 closest segments to segment \\(s_{i}\\). This captures information on the immediate neighbourhood of \\(s_{i}\\) and thus contributes towards encoding topological information in the scene.
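A compact sketch of this spatial pooling (Eqs. 2-4) is given below. The MTD of Eq. 2 is computed as the minimum pairwise distance between convex-hull vertices, which is valid here since segments do not overlap; the sketch assumes each frame contains more than \\(k_{s}\\) segments, and function names are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def spatial_pooling(segments, F_a, k_s=5, beta=0.1):
    """Spatially pooled features Phi of Eq. 3, with softmax weights from
    Eq. 4 computed over MTD approximated by hull-vertex distances."""
    hulls = [seg[ConvexHull(seg).vertices] for seg in segments]
    m = len(segments)
    D = np.full((m, m), np.inf)               # inf on the diagonal excludes self
    for i in range(m):
        tree = cKDTree(hulls[i])
        for j in range(i + 1, m):
            d = tree.query(hulls[j])[0].min() # min vertex-to-vertex distance
            D[i, j] = D[j, i] = d
    Phi = np.zeros_like(F_a)
    for i in range(m):
        nbrs = np.argsort(D[i])[:k_s]         # the k_s nearest segments
        w = np.exp(-beta * D[i, nbrs])
        w /= w.sum()                          # softmax over the neighbourhood
        Phi[i] = w @ F_a[nbrs]
    return Phi
```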
#### Iii-B2 **Temporal Feature Pooling**
Temporal relationships are defined by segment correspondences between frames. The segment \\(s_{i}^{P_{n}}\\) in the current point cloud frame \\(P_{n}\\) will only relate to its corresponding segment in each of the \\(k_{t}\\) previous frames (\\(k_{t}=3\\)). Segment correspondence indices \\(S_{c}\\) are calculated as in Algorithm 1 iteratively \\(k_{t}\\) times.
First, \\(kNN\\) finds the indices of the k nearest neighbours of \\(f_{a_{i}}\\) from the set of features \\(F_{a}^{P_{l-1}}\\) of the previous frame. Next, \\(rNN\\) finds the indices of nearest-neighbour centroids from the previous frame within a radius \\(r=1m\\). To increase accuracy, the \\(rNN\\) search takes into account the sensor's relative pose across frames, represented by the homogeneous transformation \\({}^{l-1}T_{l}\\). Finally, common elements in both nearest neighbour sets are found using the intersection function, and the function arg_min finds the index of the segment which minimises both feature-space and Euclidean-space distance.
For the selected segment \\(s_{i}^{P_{n}}\\) in the current frame, the set of corresponding segments can be obtained from \\(S_{c}\\) as \\(\\{s_{i}^{P_{n}},..,s_{i}^{P_{n-k_{c}}}\\}=\\{\\mathbb{S}_{n,k_{t}}(j):j\\in S_{c}\\}\\). We note that \\(k_{c}\\leq k_{t}\\) since correspondences can be lost between frames (when a segment which minimises both feature space and Euclidean-space distance is not available in the previous frame). The temporal pooling for \\(s_{i}^{P_{n}}\\) is then carried out as,
\\[\\Psi(s_{i})=\\sum_{j\\in S_{c}}\\psi(i,j)f_{a}(\\tilde{s_{j}}),\\quad\\tilde{s_{j}}= \\mathbb{S}_{n,k_{t}}(j), \\tag{5}\\]
where \\(\\psi(i,j)\\) is calculated similarly to \\(\\phi(i,j)\\) in Eq. 4, with \\(\\mathcal{D}(s_{i},s_{j})\\) replaced by \\(\\|f_{a}(s_{i})-f_{a}(\\tilde{s_{j}})\\|\\) and \\(k\\) sampled from \\(S_{c}\\). The aggregation of features across multiple sequential frames essentially magnifies the weight of features corresponding to highly repeatable segments (segments which are extracted at every frame in the sequence and mapped to a similar point in the \\(f_{a}\\) feature space each time) and thus inherently down-weights non-repeatable segments.
The final spatiotemporal feature \\(f_{b}\\) is obtained as the average of a spatially-pooled feature \\(\\Phi(s_{i})\\) and a temporally-pooled feature \\(\\Psi(s_{i})\\),
\\[f_{b}(s_{i})=(\\Phi(s_{i})+\\Psi(s_{i}))/2. \\tag{6}\\]
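Since Algorithm 1 is described above only in prose, the sketch below gives one plausible reading of the correspondence search and the temporal pooling of Eq. 5. The kNN size \\(k\\), the use of segment centroids for the \\(rNN\\) check, and the sum of feature-space and Euclidean distances inside arg_min are our assumptions (the text states only that both distances are minimised); centroids of past frames are assumed already transformed into the current frame via \\({}^{l-1}T_{l}\\).

```python
import numpy as np
from scipy.spatial import cKDTree

def temporal_pooling(cur_feats, cur_cents, past, k=8, r=1.0, beta=0.1):
    """Temporally pooled features Psi. `past` is a list of
    (features, centroids) pairs for the k_t previous frames, newest first."""
    Psi = np.zeros_like(cur_feats)
    for i in range(len(cur_feats)):
        f_i, c_i = cur_feats[i], cur_cents[i]
        matches = [f_i]                       # the query segment itself
        f_q, c_q = f_i, c_i
        for feats, cents in past:             # walk back up to k_t frames
            knn = np.argsort(np.linalg.norm(feats - f_q, axis=1))[:k]
            rnn = set(cKDTree(cents).query_ball_point(c_q, r=r))
            common = [j for j in knn if j in rnn]
            if not common:
                break                         # correspondence lost (k_c < k_t)
            j = min(common, key=lambda jj: np.linalg.norm(feats[jj] - f_q)
                                           + np.linalg.norm(cents[jj] - c_q))
            matches.append(feats[j])
            f_q, c_q = feats[j], cents[j]     # chain through the matched segment
        dists = np.array([np.linalg.norm(f_i - f) for f in matches])
        w = np.exp(-beta * dists)
        w /= w.sum()                          # softmax weights, as in Eq. 4
        Psi[i] = w @ np.stack(matches)        # Eq. 5
    return Psi
```

The final spatiotemporal feature of Eq. 6 is then simply `(Phi + Psi) / 2` for each segment.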
### _Second-order pooling_
Given a set of segments \\(S=\\{s_{1},..,s_{m}\\}\\) and two complementary sets of features \\(F_{a}\\in\\mathbb{R}^{m\\times d}\\) and \\(F_{b}\\in\\mathbb{R}^{m\\times d}\\) the second-order pooling \\(F^{O_{2}}\\) of the features is defined as,
\\[F^{O_{2}}=\\{F_{xy}^{O_{2}}\\},\\quad F_{xy}^{O_{2}}=\\max_{s\\in S}\\ f_{xy}^{o_{2} }(s), \\tag{7}\\]
where \\(F^{O_{2}}\\) is a matrix with elements \\(F_{xy}^{O_{2}}(1\\leq x,y\\leq d)\\) and \\(f^{o_{2}}(s)=f_{a}(s)f_{b}(s)^{T}\\in\\mathbb{R}^{d\\times d}\\) is the outer product of the two complementary features of segment \\(s\\) (\\(f_{a}(s),f_{b}(s)\\in\\mathbb{R}^{d}\\)). This amounts to taking the element-wise maximum of the second-order features of all segments in the scene.
In order to make the scene descriptor matrix \\(F^{O_{2}}\\) more discriminative, it is decomposed via singular value decomposition as \\(F^{O_{2}}=U\\lambda V\\) and transformed non-linearly using the Power-Euclidean (PE) transform [27][28] into \\(F_{\\alpha}^{O_{2}}\\) by raising each of its singular values to the power \\(\\alpha\\), as follows,
\\[F_{\\alpha}^{O_{2}}=U\\hat{\\lambda}V,\\quad\\hat{\\lambda}=\\textit{diag}(\\lambda_{ 1,1}^{\\alpha},..,\\lambda_{d,d}^{\\alpha}), \\tag{8}\\]
where \\(\\alpha=0.5\\). The matrix \\(F_{\\alpha}^{O_{2}}\\) is flattened and normalized to obtain the final global descriptor vector \\(\\textbf{g}\\in\\mathbb{R}^{d^{2}}\\).
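The whole aggregation stage (Eqs. 7-8) reduces to a few lines of linear algebra; a minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def locus_descriptor(F_a, F_b, alpha=0.5):
    """Global descriptor g from Eqs. 7-8: element-wise max over segments of
    the outer products f_a f_b^T, Power-Euclidean transform via SVD, then
    flattening and L2 normalisation."""
    outer = np.einsum('md,me->mde', F_a, F_b)  # one d x d matrix per segment
    F_O2 = outer.max(axis=0)                   # Eq. 7: element-wise maximum
    U, s, Vt = np.linalg.svd(F_O2)
    F_alpha = (U * s**alpha) @ Vt              # Eq. 8: singular values ^ alpha
    g = F_alpha.ravel()
    return g / np.linalg.norm(g)               # fixed-length d^2 descriptor
```

Because **g** is L2-normalised, the cosine similarity used at retrieval time reduces to a dot product between descriptors.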
The aggregation of multi-level features using higher-order pooling has multiple advantages for segment-based place description. First, it allows the encoding of complementary features for each segment thus enabling the incorporation of structural-appearance information along with topological and temporal information. Second, even though different point clouds will consist of varying number of segments \\(m\\) of varying sizes \\(N_{i}\\), the output dimension of the aggregated feature is fixed and therefore computational time can be greatly reduced. Finally, the aggregated feature is invariant to the permutation of its inputs, which results in a viewpoint invariant global descriptor for place recognition.
## IV Experimental Setup
### _Evaluation criteria_
We evaluate performance using the Precision-Recall curve and its scalar metric: the maximum \\(F1\\) score (\\(F1_{max}\\)). Additionally, we use the Extended Precision (\\(EP\\)) metric proposed in [29] as it highlights the maximum recall at 100% precision which is vital for robustness in place recognition. For Precision (P) and Recall (R), the \\(F1_{max}\\) and \\(EP\\) scores are defined as,
\\[F1_{max}=\\max_{\\tau}\\ 2\\frac{P_{\\tau}\\cdot R_{\\tau}}{P_{\\tau}+R_{\\tau}},\\quad EP=\\frac{P_{R0}+R_{P100}}{2}, \\tag{9}\\]
where \\(\\tau\\) is the threshold for positive prediction, \\(P_{R0}\\) is the Precision at minimum Recall, and \\(R_{P100}\\) is the maximum Recall at 100% Precision.
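Both scalar metrics follow directly from Eq. 9; a small sketch, assuming precision and recall have been evaluated over a sweep of the threshold \\(\\tau\\):

```python
import numpy as np

def f1max_and_ep(precision, recall):
    """F1max and Extended Precision (Eq. 9) from precision/recall arrays
    sampled over a sweep of the decision threshold tau."""
    p, r = np.asarray(precision), np.asarray(recall)
    f1max = np.max(2 * p * r / np.maximum(p + r, 1e-12))
    p_r0 = p[np.argmin(r)]                  # precision at minimum recall
    at_full_p = r[p >= 1.0]                 # recalls reached at 100% precision
    r_p100 = at_full_p.max() if at_full_p.size else 0.0
    return f1max, (p_r0 + r_p100) / 2.0
```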
Retrieval is performed based on the comparison of the cosine distance of the query descriptor with a database of descriptors of previously visited places. In line with the evaluation criteria of [14], previous entries adjacent to the query by less than \\(30s\\) time difference are excluded from the search to avoid matching to the same instance. The top-1 retrieval is considered a positive if the associated distance to the query is less than the test threshold (\\(\\tau\\)). A positive retrieval is considered a true-positive if it is less than 3m from the ground truth pose of the query and a false-positive if it is greater than 20m away, to maintain consistency with the evaluation in [14].
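This retrieval protocol can be summarised in a few lines; the sketch below is illustrative (the function name and the `None` convention for "no positive" are ours), with descriptors assumed L2-normalised:

```python
import numpy as np

def retrieve(query_g, query_t, db_g, db_t, tau, exclude_s=30.0):
    """Top-1 retrieval: cosine distance against all prior descriptors,
    excluding database entries within 30 s of the query; returns the best
    index, or None if its distance exceeds the test threshold tau."""
    valid = np.abs(np.asarray(db_t) - query_t) > exclude_s
    if not valid.any():
        return None
    dists = 1.0 - np.asarray(db_g)[valid] @ query_g   # cosine distance
    best = np.argmin(dists)
    idx = np.flatnonzero(valid)[best]
    return idx if dists[best] < tau else None
```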
### _Dataset_
Evaluation is performed on the KITTI odometry dataset [8] which consists of Velodyne HDL-64E LiDAR scans collected from a moving vehicle in multiple dynamic urban environments in Karlsruhe, Germany. In line with the most recent evaluation on this dataset [14], we evaluate on sequences 00, 02, 05, 06, 07, and 08.
## V Results
In this section, we demonstrate the contribution of each component in our proposed method. We also provide quantitative and qualitative results to validate our method in comparison to state-of-the-art methods and we quantitatively evaluate the robustness our proposed method against viewpoint changes and sensor occlusion.
### _Contribution of individual components_
We investigate the contributions of each component (ablation study) of our proposed method using \\(F1_{max}\\) and \\(EP\\) on KITTI sequence \\(08\\). First, we demonstrate the importance of the Power-Euclidean (PE) non-linear transform in our second-order pooling. For the generation of feature \\(f_{a}\\), we use the model trained on extracted segments from sequences 05 and 06 [15]. The most basic second-order pooling method (_O2P_) is the aggregation of second-order statistics of the same feature (Eq. 7 with \\(f_{b}=f_{a}\\)). This can be extended with the Power-Euclidean (PE) non-linear transform to get _O2P+PE_ (Eq. 8 with \\(f_{b}=f_{a}\\)). The _O2P+PE_ variant shows a dramatic increase of 44.7% in \\(F1_{max}\\) (from 0.550 to 0.796) and an improvement of 32.4% in \\(EP\\) (from 0.564 to 0.739) when including the non-linear transform. This demonstrates the importance of PE for obtaining discriminative descriptors.
The rest of the evaluation uses the _O2P+PE_ feature pooling and compares the contribution of each input feature type. The input \\(f_{a}\\) is the structural appearance feature and the input feature \\(f_{b}\\) is varied across all multi-level feature types. Table I shows the respective performance for each of the following cases: \\(f_{b}=f_{a}\\) (structural), \\(f_{b}=\\Phi\\) (structural \\(\\otimes\\) spatial), \\(f_{b}=\\Psi\\) (structural \\(\\otimes\\) temporal), and \\(f_{b}=(\\Phi+\\Psi)/2\\) (structural \\(\\otimes\\) spatiotemporal).
Table I demonstrates that pooling information from complementary features improves the place recognition performance. Furthermore, incorporating topological or temporal information individually makes the final descriptor more discriminative and improves \\(F1_{max}\\) by 12.4% and 13.1% respectively. The best performance of our proposed method is achieved when incorporating both topological and temporal information along with the structural appearance (+ 17% \\(F1_{max}\\)).
### _Comparison to State-of-the-Art_
In this section, we compare the results of our method against other state-of-the-art results. Table II shows a summary comparison of \\(F1_{max}\\) scores. _Locus_ outperforms all other methods with the highest mean \\(F1_{max}\\) score (0.942). Our method sets a new state-of-the-art place recognition performance on the KITTI dataset. _Locus_ achieves a significant improvement on the challenging sequence \\(08\\), which consists of a number of reverse or orthogonal revisits, thereby demonstrating robustness to viewpoint variations.
The Precision-Recall curves of _Locus_ across all sequences are presented in Fig. 2. The curves highlight that the performance of _Locus_ on sequence \\(02\\) is significantly poorer than on the other sequences. As depicted in Fig. 3, sequence \\(02\\) has a long stretch of road with many false negatives. This road consists of non-descriptive point clouds where the segment extraction process fails to obtain a high number of descriptive segments.
The qualitative visualization of our method at \\(R_{P100}\\) (_i.e._, maximum Recall at 100% Precision) is shown in Fig. 3. The figure shows the locations of true positive (red) and false negative (black) retrievals along the trajectory. This demonstrates that our method is capable of obtaining accurate retrievals in a wide range of environment and revisit types. We observe that most failure cases (black dots) occur
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline Feature Types & \\(EP\\) & \\(F1_{max}\\) \\\\ \\hline structural \\(\\otimes\\) structural & 0.739 & 0.796 \\\\ structural \\(\\otimes\\) spatial & 0.847 & 0.895 \\\\ structural \\(\\otimes\\) temporal & 0.865 & 0.900 \\\\
**structural \\(\\otimes\\) spatiotemporal** & **0.912** & **0.931** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE I: A comparison of _Locus_ performance on KITTI sequence 08 when different input feature types are used.
Fig. 2: Precision-Recall curves of our proposed _Locus_ method on the KITTI dataset.
near intersections.
### _Robustness tests_
We evaluate the robustness of our method against the state-of-the-art on a variety of challenging scenarios which simulate real-word adverse conditions. We simulate these adverse conditions by introducing a set of distortions to the input point cloud. Performance is compared based on the \\(F1_{max}\\) metric on the same KITTI sequences.
#### V-C1 **Viewpoint changes**
We simulate viewpoint changes by rotating point clouds about the z-axis with a random angle. The change in mean \\(F1_{max}\\) over all sequences in this test compared to the normal results was -0.007 for our method. The respective change for ScanContext was +0.002 and for SemGraph-RN it was -0.003, as presented in [14]. _Locus_, ScanContext, and SemGraph-RN show less than 1% difference, implying that random rotation of the input point cloud does not affect the final performance of these methods. Other methods such as M2DP and PointNetVLAD have been shown not to be rotation invariant, as mentioned in [14].
#### V-C2 **Occlusion**
This test aims to simulate occlusion of the LiDAR, where the field-of-view of the sensor can be greatly reduced due to nearby dynamic objects or self-occlusion. We extend the occlusion test in [14], which only considered occlusions of 30\\({}^{\\circ}\\), by evaluating performance at various occlusion angles \\(\\theta_{occ}\\) from 0\\({}^{\\circ}\\) up to 180\\({}^{\\circ}\\). The occlusion test consists of removing all points which lie within a sector of a randomly selected azimuth. We compare our method with ScanContext (SC) [6], which achieved the second-best overall performance in Table II. The ScanContext (SC) mean \\(F1_{max}\\) performance drops by more than 20% at 45\\({}^{\\circ}\\) occlusion and around 50% at 90\\({}^{\\circ}\\) occlusion. Our method shows a small performance degradation of 3.2% at 45\\({}^{\\circ}\\) occlusion and 9.98% at 90\\({}^{\\circ}\\) occlusion. Our method outperforms ScanContext (SC) by a large margin of 45.2% mean \\(F1_{max}\\) at 90\\({}^{\\circ}\\) occlusion. Also note that in our method, the \\(F1_{max}\\) of sequences \\(00\\) and \\(05\\) remains above 80% even at occlusions of 180\\({}^{\\circ}\\), where 50% of the point cloud is removed. Results for all sequences are depicted in Figure 4.
## VI Conclusion
In this paper, we presented _Locus_, a novel LiDAR-based place recognition method for large-scale environments. We presented the advantages of scene representation via the aggregation of multi-level features related to components in a scene. We quantitatively showed how the inclusion of topological and temporal information in the description stage leads to an improvement in final place recognition performance. We formulated the generation of a global descriptor which incorporates all multi-level features without violating rotational-invariance. We validated our method through evaluation on the KITTI dataset where it surpassed the state-of-the art. Furthermore, we demonstrated the robustness of _Locus_ against viewpoint changes and occlusion.
\\begin{table}
\\begin{tabular}{c|c c c c c c|c} \\hline \\hline Methods & 00 & 02 & 05 & 06 & 07 & 08 & Mean \\\\ \\hline M2DP [4] & 0.836 & 0.781 & 0.772 & 0.896 & 0.861 & 0.169 & 0.719 \\\\ ScanContext [6] & 0.937 & 0.858 & 0.955 & **0.998** & 0.922 & 0.811 & 0.914 \\\\ PointNetVLAD [5] & 0.785 & 0.710 & 0.775 & 0.903 & 0.448 & 0.142 & 0.627 \\\\ PointNetVLAD* [5] & 0.882 & 0.791 & 0.734 & 0.953 & 0.767 & 0.129 & 0.709 \\\\ SemGraph-RN [14] & 0.960 & **0.859** & 0.897 & 0.944 & 0.984 & 0.783 & 0.904 \\\\ _Locus_ **(Ours)** & **0.983** & 0.762 & **0.981** & 0.992 & **1.0** & **0.931** & **0.942** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE II: \\(F1_{max}\\) scores on the KITTI dataset. Results of M2DP, ScanContext, PointNetVLAD, and SemGraph are as presented in [14] with the same evaluation criteria. PointNetVLAD* is the PointNetVLAD method retrained on KITTI [14].
Fig. 4: Robustness of _Locus_ to scan occlusion. Vertical blue line shows when the mean \\(F1_{max}\\) drops by 10%.
Fig. 3: Qualitative performance visualization of our method at \\(R_{P100}\\) (zero false positives) along the trajectory. Red: true positives, Black: false negatives, Blue: true negatives (not revisits).
## References
* IEEE International Conference on Robotics and Automation, pp. 1206-1213.
| Place Recognition enables the estimation of a globally consistent map and trajectory by providing non-local constraints in Simultaneous Localisation and Mapping (SLAM). This paper presents _Locus_, a novel place recognition method using 3D LiDAR point clouds in large-scale environments. We propose a method for extracting and encoding topological and temporal information related to components in a scene and demonstrate how the inclusion of this auxiliary information in place description leads to more robust and discriminative scene representations. Second-order pooling along with a non-linear transform is used to aggregate these multi-level features to generate a fixed-length global descriptor, which is invariant to the permutation of input features. The proposed method outperforms state-of-the-art methods on the KITTI dataset. Furthermore, _Locus_ is demonstrated to be robust across several challenging situations such as occlusions and viewpoint changes in 3D LiDAR point clouds. The open-source implementation is available at: [https://github.com/csiro-robotics/locus](https://github.com/csiro-robotics/locus) - | Condense the content of the following passage. | 220 |
# Effects of N/S Molar Ratio on Product Formation in Psychrophilic Autotrophic Biological Removal of Sulfide
Michal Sposob, Rune Bakke and Carlos Dinamarca
Department of Process, Energy and Environmental Technology, University College of Southeast Norway, Kjolenes Ring 56, 3918 Porsgrunn, Norway; [email protected] (R.B.); [email protected] (C.D.)
Received: 13 March 2017; Accepted: 27 June 2017; Published: 29 June 2017
## 1 Introduction
Nitrate (NO\\({}_{3}\\)\\({}^{-}\\)) and sulfide (H\\({}_{2}\\)S) are present in many kinds of wastewater. Their removal is necessary due to their negative environmental and economic impact, e.g., increased maintenance costs in anaerobic digesters and wastewater treatment plants. The presence of H\\({}_{2}\\)S can lead to corrosion, human toxicity, and biological process inhibition [1]. It has been reported that concentrations of dissolved HS\\({}^{-}\\) in the 100-800 mg/L range can inhibit anaerobic digestion [2]. Additionally, the presence of NO\\({}_{3}\\)\\({}^{-}\\) can inhibit the production of volatile fatty acids (VFAs), methanogens, and consequently methane production [3].
Due to the wide diversity of sulfur-reducing bacteria (SRB), the production of H\\({}_{2}\\)S can also occur under psychrophilic conditions [4]. The possibility of removing H\\({}_{2}\\)S under psychrophilic conditions by harvesting elemental sulfur (S\\({}^{0}\\)) out of the process line is therefore an interesting opportunity. Many waters and wastewaters are characterized by low temperatures, especially in cold climates and winter conditions (e.g., Nordic countries). Production of S\\({}^{0}\\) at low temperatures can become important since heating up to mesophilic conditions can be prohibitively expensive.
NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) can be removed simultaneously by sulfide oxidizing bacteria (SOB), where NO\\({}_{3}\\)\\({}^{-}\\) serves as an electron acceptor and HS\\({}^{-}\\) as an electron donor. Simultaneous removal of NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) has been studied frequently in auto- and heterotrophic conditions but to our knowledge, nothing was published on continuous flow EGSB at low temperatures and at different N/S ratios. The simultaneous presence of NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) in wastewaters is uncommon. Thus, in terms of applicability of the described process, typically NO3\\({}^{-}\\) needs to be added to remove HS\\({}^{-}\\) from contaminated water. The usage of NO3\\({}^{-}\\) as an electron acceptor for HS\\({}^{-}\\) removal can be more cost-effective than O2, which can also be used in biological HS\\({}^{-}\\) oxidation. NO3\\({}^{-}\\) has high solubility and can be added at lower costs than O2 [5].
The simultaneous biological removal of NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) can lead to different final products, in terms of HS\\({}^{-}\\) oxidation degree, depending on the relative molar ratio between NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) (N/S ratio), while NO\\({}_{3}\\)\\({}^{-}\\) is reduced to nitrogen gas (N\\({}_{2}\\)). Based on theoretical considerations, including both anabolism and catabolism, two key N/S ratios can be distinguished: 0.35 and 1.30 [6]. At N/S = 0.35 the main final product is S\\({}^{0}\\), whereas at 1.30 it is SO\\({}_{4}\\)\\({}^{2-}\\). N/S = 1.30 thus requires roughly four times more NO\\({}_{3}\\)\\({}^{-}\\) than N/S = 0.35, at which mainly S\\({}^{0}\\) is produced. Mixed product compositions occur at feed ratios between these two values [7]. Previously published batch and continuous flow experiments have focused on appropriate electron donors (reduced sulfur compounds), C/N/S ratios, reactor configurations, and/or pH conditions, mainly under mesophilic conditions [8; 9; 10; 11]. Psychrophilic conditions are rarely studied [8; 12; 13], but it has been reported that the removal of NO\\({}_{3}\\)\\({}^{-}\\) decreases at temperatures <15 \\({}^{\\circ}\\)C [14]. Efficient NO\\({}_{3}\\)\\({}^{-}\\) removal using thiosulfate (S\\({}_{2}\\)O\\({}_{3}\\)\\({}^{2-}\\)) as an electron donor has, however, been observed at 3 \\({}^{\\circ}\\)C [13], and efficient NO\\({}_{3}\\)\\({}^{-}\\) removal at 10 \\({}^{\\circ}\\)C with HS\\({}^{-}\\) as an electron donor has been reported [15].
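One simple reading of this mixed-product regime is a linear interpolation between the two key ratios: a feed ratio r is met by routing a fraction x of the sulfide through the SO\\({}_{4}\\)\\({}^{2-}\\)-producing stoichiometry such that 0.35(1 \\(-\\) x) + 1.30x = r. The sketch below implements this simplification (it ignores the detailed anabolic/catabolic split of [6], but reproduces the equal-split point N/S = 0.825 discussed in Section 3.3):

```python
def theoretical_split(ns_ratio):
    """Fraction of influent sulfide expected to end up as S0 and SO4^2-,
    linearly interpolating between the two key ratios of [6]:
    N/S = 0.35 (all S0) and N/S = 1.30 (all SO4^2-). A simplification,
    not the full anabolic/catabolic stoichiometry."""
    x_so4 = min(max((ns_ratio - 0.35) / (1.30 - 0.35), 0.0), 1.0)
    return 1.0 - x_so4, x_so4

for r in (0.35, 0.40, 0.60, 0.825, 1.30):  # fed ratios plus the equal-split point
    s0, so4 = theoretical_split(r)
    print(f"N/S = {r:.3f}: S0 {s0:.0%}, SO4^2- {so4:.0%}")
```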
The objective of this study is to evaluate the effects of different N/S ratios as a strategy to control sulfur product distribution in a continuous flow EGSB reactor at 10 \\({}^{\\circ}\\)C.
## 2 Materials and Methods
### Inoculum and Enrichment
The inoculum was taken from an up-flow anaerobic sludge blanket (UASB) methanogenic reactor treating pulp and paper industry wastewater at Norske Skog Saugbrugs, Halden, Norway. The EGSB reactor was inoculated with 0.25 L of sludge, which had a total solid content of 59.9 g/L with an 86% organic fraction. Imposed lithoautotrophic conditions caused no methane production while sulfur compounds were produced. The data set evaluated here is from an experiment carried out as a continuation study of temperature impact (temperature range 10-25 \\({}^{\\circ}\\)C) on sulfur products distribution at constant feed N/S ratio [15].
### Synthetic Wastewater
The EGSB reactor synthetic feed contained Na\\({}_{2}\\)S\\(\\cdot\\)9H\\({}_{2}\\)O (3.12 mM S/L) with NaHCO\\({}_{3}\\). Potassium phosphate was used as a buffer. Nitrate, which acted as the electron acceptor, was supplied at concentrations of 1.08, 1.25, 1.87, and 4.05 mM NO\\({}_{3}\\)\\({}^{-}\\)/L, giving N/S ratios of 0.35, 0.40, 0.60, and 1.30, respectively (Table 1). The nitrate feed also contained the following stock solutions: (A) NH\\({}_{4}\\)Cl (10 g/L), MgCl\\({}_{2}\\)\\(\\cdot\\)6H\\({}_{2}\\)O (10 g/L), CaCl\\({}_{2}\\)\\(\\cdot\\)2H\\({}_{2}\\)O (10 g/L); (B) K\\({}_{2}\\)HPO\\({}_{4}\\) (300 g/L); (C) MnSO\\({}_{4}\\)\\(\\cdot\\)H\\({}_{2}\\)O (0.04 g/L), FeSO\\({}_{4}\\)\\(\\cdot\\)7H\\({}_{2}\\)O (2.7 g/L), CuSO\\({}_{4}\\)\\(\\cdot\\)5H\\({}_{2}\\)O (0.055 g/L), NiCl\\({}_{2}\\)\\(\\cdot\\)6H\\({}_{2}\\)O (0.1 g/L), ZnSO\\({}_{4}\\)\\(\\cdot\\)7H\\({}_{2}\\)O (0.088 g/L), CoCl\\({}_{2}\\)\\(\\cdot\\)6H\\({}_{2}\\)O (0.05 g/L), H\\({}_{3}\\)BO\\({}_{3}\\) (0.05 g/L); (D) a 10-times concentrated vitamin solution [16]. HNO\\({}_{3}\\) and stock solutions A (10 mL/L), B (2 mL/L), C (2 mL/L), and D (1 mL/L) were dissolved in distilled water. The electron donor (Na\\({}_{2}\\)S\\(\\cdot\\)9H\\({}_{2}\\)O) and acceptor (HNO\\({}_{3}\\)) were fed from separate bottles to prevent contamination and reactions in the feed bottles (Figure 1).
\\begin{table}
\\begin{tabular}{c c c c} \\hline
**Time (Day)** & **N/S Ratio** & **NO\\({}_{3}\\)\\({}^{-}\\) (mM/L)** & **HS\\({}^{-}\\) (mM/L)** \\\\ \\hline
1–30 & 0.35 & 1.08 & 3.12 \\\\
31–44 & 0.40 & 1.25 & 3.12 \\\\
45–52 & 0.60 & 1.87 & 3.12 \\\\
53–60 & 1.30 & 4.05 & 3.12 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Feeding parameters.
### Experimental Setup
The 0.5 L effective volume laboratory-scale EGSB reactor was made of polycarbonate with an inner diameter of 32 mm and 620 mm effective height (Figure 1), and was equipped with a tape measure for visual sludge bed height monitoring. Reactor temperature was maintained constant at 10 \\(\\pm\\) 0.1 \\({}^{\\circ}\\)C by a cold plate cooler on the recirculation loop (TE Technology, Inc., Traverse City, MI, USA). Four different N/S ratios were tested under invariable temperature and sulfur load, imposed according to Table 1.
Synthetic influent was introduced from two 2 L influent vessels kept under nitrogen gas to avoid influent aging. Influent was pumped into the reactor at 2 L/day, equivalent to a 6 h hydraulic retention time. The recycling pump (P3 in Figure 1) was set to maintain a 6 m/h reactor up-flow velocity, necessary to expand the sludge bed. pH was monitored by an electrode (Hanna Instruments) on the recirculation loop.
### Analytical Procedure
Effluent samples were collected daily (following 0.45 \\(\\upmu\\)m filtration) and analyzed immediately for nitrate (NO\\({}_{3}\\)\\({}^{-}\\)), nitrite (NO\\({}_{2}\\)\\({}^{-}\\)), sulfate (SO\\({}_{4}\\)\\({}^{2-}\\)), sulfide (HS\\({}^{-}\\)), and thiosulfate (S\\({}_{2}\\)O\\({}_{3}\\)\\({}^{2-}\\)) by ion chromatography (Dionex ICS-5000) using potassium hydroxide (KOH) as the eluent. Sulfide concentration was determined indirectly by potassium permanganate (KMnO\\({}_{4}\\)) oxidation. Sample separation and elution were performed using an IonPac AS11-HC 2 mm analytical column. Analysis started at 22 mM KOH; the gradient started at 6 min, ramped up in 3 min to 45 mM, and was kept at that concentration for another 4 min. The data acquisition time was 13 min. The injection volume was 10 \\(\\upmu\\)L and the flow rate 0.3 mL/min.
### Elemental Sulfur Measurements
Two different fractions of S\\({}^{0}\\) were distinguished according to Sposob et al. [15]: elemental sulfur accumulated in the reactor (denoted as S\\({}^{0}_{\\rm acc}\\)) and suspended elemental sulfur (S\\({}^{0}_{\\rm ss}\\)). Distinguishing between these two S
Figure 1: Experimental setup.
fractions is done based on the elemental sulfur balance as an indirect method for quantification of \\(S^{0}_{\\rm\\,acc}\\), while \\(S^{0}_{\\rm\\,ss}\\) is equivalent to measured \\(S_{2}\\)O\\({}_{3}\\)\\({}^{2-}\\)[15]. Concentration of \\(S^{0}_{\\rm\\,acc}\\) was calculated based on the difference between influent HS\\({}^{-}\\) concentration and effluent concentrations of HS\\({}^{-}\\), SO\\({}_{4}\\)\\({}^{2-}\\), and \\(S^{0}_{\\rm\\,ss}\\), according to Equation (1). H\\({}_{2}\\)S in the headspace was not measured.
\\[S^{0}_{\\rm\\,acc}=\\rm{HS^{-}}_{\\rm inf}-\\rm{HS^{-}}_{\\rm eff}-\\rm{SO_{4}}^{2-}_{ \\rm eff}-S^{0}_{\\rm\\,ss}, \\tag{1}\\]
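Equation (1) is a direct arithmetic balance over the influent and effluent sulfur species; as a worked check (in Python), the period averages later reported for N/S = 0.35 (Table 2, with influent HS\\({}^{-}\\) = 3.12 mM S/L) reproduce the listed S\\({}^{0}_{\\rm acc}\\):

```python
def s0_accumulated(hs_inf, hs_eff, so4_eff, s0_ss_eff):
    """Accumulated elemental sulfur from Eq. (1); all terms in mM S/L.
    S0_ss is taken as the measured thiosulfate-equivalent sulfur."""
    return hs_inf - hs_eff - so4_eff - s0_ss_eff

# Period averages for N/S = 0.35: effluent HS-, SO4^2- and S0_ss from Table 2.
print(f"{s0_accumulated(3.12, 0.34, 0.69, 1.52):.2f} mM/L")  # 0.57, as reported
```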
## 3 Results and Discussion
### Reactor Performance
The electron acceptor was almost completely removed (Figure 2), on average 98.7 \\(\\pm\\) 2.8% throughout the 60-day experiment, which consisted of four phases with increasing NO\\({}_{3}\\)\\({}^{-}\\) concentration, thereby changing the N/S ratio (Table 1). The NO\\({}_{3}\\)\\({}^{-}\\) removal was 96.8 \\(\\pm\\) 3.9% at the highest N/S ratio and 99.3 \\(\\pm\\) 2.3% at the lowest ratio. It has been reported that NO\\({}_{3}\\)\\({}^{-}\\) removal can be significantly inhibited at N/S ratios much higher than derived from stoichiometry [17], but this was not the case here. However, the changes in N/S ratio had an impact on HS\\({}^{-}\\) removal, with 89.1 \\(\\pm\\) 2.2 and 89.6 \\(\\pm\\) 2.9% at N/S ratios of 0.35 and 1.30, respectively, and only 76.9 \\(\\pm\\) 2.6% at N/S = 0.60 (Figure 2).
Both S\\({}^{0}\\) forms, accumulated (S\\({}^{0}_{\\rm\\,acc}\\)) and suspended (S\\({}^{0}_{\\rm\\,ss}\\)), were decreasing with increasing N/S ratios and they were negligible at N/S = 1.30 (Table 2). The negative S\\({}^{0}_{\\rm\\,acc}\\) value at N/S = 1.30 implies the oxidation to SO\\({}_{4}\\)\\({}^{2-}\\) of the earlier accumulated S\\({}^{0}\\) in the reactor during lower N/S ratios.
Each increase in NO\\({}_{3}\\)\\({}^{-}\\) resulted in a rise in SO\\({}_{4}\\)\\({}^{2-}\\) concentration, depletion of the S\\({}^{0}\\) fractions, and a pH drop (Figures 3 and 4). During the last week of the experiment, pH decreased to 7.19 \\(\\pm\\) 0.31 at N/S = 1.30 due to high SO\\({}_{4}\\)\\({}^{2-}\\) production (Figures 3 and 4). At this pH, a larger fraction of HS\\({}^{-}\\) in the unionized form as H\\({}_{2}\\)S could occur compared to the conditions at lower N/S, with higher pH (Figure 4). It is still argued that an insignificant amount of H\\({}_{2}\\)S was stripped off to the headspace since the dissolved H\\({}_{2}\\)S level at pH 7.19 \\(\\pm\\) 0.31 is calculated to be only 0.2 mM/L and H\\({}_{2}\\)S has a high solubility in water (150 mM/L at 10 \\({}^{\\circ}\\)C [18]). Therefore, there is no unaccounted-for or missing sulfur in the balance.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline
**N/S Ratio** & **S\\({}^{0}_{\\rm\\,acc}\\) \\({}^{1}\\)** & **SO\\({}_{4}\\)\\({}^{2-}\\)** & **S\\({}^{0}_{\\rm\\,ss}\\)** & **HS\\({}^{-}\\)-S** & **NO\\({}_{3}\\)\\({}^{-}\\)** & **pH** & **Total Sulfur (Effluent) \\({}^{2}\\)** \\\\ \\hline
0.35 & 0.57 \\(\\pm\\) 0.21 & 0.69 \\(\\pm\\) 0.11 & 1.52 \\(\\pm\\) 0.16 & 0.34 \\(\\pm\\) 0.07 & 0.01 \\(\\pm\\) 0.03 & 8.11 \\(\\pm\\) 0.11 & 2.55 \\(\\pm\\) 0.21 \\\\
0.40 & 0.44 \\(\\pm\\) 0.32 & 0.96 \\(\\pm\\) 0.21 & 1.24 \\(\\pm\\) 0.16 & 0.56 \\(\\pm\\) 0.15 & 0.02 \\(\\pm\\) 0.03 & 7.92 \\(\\pm\\) 0.15 & 2.76 \\(\\pm\\) 0.36 \\\\
0.60 & 0.09 \\(\\pm\\) 0.20 & 1.39 \\(\\pm\\) 0.19 & 0.91 \\(\\pm\\) 0.26 & 0.72 \\(\\pm\\) 0.08 & 0.02 \\(\\pm\\) 0.06 & 7.65 \\(\\pm\\) 0.06 & 3.03 \\(\\pm\\) 0.2 \\\\
1.30 & \\(-\\)0.69 \\(\\pm\\) 0.58 & 3.37 \\(\\pm\\) 0.83 & 0.11 \\(\\pm\\) 0.23 & 0.32 \\(\\pm\\) 0.09 & 0.15 \\(\\pm\\) 0.16 & 7.19 \\(\\pm\\) 0.31 & 3.81 \\(\\pm\\) 0.58 \\\\ \\hline \\hline \\end{tabular}
* Notes: \\({}^{1}\\) Derived values come from the balance (Equation (1)); \\({}^{2}\\) Total sulfur (effluent) = SO\\({}_{4}\\)\\({}^{2-}\\) + S\\({}^{0}_{\\rm\\,ss}\\) + HS\\({}^{-}\\)-S.
\\end{table}
Table 2: Process output parameters (concentrations in mM/L).
### Sulfur Components at Different N/S Ratios
The imposed increase in feed NO3\\({}^{-}\\) concentration had, as expected, an impact on the presence of the four different sulfur components, HS\\({}^{-}\\), SO4\\({}^{2-}\\) and two fractions of S\\({}^{0}\\): accumulated (S\\({}^{0}_{\\rm acc}\\)) and suspended (S\\({}^{0}_{\\rm ss}\\)) (Figure 5).
Figure 4: pH vs. time under different N/S ratios at 10 \\({}^{\\circ}\\)C.
Figure 3: Time series of accumulated (S\\({}^{0}_{\\rm acc}\\)) and suspended (S\\({}^{0}_{\\rm ss}\\)) elemental sulfur, and sulfate (SO\\({}_{4}\\)\\({}^{2-}\\)-S) concentrations under different N/S ratios at 10 \\({}^{\\circ}\\)C.
The initially tested N/S ratio revealed that around 11% (0.34 \\(\\pm\\) 0.07 mM/L) of influent sulfur remained unreacted as HS\\({}^{-}\\). Under this condition, S\\({}^{0}_{\\rm ss}\\) was the main fraction of S\\({}^{0}\\), at a 49% share of influent sulfur, while S\\({}^{0}_{\\rm acc}\\) constituted 18%, adding up to 67%. A share of 22% of the electron donor was oxidized to SO\\({}_{4}\\)\\({}^{2-}\\) at N/S = 0.35. Similar studies performed under mesophilic conditions reveal lower SO\\({}_{4}\\)\\({}^{2-}\\) fractions at similar N/S ratios: (1) at 25 \\({}^{\\circ}\\)C and N/S = 0.35 the fraction of SO\\({}_{4}\\)\\({}^{2-}\\) constituted 14% [15]; (2) at room temperature (22-23 \\({}^{\\circ}\\)C) and N/S = 0.32 only 4% of HS\\({}^{-}\\) was converted to SO\\({}_{4}\\)\\({}^{2-}\\) [19]. The results confirm previous studies showing a temperature impact on HS\\({}^{-}\\) removal and SO\\({}_{4}\\)\\({}^{2-}\\) production, where the SO\\({}_{4}\\)\\({}^{2-}\\) share increases with decreasing temperature [15].
The slight increase in N/S ratio from 0.35 to 0.40 (equivalent to the catabolic reaction in simultaneous NO\\({}_{3}\\)\\({}^{-}\\) and HS\\({}^{-}\\) removal to yield S\\({}^{0}\\)) was imposed to supply sufficient NO\\({}_{3}\\)\\({}^{-}\\) to obtain complete removal of HS\\({}^{-}\\); it, however, led to less HS\\({}^{-}\\) oxidation. The presence of the S\\({}^{0}\\) fractions also decreased, from 67 to 54%, reducing the concentration of S\\({}^{0}_{\\rm acc}\\) by 23% and S\\({}^{0}_{\\rm ss}\\) by 18% in comparison to the previous (N/S = 0.35) period (Table 2). The electron donor removal decreased, so that 18% of influent sulfur remained unreacted. More of the oxidized HS\\({}^{-}\\) was, however, oxidized to the highest oxidation level (+VI), increasing the SO\\({}_{4}\\)\\({}^{2-}\\) share of products from 22 to 31%. This clearly shows that the appropriate N/S ratio for S\\({}^{0}\\) production is lower than that reflected in the catabolic reaction alone.
S\\({}^{0}_{\\rm acc}\\) was almost completely avoided at N/S = 0.60 (3% of influent sulfur, Figure 5). S\\({}^{0}\\) was still present in the liquid phase (S\\({}^{0}_{\\rm 88}\\) = 29% of influent sulfur) but much less than at lower N/S ratios. Concentration of HS\\({}^{-}\\) and SO\\({}_{4}\\)\\({}^{2-}\\) at the effluent increased compared to lower N/S ratios. Unreacted HS\\({}^{-}\\), 23%, 0.72 \\(\\pm\\) 0.08 mM/L, shows the lowest removal of electron donor during the whole experiment. The increase in SO\\({}_{4}\\)\\({}^{2-}\\) was similar as for the transition from 0.35 to 0.40, at N/S = 0.60 had a share of 45%.
Effluent SO\\({}_{4}\\)\\({}^{2-}\\) was the main HS\\({}^{-}\\) oxidation product at the highest studied N/S ratio (1.30; NO\\({}_{3}\\)\\({}^{-}\\) = 4.05 mM/L), but its concentration varied more than at lower N/S (3.37 \\(\\pm\\) 0.83 mM/L). The sum of sulfur components in the effluent was 22% higher than in the influent during this period (Figures 3 and 5), which is explained by the oxidation of previously accumulated sulfur, S\\({}^{0}_{\\rm acc}\\). Similar behavior has been observed during abrupt temperature drops [20]. The slight amount of S\\({}^{0}_{\\rm ss}\\) (0.11 \\(\\pm\\) 0.23 mM/L;
Figure 5: Share of sulfur products under different N/S ratios at 10 \\({}^{\\circ}\\)C.
4%) observed in this period is assumed to originate from previously accumulated sulfur, S\({}^{0}_{\rm acc}\). The excess of effluent over influent sulfur must be temporary, lasting until the S\({}^{0}_{\rm acc}\) in the granules is exhausted, but the experiment did not last long enough to reach such a steady state.
The observed substrate consumption and product distribution at different ratios between electron acceptor and donor differ from those reported based on catabolic reactions under mesophilic conditions. In particular, the nitrite (NO\({}_{2}\)\({}^{-}\)) accumulation observed under mesophilic conditions [7] did not occur in the presented work. It has also been reported that SOB like _Thiobacillus denitrificans_ oxidize stored sulfur only when reduced sulfur compounds--i.e., S\({}_{2}\)O\({}_{3}\)\({}^{2-}\)--have been depleted [21]. However, in this study, higher NO\({}_{3}\)\({}^{-}\) immediately triggered an increase in SO\({}_{4}\)\({}^{2-}\) production even when HS\({}^{-}\) was not completely oxidized.
It has been reported that changes in the N/S ratio under heterotrophic conditions caused changes in product distribution similar to those observed here. Additionally, changes in the N/S ratio led to changes in the heterotrophic microbial community structure [22]. There may similarly have been autotrophic community changes in the present study, but this was not investigated. An observed decrease in sludge bed height by 58% from the lowest to the highest N/S tested here may have been related to microbial community structure changes, but the main cause is probably the loss of S\({}^{0}_{\rm acc}\) from the granules. Oxidation of initially stored S\({}^{0}_{\rm acc}\) to recover energy at high N/S ratios is proposed as the main cause of the sludge bed reduction.
### Relation between Experimental and Theoretical Products Distribution
The use of the N/S ratio as a way to control the fate of HS\({}^{-}\) oxidation to either S\({}^{0}\) and/or SO\({}_{4}\)\({}^{2-}\) [9] is further analyzed by comparing theoretical equations [6] with experimental results (Figure 6). The experimental results deviate from theoretical values, with a good match only at N/S = 1.30. The observed offset, especially at N/S = 0.35, may be due to a metabolic shift observed in a temperature impact study [15]. There, SO\({}_{4}\)\({}^{2-}\) production increased at a constant N/S ratio (= 0.35) with decreasing temperature, which was hypothesized to be a natural response of the microbiota to compensate for temperature-induced changes in energy requirements.
Theoretically, according to the equations given by Kleerebezem and Mendez [6], equal product distribution between S\\({}^{0}\\) and \\(\\text{SO}_{4}\\text{${}^{2-}$}\\) should be expected at N/S = 0.825 or even at higher ratios, taking into account just the catabolic reactions. Experimentally, however, equal distribution of S\\({}^{0}\\) and
Figure 6: Experimental and theoretical concentration of elemental sulfur (\\(\\text{S}^{0}_{\\text{acc}}+\\text{S}^{0}_{\\text{ss}}\\)) (**left**), and \\(\\text{SO}_{4}\\text{${}^{2-}$}\\) (**right**).
SO4\\({}^{2-}\\) was reached already at N/S = 0.6. The organisms accumulated some amount of sulfur, \\(S^{0}_{\\rm acc}\\), as an energy reserve at low N/S ratio. Thus, in addition to temperature effects, the obtained offset at N/S ratios 0.4 and 0.6 may have been influenced by the oxidation of \\(S^{0}_{\\rm acc}\\). The continuous flow feeding with increasing N/S ratio, facilitated the observation of competition between \\(S^{0}_{\\rm acc}\\) and HS\\({}^{-}\\) as electron donors. This is especially visible at mid-N/S ratios where the \\(S^{0}_{\\rm acc}\\) was evidently, to changing degrees, used as an electron donor together with HS\\({}^{-}\\), for which removal decreased at the same time. This observation contradicts the previous studies in which it has been reported that the oxidation of accumulated \\(S^{0}\\) as an electron reserve can occur only when the reduced sulfur compounds are depleted (HS\\({}^{-}\\) in this case) [21]. The possibility that the organisms can utilize this stored energy by oxidizing \\(S^{0}_{\\rm acc}\\) to SO4\\({}^{2-}\\) also in conditions when surplus HS\\({}^{-}\\) is present implies larger culture flexibility to utilize available resources. The microorganisms may thereby have increased their catabolic energy yield by utilizing differences in free Gibbs energy since the oxidation from \\(S^{0}\\) to SO4\\({}^{2-}\\) has a slightly higher \\(\\Delta G^{\\circ}\\) than from HS\\({}^{-}\\) to SO4\\({}^{2-}\\), \\(-\\)800.76 and \\(-\\)768.28 kJ/reaction, respectively (Table 3). The exponential-like response for \\(S^{0}\\) (Figure 6) may thereby be a result of increased \\(S^{0}_{\\rm acc}\\) oxidation with increased influent NO3\\({}^{-}\\) concentration. This pathway apparently has an impact and may explain the offset and shape of the exponential-like response of N/S ratio on \\(S^{0}\\).
The overall percentage distribution of reactants and products (Table 4) shows an imbalance of electrons in the experimental data, which implies that some SO\({}_{4}\)\({}^{2-}\) must have been produced through the use of an electron acceptor other than NO\({}_{3}\)\({}^{-}\). The percentage of influent sulfur (as HS\({}^{-}\)) oxidized by another electron acceptor decreased with increasing N/S ratio, from 14 to 8% of influent sulfur. Similar observations have been reported in other studies, where the obtained products exceed what is theoretically expected based on the fed electron acceptor [7, 15]. Such unintended electron acceptors could be H\({}^{+}\) reduced to H\({}_{2}\) gas, inorganic carbon assimilated into biomass, or exposure to O\({}_{2}\).
## 4 Conclusions
The lowest and highest N/S ratios, 0.35 and 1.30, did not differ in HS\\({}^{-}\\) removal, with 89.1 \\(\\pm\\) 2.2% and 89.6 \\(\\pm\\) 2.9%, respectively. Less HS\\({}^{-}\\) removal was obtained at intermediate N/S ratios with the lowest, 76.9 \\(\\pm\\) 2.6%, at N/S = 0.60.
The products from the studied N/S ratios deviated from theoretical predictions, except at N/S = 1.30. Additionally, equal product distribution between S\\({}^{0}\\) and SO4\\({}^{2-}\\) occurred at a lower N/S ratio than theoretically expected. This implies that the reactions in continuous flow bioreactors are more complicated than accounted for in standard stoichiometric models.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
\multirow{2}{*}{**N/S Ratio**} & \multicolumn{2}{c}{**Theoretical Share (\%)**} & \multicolumn{2}{c}{**Experimental Share (\%)**} & \multicolumn{2}{c}{**NO\({}_{3}\)\({}^{-}\) Uptake Share (\%)**} & **SO\({}_{4}\)\({}^{2-}\) Produced by Another** \\
 & **S\({}^{0}\)** & **SO\({}_{4}\)\({}^{2-}\)** & **S\({}^{0}\)** & **SO\({}_{4}\)\({}^{2-}\)** & **S\({}^{0}\)** & **SO\({}_{4}\)\({}^{2-}\)** & **Electron Acceptor (mM/L)\({}^{1}\)** \\ \hline
0.35 & 100 & 0 & 67 & 22 & 67 & 33 & 0.41 (13\%) \\
0.40 & 95 & 5 & 54 & 31 & 46 & 54 & 0.44 (14\%) \\
0.60 & 74 & 26 & 32 & 45 & 18 & 82 & 0.22 (7\%) \\
1.30 & 0 & 100 & 4\({}^{2}\) & 108 & 1 & 99 & 0.26 (8\%) \\ \hline
\end{tabular}
Notes: \({}^{1}\) In parentheses: percentage of influent sulfur concentration; \({}^{2}\) only S\({}^{0}_{\rm ss}\) included.
\end{table}
Table 4: Comparison of theoretical and experimental percentage share of products and electron acceptor uptake.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Reaction** & **\(\Delta G^{\circ}\) (kJ per mol of Electron Donor)** \\ \hline
HS\({}^{-}\) + 0.4NO\({}_{3}\)\({}^{-}\) + 1.4H\({}^{+}\) \(\rightarrow\) S\({}^{0}\) + 0.2N\({}_{2}\) + 1.2H\({}_{2}\)O & \(-\)252.13 \\
HS\({}^{-}\) + 0.8NO\({}_{3}\)\({}^{-}\) + 0.8H\({}^{+}\) \(\rightarrow\) 0.5S\({}_{2}\)O\({}_{3}\)\({}^{2-}\) + 0.4N\({}_{2}\) + 0.9H\({}_{2}\)O & \(-\)393.14 \\
HS\({}^{-}\) + 1.6NO\({}_{3}\)\({}^{-}\) + 0.6H\({}^{+}\) \(\rightarrow\) SO\({}_{4}\)\({}^{2-}\) + 0.8N\({}_{2}\) + 0.8H\({}_{2}\)O & \(-\)768.28 \\
S\({}^{0}\) + 1.2NO\({}_{3}\)\({}^{-}\) + 0.4H\({}_{2}\)O \(\rightarrow\) SO\({}_{4}\)\({}^{2-}\) + 0.6N\({}_{2}\) + 0.8H\({}^{+}\) & \(-\)800.76 \\ \hline
\end{tabular}
\end{table}
Table 3: Possible reactions of reduced sulfur compounds with nitrate (NO\({}_{3}\)\({}^{-}\)).
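As a quick consistency check on Table 3, the catabolic N/S ratios can be recovered from a simple electron balance (our own worked example using standard half-reaction bookkeeping, not taken from the cited sources):
\[
\begin{aligned}
\text{Donor:}\quad & \mathrm{HS^- \rightarrow S^0 + H^+ + 2e^-} && (2\ \text{electrons released per S})\\
\text{Acceptor:}\quad & \mathrm{NO_3^- + 6H^+ + 5e^- \rightarrow \tfrac{1}{2}N_2 + 3H_2O} && (5\ \text{electrons accepted per N})\\
\text{Balance:}\quad & \mathrm{N/S} = 2/5 = 0.40.
\end{aligned}
\]
This matches the 0.4 mol NO\({}_{3}\)\({}^{-}\) per mol HS\({}^{-}\) in the first reaction of Table 3; likewise, full oxidation of HS\({}^{-}\) to SO\({}_{4}\)\({}^{2-}\) releases 8 electrons per S, giving N/S = 8/5 = 1.6 as in the third reaction.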
Increasing the N/S feed ratio caused an increase in SO\({}_{4}\)\({}^{2-}\) production and depletion of stored S\({}^{0}\). The S\({}^{0}\) accumulated during the low N/S feed ratio was utilized at higher N/S, leading to SO\({}_{4}\)\({}^{2-}\) production to recover stored energy. The oxidation of S\({}^{0}\) occurred even though excess HS\({}^{-}\) was available at higher feed N/S ratios (>0.35). These phenomena can explain the lower removal of HS\({}^{-}\) at mid-N/S ratios and the highest sulfur concentration obtained in the effluent at N/S = 1.30.
Efficient psychrophilic biological HS\({}^{-}\) removal with NO\({}_{3}\)\({}^{-}\) as an electron acceptor in an EGSB process is documented, and elemental sulfur (S\({}^{0}\)) harvesting can be obtained through careful NO\({}_{3}\)\({}^{-}\) supply control.
Acknowledgments: The authors would like to thank YARA AS International and The Research Council of Norway for support of this research.
Author Contributions: Carlos Dinamarca and Michal Sposob conceived and designed the experiment, and all authors were involved in analyzing the data; Michal Sposob performed the experiments and analyzed all the samples; Michal Sposob, Rune Bakke and Carlos Dinamarca contributed to writing the paper.
Conflicts of Interest: The authors declare no conflict of interest.
## References
* (1) Pokorna, D.; Zabranska, J. Sulfur-oxidizing bacteria in environmental technology. _Biotechnol. Adv._**2015**, _33_, 1246-1259. [CrossRef] [PubMed]
* (2) Chen, Y.; Cheng, J.J.; Creamer, K.S. Inhibition of anaerobic digestion process: A review. _Bioresour. Technol._**2008**, _99_, 4044-4064. [CrossRef] [PubMed]
* (3) Zhou, Z.; Yu, Z.; Meng, Q. Effects of nitrate on methane production, fermentation, and microbial populations in in vitro ruminal cultures. _Bioresour. Technol._**2012**, _103_, 173-179. [CrossRef] [PubMed]
* (4) Knoblauch, C.; Sahm, K.; Jørgensen, B.B. Psychrophilic sulfate-reducing bacteria isolated from permanently cold Arctic marine sediments: Description of _Desulfofrigus oceanense_ gen. nov., sp. nov., _Desulfofrigus fragile_ sp. nov., _Desulfofaba gelida_ gen. nov., sp. nov., _Desulfotalea psychrophila_ gen. nov., sp. nov. and _Desulfotalea arctica_ sp. nov. _Int. J. Syst. Evol. Microbiol._**1999**, _49_, 1631-1643.
* (5) Auguet, O.; Pijuan, M.; Borrego, C.M.; Gutierrez, O. Control of sulfide and methane production in anaerobic sewer systems by means of downstream nitrite dosage. _Sci. Total Environ._**2016**, _550_, 1116-1125. [CrossRef] [PubMed]
* (6) Kleerebezem, R.; Mendez, R. Autotrophic denitrification for combined hydrogen sulfide removal from biogas and post-denitrification. _Water Sci. Technol._**2002**, _45_, 349-356. [PubMed]
* (7) Cai, J.; Zheng, P.; Mahmood, Q. Effect of sulfide to nitrate ratios on the simultaneous anaerobic sulfide and nitrate removal. _Bioresour. Technol._**2008**, _99_, 5520-5527. [CrossRef] [PubMed]
* (8) Fajardo, C.; Mora, M.; Fernandez, I.; Mosquera-Corral, A.; Campos, J.L.; Mendez, R. Cross effect of temperature, pH and free ammonia on autotrophic denitrification process with sulphide as electron donor. _Chemosphere_**2014**, _97_, 10-15. [CrossRef] [PubMed]
* (9) Beristain-Cardoso, R.; Sierra-Alvarez, R.; Rowlette, P.; Flores, E.R.; Gomez, J.; Field, J.A. Sulfide oxidation under chemolithoautotrophic denitrifying conditions. _Biotechnol. Bioeng._**2006**, _95_, 1148-1157. [CrossRef] [PubMed]
* (10) Huang, C.; Li, Z.; Chen, F.; Liu, Q.; Zhao, Y.; Gao, L.; Chen, C.; Zhou, J.; Wang, A. Efficient regulation of elemental sulfur recovery through optimizing working height of upflow anaerobic sludge blanket reactor during denitrifying sulfide removal process. _Bioresour. Technol._**2016**, _200_, 1019-1023. [CrossRef] [PubMed]
* (11) Mahmood, Q.; Zheng, P.; Cai, J.; Wu, D.; Hu, B.; Li, J. Anoxic sulfide biooxidation using nitrite as electron acceptor. _J. Hazard. Mater._**2007**, _147_, 249-256. [CrossRef] [PubMed]
* (12) Xu, Y.; Chen, N.; Feng, C.; Hao, C.; Peng, T. Sulfur-based autotrophic denitrification with eggshell for nitrate-contaminated synthetic groundwater treatment. _Environ. Technol._**2016**, _37_, 3094-3103. [CrossRef] [PubMed]
* (13) Di Capua, F.; Milone, I.; Lakaniemi, A.M.; Lens, P.N.L.; Esposito, G. High-rate autotrophic denitrification in a fluidized-bed reactor at psychrophilic temperatures. _Chem. Eng. J._**2017**, _313_, 591-598. [CrossRef]
* (14) Yamamoto-Ikemoto, R.; Komori, T.; Nomuri, M.; Ide, Y.; Matsukami, T. Nitrogen removal from hydroponic culture wastewater by autotrophic denitrification using thiosulfate. _Water Sci. Technol._**2000**, _42_, 369-376.
* (15) Sposob, M.; Bakke, R.; Dinamarca, C. Metabolic divergence in simultaneous biological removal of nitrate and sulfide for elemental sulfur production under temperature stress. _Bioresour. Technol._**2017**, _233_, 209-215. [CrossRef] [PubMed]
* (16) Wolin, E.A.; Wolin, M.J.; Wolfe, R.S. Formation of methane by bacterial extracts. _J. Biol. Chem._**1963**, _238_, 2882-2886. [PubMed]
* (17) Oh, S.E.; Kim, K.S.; Choi, H.C.; Cho, J.; Kim, I.S. Kinetics and physiological characteristics of autotrophic denitrification by denitriying sulfur bacteria. _Water Sci. Technol._**2000**, _42_, 59-68.
* (18) Carroll, J.J.; Mather, A.E. The solubility of hydrogen sulphide in water from 0 to 90 \\({}^{\\circ}\\)C and pressures to 1 MPa. _Geochim. Cosmochim. Acta_**1989**, _53_, 1163-1170. [CrossRef]
* (19) An, S.; Tang, K.; Nemati, M. Simultaneous biodesulphurization and denitrification using an oil reservoir microbial culture: Effects of sulphide loading rate and sulphide to nitrate loading ratio. _Water Res._**2010**, _44_, 1531-1541. [CrossRef] [PubMed]
* (20) Sposob, M.; Dinamarca, C.; Bakke, R. Short-term temperature impact on simultaneous biological nitrogen-sulphur treatment in EGSB reactor. _Water Sci. Technol._**2016**, _74_, 1610-1618. [CrossRef] [PubMed]
* (21) Schedel, M.; Truper, H.G. Anaerobic oxidation of thiosulfate and elemental sulfur in Thiobacillus denitrificans. _Arch. Microbiol._**1980**, _124_, 205-210. [CrossRef]
* (22) Chen, C.; Xu, X.J.; Xie, P.; Yuan, Y.; Zhou, X.; Wang, A.J.; Lee, D.J.; Ren, N.Q. Pyrosequencing reveals microbial community dynamics in integrated simultaneous desulfurization and denitrification process at different influent nitrate concentrations. _Chemosphere_**2017**, _171_, 294-301. [CrossRef] [PubMed] | The excessive H\\({}_{2}\\)S presence in water and wastewater can lead to corrosion, toxicity, and biological processes inhibition--i.e., anaerobic digestion. Production of H\\({}_{2}\\)S can occur in psychrophilic conditions. Biological removal of HS\\({}^{-}\\) by addition of NO\\({}_{3}\\)\\({}^{-}\\) as an electron acceptor under psychrophilic (10 \\({}^{\\circ}\\)C) conditions in a continuous flow experiment is evaluated here. Four different N/S molar ratios--0.35, 0.40, 0.60, and 1.30--were tested in an expanded granular sludge bed (EGSB) reactor. Samples were analyzed daily by ion chromatography. Efficient psychrophilic HS\\({}^{-}\\) removal with sulfur products oxidation control by NO\\({}_{3}\\)\\({}^{-}\\) supply is documented. The highest HS\\({}^{-}\\) removal was obtained at N/S = 0.35 and 1.30 (89.1 \\(\\pm\\) 2.2 and 89.6 \\(\\pm\\) 2.9%). Removal of HS\\({}^{-}\\) was less at mid-N/S with the lowest value (76.9 \\(\\pm\\) 2.6%) at N/S = 0.60. NO\\({}_{3}\\)\\({}^{-}\\) removal remained high for all N/S ratios. N/S molar ratio influenced the sulfur products distribution with less S\\({}^{0}\\) and increase in SO\\({}_{4}\\)\\({}^{2-}\\) effluent concentration with increasing N/S ratio. Oxidation of HS\\({}^{-}\\) and accumulated S\\({}^{0}\\) occurred simultaneously at N/S ratios >0.35. The observations are explained by culture flexibility in utilizing available resources for energy gain.
# Bayesian Layer Graph Convolutional Network for Hyperspectral Image Classification
Mingyang Zhang,, Ziqi Di, Maoguo Gong,,
Yue Wu,, Hao Li,, Xiangming Jiang,
M. Zhang, Z. Di, M. Gong, H. Li and X. Jiang are with the School of Electronic Engineering, and the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, No. 2 South Tailai Road, Xi'an 710071, China (e-mail: [email protected]; [email protected]).Y. Wu is with the School of Computer Science and Technology, Xidian University, Xi'an 710071, China.
## I Introduction
With the rapid development of remote sensing sensors, hyperspectral images have drawn more and more attention in academic research and various applications [1, 2], such as mineral detection, soil testing, medical imaging, and urban planning. Compared with conventional optical remote sensing images, hyperspectral images contain abundant spectral information, consisting of hundreds of contiguous spectral channels [3, 4]. Owing to this characteristic, besides spatial context information, hyperspectral images possess unique spectral signatures for identifying materials and ground objects. In recent years, spatial-spectral classification of hyperspectral images has become a hot research issue in the remote sensing community.
In early research, hyperspectral image classification focused on manually designing various feature extraction methods. Feature extraction aims to map the original data into a new feature space where the spatial and spectral features are more distinguishable and better suited for classification. For spectral feature extraction, there are two main categories: linear transformation based methods and nonlinear transformation based methods. Principal component analysis (PCA) [5] and independent component analysis (ICA) [6] are two representative linear transformation based methods; they use matrix factorization to obtain a new feature space. For nonlinear transformation based methods, locally linear embedding (LLE) [7] and locality preserving projections (LPP) [8] are two conventional feature extraction methods in pattern recognition, which are based on manifold learning for seeking a new feature space. However, researchers found that extracting spectral features alone yields limited improvement in classification accuracy. Since spatially adjacent ground objects are more correlated [9], spatial context features are also considered for feature extraction in combination with spectral features. Random field based methods are classic spatial feature extraction methods, among which Markov random field (MRF) and conditional random field (CRF) are two representative ones. MRF can build models according to the relevance of adjacent pixels and can be applied to various ground object scenes and distribution models. Many works have been proposed based on MRF. For example, an adaptive MRF was proposed for hyperspectral feature extraction [10]. Further, Sun proposed a spatially adaptive MRF prior in the hidden field [11], which makes the spatial term smoother. CRF is a probabilistic model for labels and structured data, which shows potential for spatial feature extraction. Various CRF-based methods have been proposed for spatial feature extraction, such as CRF combined with sparse representation [12], CRF combined with ensemble learning [13], and CRF combined with sub-pixel techniques [14]. Besides, morphological feature based methods are also effective for spatial feature extraction, among which extended morphological attribute profiles (EMAP) is a representative one [15]. Based on EMAP, some solid works have been proposed, such as multiple feature learning [16] and histogram-based attribute profile methods [17].
The conventional manually designed feature extraction methods generally consist of a monolayer structure and can be considered shallow models. However, hyperspectral images are nonlinear, high-dimensional, and redundant. Due to their limited capacity, shallow models are often not ideal for processing this type of data. To address this issue, deep learning models have been introduced to hyperspectral image classification, and some inspiring works implementing spatial-spectral classification have been proposed in recent years. Chen first proposed to introduce a deep learning model to hyperspectral classification [18], using a stacked auto-encoder structure to extract high-level features. Based on this work, a deep belief network was proposed to extract spatial-spectral features for hyperspectral images [19]. After this, various convolutional neural network (CNN) based methods started to make their mark in hyperspectral classification, due to their great potential for extracting spatial features. 2-D and 3-D CNN frameworks were proposed to implement classification for hyperspectral images in a supervised style [20], which further improves the classification performance. Lee proposed a contextual CNN with a deeper structure to extract more subtle spatial features for classification [21]. A diverse region-based CNN was proposed to obtain promising features with semantic context-aware representation [22]. Recently, a CNN with multiscale convolution and a diversified metric framework was proposed, which combines multiscale learning and determinantal point process priors to improve the diversity of features for classification [23]. However, CNN based methods bring another challenge for hyperspectral image classification: the scarcity of labeled samples cannot sufficiently support the training of CNN based models. To address this issue, data augmentation and unsupervised pre-training have been introduced to CNN based methods. For data augmentation, Li proposed a pixel-pair feature extraction strategy to enlarge the labeled samples [24]; with the enlarged labeled samples, the training of deep CNN models can be improved effectively. For the unsupervised pre-training strategy, Romero proposed a greedy layerwise training framework to implement unsupervised pre-training [25]. Further, Mou proposed an encoder-decoder structure to train a residual CNN block in an unsupervised style [26]. Moreover, a generative model based on the Wasserstein generative adversarial network was proposed [27], which can pre-train a CNN subnetwork without labeled samples.
Recently, besides Euclidean-space deep learning methods, graph-space deep learning methods have attracted more attention in hyperspectral image classification research; they map the original data from Euclidean space to graph space. Due to the concepts of nodes and edges, compared with the traditional Euclidean space, the graph space can better explore and exploit the relationships among all pixels of a hyperspectral image [28]. Based on this property, some graph neural network methods have been proposed for the hyperspectral image classification task. Wan transformed hyperspectral data into a graph style using image segmentation techniques, and proposed a multiscale dynamic graph convolutional network (GCN) to implement node classification [29]. Based on [29], a context-aware dynamic graph convolutional network was proposed, which can flexibly explore the relations among graph-style data [30]. Meanwhile, due to their ability to handle the whole graph, i.e., the whole dataset, some semi-supervised graph-based methods such as GCNs have been proposed, which can utilize labeled and unlabeled samples simultaneously to reduce the requirement for labeled training samples [28, 31, 32]. Moreover, some effective variants of GCNs have been further designed. In [33] and [34], GCNs were designed in a dual-channel style to explore multiscale spatial information and label distribution, respectively. To better update the graph structure, an adaptive sampling strategy and an attention mechanism were combined with GCN-based methods in [35] and [36], respectively. Liu combined the advantages of convolutional neural networks with GCNs to better utilize pixel- and superpixel-level features [37]. From the view of training efficiency, Hong proposed a minibatch GCN which can reduce the computational cost in large-scale remote sensing data analysis [38].
However, the aforementioned deep learning based methods, including CNNs and GCNs, use point estimation for data features, which makes the networks prone to overfitting with poor generalization ability during the learning process. At the same time, since each forward propagation is performed with a single sample of the weight space, the convergence speed and learning efficiency are greatly limited, and the network is incapable of estimating the credibility of its output. In addition, facing the imbalanced sample distribution of hyperspectral remote sensing datasets, it is difficult for deep learning methods to maintain high classification accuracy on minority classes, which increases the risk of missed alarms on high-value samples.
To address the issues discussed above, we propose a Bayesian layer graph convolutional network (BLGCN), in which we design the Bayesian layer. Different from traditional neural networks, the Bayesian strategy represents weight matrices in distribution form, which transforms the training of the network from point estimation to distribution estimation. The weight matrices obtained by sampling are combined with the input features to produce the network output, i.e., the classification results.
We apply the variational inference method to calculate the posterior distribution of the weight matrices and classification loss. Consequently, based on the given prior distribution and multiple sampling in each forward propagation, the overfitting problem is effectively solved, and the generalization ability of the neural network is significantly improved.
While adopting the Bayesian neural network, we give the definition of the Bayesian layer and design its structure. The participation of the Bayesian layer enables researchers to fine-tune the network backbone to fit the specific task. Compared with the traditional Bayesian neural network, our method can reach a balance of high classification accuracy and low time cost.
Besides, in order to tackle the task of minority class recognition, generative adversarial methods [39] are introduced to perform feature learning and data generation for minority classes. By expanding the training set with generated data, the recognition accuracy on minority classes is effectively increased.
At the same time, in order to use the uncertainty information of the classification results to guide the training process, we design a dynamic control strategy based on the Bayesian method. By setting a threshold on the validation set classification accuracy, we stipulate that the training process is terminated when the upper limit of the confidence interval reaches the threshold. This strategy shortens the time cost of the training process and increases the training efficiency.
To sum up, our main contributions in this paper are listed as follows:
(1) We propose the BLGCN framework, which can reach the balance of high classification accuracy and low time cost.
(2) The Bayesian method carried by Bayesian layer enables the network to quantify the uncertainty of the classification results.
(3) The generative adversarial method we import solve the minority class recognition problem efficiently.
(4) A dynamic control strategy based on Bayesian method is involved in the training process which further reduces the time cost of model learning.
The remainder of our paper is organized as follows. Background and motivations of the methods we used are introduced in Section II. In Section III we formulate the theoretical basis of BLGCN and relevant image processing methods, including data-augmentation with generative adversarial methods and our training strategy. We conduct the experiments in Section IV. Finally, this article is concluded in Section V.
## II Background and Motivation
### _Graph Convolutional Network_
Since graph convolutional networks were presented by Kipf and Welling in 2017 [40], they have been introduced into a wide range of applications, including recommendation systems, graph embedding, and node classification. As an important part of remote sensing image analysis, the application of GCNs to hyperspectral image classification can also be seen in several works [41, 42, 43]. To begin with, the GCN method assumes that the given setting can be simplified into a graph \(G_{obs}=(V,\varepsilon)\), where \(V\) represents the set of \(N\) nodes and \(\varepsilon\) is the set of their edges. Each node \(i\) has a feature vector \(x_{i}\in\mathbb{R}^{d\times 1}\) in \(d\) dimensions, and the labels related to a subset of nodes \(L\subset V\) are expressed as \(y_{i}\). In the image classification task, the value of the label identifies the category.
The core layer of a GCN can be expressed as follows:
\\[\\begin{split}\\mathbf{H}^{(1)}&=\\sigma(\\hat{\\mathbf{ A}}_{G}\\mathbf{X}\\mathbf{W}^{(0)}),\\\\ \\mathbf{H}^{(l+1)}&=\\sigma(\\hat{\\mathbf{A}}_{G} \\mathbf{H}^{(l)}\\mathbf{W}^{(l)}).\\end{split} \\tag{1}\\]
Here \\(\\mathbf{W}^{(l)}\\) denotes the weights of the \\(l\\)th layer in a GCN. \\(\\mathbf{H}^{(l)}\\) represents the output features from \\(l\\)th layer, and the function \\(\\sigma\\) is a nonlinear activation function. The normalized adjacency matrix \\(\\hat{\\mathbf{A}}_{G}\\) is derived from a given graph and determines the mixing situation of the output features across the graph at each convolutional layer.
For an \\(L\\)-layers GCN, the final output is \\(Z=\\mathbf{H}^{(L)}\\). The model learns the weights of network through the backpropagation with the target of minimizing the error metric function between the given labels \\(Y_{L}\\) and the network prediction outputs \\(Z_{L}\\).
### _Bayesian Method_
With the development of convolutional neural networks and Transformers in recent years, mainstream deep learning frameworks have become more and more complex, and network depth can reach hundreds or even thousands of layers. Although these large-scale neural networks are capable of information perception and feature extraction, there remain hidden dangers: they overfit easily and lack the ability to estimate the credibility of the model output. Hence, Bayesian deep learning methods [44] have been widely used in research with significant effect, since they can capture the cognitive uncertainty inherent in data-driven models while maintaining high accuracy [45].
For the given dataset \\(\\mathbf{X}\\) and label set \\(\\mathbf{Y}\\), when predicting the probability distribution of the test data pair \\(x^{*}\\) and \\(y^{*}\\), according to the marginal probability calculation, we have
\[p(y^{*}\mid x^{*},\mathbf{Y},\mathbf{X})=\int p(y^{*}\mid x^{*},\omega)\,p(\omega\mid\mathbf{Y},\mathbf{X})\,d\omega \tag{2}\]
where \\(\\omega\\) is the model parameter, and the problem is transformed into finding the maximum posterior distribution of the parameters on the training dataset \\(\\mathbf{X}\\),\\(\\mathbf{Y}\\). According to the Bayesian formula, we have
\\[p(\\omega\\mid\\mathbf{Y},\\mathbf{X})=\\frac{p(\\mathbf{Y}\\mid\\mathbf{X},\\omega)p(\\omega)}{p(\\mathbf{Y }\\mid\\mathbf{X})}. \\tag{3}\\]
The integral in Eq. 2 is intractable in most situations, and various methods have been proposed to infer \(p(\omega\mid\mathbf{Y},\mathbf{X})\), including variational inference [46] and the Markov chain Monte Carlo (MCMC) method [44].
General Bayesian deep learning is defined slightly differently in various articles, but it usually refers to the Bayesian neural network. Based on the framework of the traditional convolutional neural network, the Bayesian neural network inherits the method of Bayesian deep learning. Assuming the weights \(\omega\) follow a specific distribution, the optimization goal of the network is to maximize the posterior distribution \(p(\omega\mid\mathbf{Y},\mathbf{X})\). The backpropagation process remains the same as in a traditional neural network.
By structurally combining probabilistic models and neural networks, Bayesian neural networks also keep their advantages. While retaining the perception and feature extraction abilities of neural networks, the Bayesian method applies distribution estimation instead of point estimation by representing the weight space with probability distributions. This provides the theoretical premise for quantifying the confidence level of the learning results. By measuring the uncertainty of the learning results, we can design evaluation models which dynamically evaluate the training progress and determine whether to proceed. At the same time, the training method based on distribution estimation speeds up convergence and ensures the robustness of the training results by sampling the weight distribution multiple times in each forward propagation.
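In practice, the intractable integral in Eq. 2 is approximated by Monte Carlo sampling: the weights are drawn several times and the spread of the resulting outputs serves as the uncertainty estimate. A minimal sketch, assuming a `model` whose forward pass draws a fresh weight sample on every call:

```python
import torch

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    """Approximate the predictive distribution of Eq. 2 by weight sampling."""
    probs = torch.stack([torch.softmax(model(x), dim=-1)
                         for _ in range(n_samples)])  # (S, N, C)
    mean = probs.mean(dim=0)   # predictive mean over weight samples
    std = probs.std(dim=0)     # per-class uncertainty
    return mean, std
```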
### _Generative Adversarial Network_
Nowadays, the GAN [39] is becoming a popular model with strong generative capabilities and high efficiency. Generally, it consists of a discriminative network \(\mathbf{D}\) and a generative network \(\mathbf{G}\). The generative network \(\mathbf{G}\) needs to generate features that match the training features \(\mathbf{x}\) as closely as possible based on the given random features \(\mathbf{z}\), which follow a specific distribution (e.g., a uniform distribution or a Gaussian distribution). On the contrary, the aim of the discriminative network \(\mathbf{D}\) is to distinguish whether the sample data are generated features or training features \(\mathbf{x}\). The training processes of the two networks are arranged to perform alternately, which can be considered a two-player mini-max game with the following value function:
\\[\\begin{split}\\min_{\\mathbf{G}}\\max_{\\mathbf{D}}V(\\mathbf{D},\\mathbf{G})& =\\mathbb{E}_{\\mathbf{x}\\sim p_{data}(\\mathbf{x})}[\\log\\mathbf{D}(\\mathbf{x})]\\\\ &+\\mathbb{E}_{\\mathbf{z}\\sim p_{\\mathbf{z}}(\\mathbf{z})}[\\log(1-\\mathbf{D}(\\mathbf{G} (\\mathbf{z})))].\\end{split} \\tag{4}\\]
It is assumed that the training features \(\mathbf{x}\) follow the distribution \(p_{data}(\mathbf{x})\), and the random features \(\mathbf{z}\) follow an arbitrary distribution \(p_{\mathbf{z}}(\mathbf{z})\). As training proceeds, the discriminator network and the generative network tend to converge and reach a point where the generated features are so realistic that the discriminator is unable to judge whether they are real features.
When processing hyperspectral remote sensing image data, researchers are often troubled by the problem of sample imbalance. In the Indian Pines dataset, the sample size of the smallest class is only about eight-thousandths of the sample size of the largest class. The resulting model can easily have poor classification performance on minority classes. If we simply replicate the minority class data, we can achieve high recognition accuracy on the training dataset, but the model will lack generalization ability when dealing with unknown data. Therefore, we need to design the model from the data generation point of view. The traditional generative adversarial network has strong generative ability and adaptability, but when dealing with pixel data with long-sequence features in hyperspectral images, its feature extraction ability is greatly reduced, and it becomes difficult to generate new features.
Over the past years, GAN methods including 1D-GAN [47], 3D-GAN [47] and other networks [48, 49, 50] have demonstrated their superior ability in unsupervised or semi-supervised learning, and pioneers have started to apply them to hyperspectral remote sensing problems. However, we find that these models have limited feature expression capabilities and suffer from overfitting. Their feature extraction methods also lose part of the original information.
Therefore, we make some improvements to the generative adversarial network to solve the problem. By means of feature vector interleaved combination and unit matrix feature extraction, the newly designed generative adversarial network achieves better feature expression ability and strong generalization at the same time.
## III Methodology
This section elaborates our proposed BLGCN model and the corresponding HSI processing framework. First, we preprocess the input HSI raw data with the help of the simple linear iterative clustering (SLIC) algorithm [51], and construct the feature matrix and adjacency matrix. After that, we train our GAN network for the minority classes, augment the minority class data, and update the adjacency matrix. We then input the obtained features into the BLGCN model, and dynamically adjust the training strategy by quantifying the uncertainty of the output.
### _Data Preprocessing_
In order to meet the input requirements of our model, we need to perform preprocessing operations on the original image. We first apply the SLIC algorithm to divide the original image into a number of superpixel blocks; the pixels in each block have strong spatial-spectral similarity with each other. The essence of the SLIC algorithm is \(K\)-means clustering, which starts to evolve from several initial points randomly placed on the image and continuously clusters pixels with similar spatial-spectral characteristics. When the segmentation is completed, the original data can be simplified to a set of superpixel blocks.
For each superpixel block, we calculate the category distribution of the pixels in the block and select the most frequent category as the label of the superpixel block. To improve classification efficiency, we filter out all superpixel patches annotated as background. In order to extract the spatial information in the original image, we save the adjacency relationships between the superpixels, derived from the original spatial adjacency relationships between pixels, in an adjacency matrix denoted \(\mathbf{A}\).
We calculate the mean value of the contained pixel features over each dimension of a superpixel block and save it as the texture feature \(\mathbf{t}\); we take the sum of squares of this feature mean vector and save it as the spectral feature \(\mathbf{s}\). By concatenating the texture feature vector and the spectral feature vector, we obtain the feature matrix \(\mathbf{F}=(\mathbf{t}\parallel\mathbf{s})\).
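The preprocessing pipeline can be sketched as follows, using `skimage`'s SLIC (scikit-image \(\geq\) 0.19 for the `channel_axis` argument); the compactness value matches the experimental settings in Section IV, while the helper names and 4-neighborhood adjacency rule are our own assumptions:

```python
import numpy as np
from skimage.segmentation import slic

def preprocess(hsi, n_segments=1000):
    """hsi: (H, W, d) cube -> superpixel features F = (t || s) and adjacency A."""
    seg = slic(hsi, n_segments=n_segments, compactness=0.08,
               channel_axis=-1, start_label=0)
    n = seg.max() + 1
    t = np.stack([hsi[seg == i].mean(axis=0) for i in range(n)])  # texture t
    s = (t ** 2).sum(axis=1, keepdims=True)                       # spectral s
    F = np.concatenate([t, s], axis=1)                            # F = (t || s)
    A = np.zeros((n, n))
    # two superpixels are adjacent if their pixels touch horizontally/vertically
    for a, b in zip(seg[:, :-1].ravel(), seg[:, 1:].ravel()):
        if a != b:
            A[a, b] = A[b, a] = 1
    for a, b in zip(seg[:-1, :].ravel(), seg[1:, :].ravel()):
        if a != b:
            A[a, b] = A[b, a] = 1
    return F, A, seg
```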
### _Data Augmentation with GAN_
Some of the preprocessed hyperspectral images still have a serious sample imbalance problem. To solve it, we design a data augmentation framework based on the characteristics of the feature matrix with the help of the generative adversarial method. The framework is shown in Fig. 1; the integral settings in the framework are based on the Indian Pines HSI dataset.
#### III-B1 Formulation
We design the generative adversarial data augmentation framework by answering two questions. The first is how to increase the number of minority class training samples input into the generative adversarial network while ensuring data quality; the second is how to learn and generate high-dimensional feature vectors without losing the original information.
#### III-B2 Feature Extraction
After the image preprocessing process, for a sample class \(m_{i}\), we denote the number of superpixel blocks in the class as \(b_{m}\), and the feature vector of a superpixel block as \(f_{m}^{j}\in\mathbb{R}^{1\times d},j\in[0,b_{m})\), where \(d\) represents the dimension of the feature vector after preprocessing. We denote the maximum number of superpixel blocks over all classes as \((b_{m})_{max}\). A class \(m_{i}\) which meets the condition \(b_{m_{i}}<0.02\times(b_{m})_{max}\) is defined as a minority class; we extract its feature vectors \(f_{m_{i}}^{j}\) and stack them vertically into a minority class feature matrix \(\mathbf{F}_{i}\in\mathbb{R}^{b_{m_{i}}\times d}\).
Due to the high spectral dimension of the feature matrix, existing mainstream spectral feature extraction methods and minority class data generation strategies are usually performed with the PCA method [5], which leads to the loss of effective information and, as a result, reduces the classification accuracy of the model.
In order to achieve a balance between generation efficiency and data diversity, we design a transition matrix optimization strategy based on adversarial methods. We use the original feature vectors as the input of the discriminator and train the discriminator's ability to identify minority class feature vectors. The feature matrix perturbed by the transition matrix is imported into the generator, and the generated result is then input into the discriminator. The generator and discriminator are trained alternately. We denote the weight matrix of the generator as \(\mathbf{W_{G}}\); in order to realize the perturbation operation on the original feature matrix, it is necessary to initialize \(mean(\mathbf{W_{G}})=1\) and set a small variance to fit the data characteristics of the preprocessed feature matrix.
#### III-B3 Feature Enhancement
To cooperate with the designed optimization strategy, we construct a feature enhancement algorithm for minority class sequence data. For the minority class feature matrix \(\mathbf{F}_{i}\) generated by preprocessing, we replicate it vertically to \(d\) rows and get \(\mathbf{F}_{i}^{\prime}\), which is imported into the discriminator as real data for training. We then perform the operation \(\mathbf{F}_{i}^{\prime\prime}=\mathbf{F}_{i}^{\prime}\times\mathbf{I}_{d}\) and perturb the matrix \(\mathbf{F}_{i}^{\prime\prime}\) with the generator. Through this process, we construct a diagonal feature matrix that can be input into the generator for convolution, and the output of the generator has the characteristic that each row is a slice of a newly generated feature vector. By mixing the rows in order, we effectively increase the data diversity of the training set. At the same time, if the replication order used to generate the feature matrix from the original \(\mathbf{F}_{i}\) is adjusted, we will get different forms of enhanced feature vectors, which greatly reduces the overfitting probability of the model.
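A compact sketch of this construction follows, under the assumption that the product with \(\mathbf{I}_{d}\) acts as element-wise masking (which is what yields the diagonal feature matrix described above); the function name is ours:

```python
import torch

def diagonal_features(F, d):
    """F: (b, d) minority-class features -> (d, d) diagonal feature matrix.
    Rows of F are tiled to d rows (F' in the text), then masked with the
    identity so that row i carries only its i-th feature component."""
    b = F.shape[0]
    reps = (d + b - 1) // b            # ceil(d / b) vertical replications
    F_prime = F.repeat(reps, 1)[:d]    # F': (d, d) after tiling and cropping
    return F_prime * torch.eye(d)      # assumed reading of F'' = F' x I_d
```

Changing the replication order in `F.repeat` before cropping yields the different enhanced feature forms mentioned above.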
#### III-B4 Loss Function
We design the loss function of the data augmentation model by inheriting the form of the GAN loss function. For simplicity, we perform the data enhancement operation on a single class and omit the index \(i\). We define \(\mathbf{y}_{real}\) and \(\mathbf{y}_{fake}\), which have all elements equal to 1 and 0, respectively, with proper sizes. First we calculate the training loss \(\mathbf{G}_{loss}\) of the generator
\[\mathbf{G}_{loss}=BCE\_loss(\mathbf{D}(\mathbf{G}(\mathbf{F}^{\prime}\otimes\mathbf{I})),\mathbf{y}_{real}), \tag{5}\]
in which the \\(BCE\\_loss\\) refers to the binary cross entropy loss function, then input the original data and the generated data into the discriminator. Training loss \\(\\mathbf{D}_{loss}\\) with following equations:
\[\mathbf{D}_{real\_loss} =BCE\_loss(\mathbf{D}(\mathbf{F}),\mathbf{y}_{real}) \tag{6}\] \[\mathbf{D}_{fake\_loss} =BCE\_loss(\mathbf{D}(\mathbf{G}(\mathbf{F}^{\prime}\otimes\mathbf{I})),\mathbf{y}_{fake})\] (7) \[\mathbf{D}_{loss} =\mathbf{D}_{real\_loss}+\mathbf{D}_{fake\_loss}. \tag{8}\]
We define the final loss function as follows:
\\[\\min_{\\mathbf{G}}\\max_{\\mathbf{D}}V(\\mathbf{D},\\mathbf{G})=\\mathbf{G}_{loss}-\\mathbf{D}_{loss}. \\tag{9}\\]
After the training process, we take the output of the generator and fill it into the original feature matrix according to the model learning requirements, so as to realize the data enhancement for the minority class.
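One alternating training step following Eqs. 5-8 could look like the sketch below (assuming a generator `G` and a sigmoid-terminated discriminator `D`, the diagonal input `F_dd` from the feature enhancement step, and illustrative optimizers):

```python
import torch
import torch.nn.functional as F_nn

def gan_step(G, D, F_real, F_dd, opt_g, opt_d):
    """One alternating update of generator and discriminator, cf. Eqs. 5-8."""
    # generator step (Eq. 5): try to fool the discriminator
    opt_g.zero_grad()
    fake = G(F_dd)                                   # generated feature rows
    g_loss = F_nn.binary_cross_entropy(
        D(fake), torch.ones(fake.shape[0], 1))
    g_loss.backward()
    opt_g.step()

    # discriminator step (Eqs. 6-8): separate real from generated features
    opt_d.zero_grad()
    d_real = F_nn.binary_cross_entropy(
        D(F_real), torch.ones(F_real.shape[0], 1))
    d_fake = F_nn.binary_cross_entropy(
        D(fake.detach()), torch.zeros(fake.shape[0], 1))
    d_loss = d_real + d_fake
    d_loss.backward()
    opt_d.step()
    return g_loss.item(), d_loss.item()
```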
### _Bayesian Layer and BLGCN_
#### III-C1 Bayesian Deep Learning
In general, for a Bayesian deep neural network, the predicted value can be calculated by Eq. 2, and the calculation depends on approximate inference of the posterior distribution \(p(\omega\mid\mathbf{Y},\mathbf{X})\) expressed in Eq. 3. We take the approach of variational inference and use a simple distribution \(q(\omega)\) to approximate the posterior distribution \(p(\omega\mid\mathbf{Y},\mathbf{X})\).
In order to make \\(q(\\omega)\\) fit \\(p(\\omega\\mid\\mathbf{Y},\\mathbf{X})\\) as closely as possible, we only need to minimize Kullback-Leibler divergence \\(KL[q(\\omega)\\parallel p(\\omega\\mid\\mathbf{Y},\\mathbf{X})]\\)[44].
Fig. 1: GAN framework.
To present uncertainty, we assume that both the approximate posterior and the prior follow Gaussian distributions, i.e.,
\\[q(\\omega)=q(\\omega\\mid\\theta),\\theta=(\\mu,\\sigma) \\tag{10}\\]
\\[p(\\omega_{i}\\mid\\boldsymbol{Y},\\boldsymbol{X})\\sim\\mathrm{N}(\\omega_{i}\\mid \\mu_{i},{\\sigma_{i}}^{2}). \\tag{11}\\]
The original problem can be transformed into optimizing the parameter
\\[\\begin{split}\\theta^{*}&=\\operatorname*{arg\\, min}_{\\theta}KL[q(\\omega\\mid\\theta)\\parallel p(\\omega\\mid\\boldsymbol{Y}, \\boldsymbol{X}))]\\\\ &=\\operatorname*{arg\\,min}_{\\theta}\\mathbb{E}_{q(\\omega|\\theta)}[ \\log\\frac{q(\\omega\\mid\\theta)}{p(\\omega\\mid\\boldsymbol{Y},\\boldsymbol{X})}] \\\\ &=\\operatorname*{arg\\,min}_{\\theta}\\mathbb{E}_{q(\\omega|\\theta)}[ \\log\\frac{q(\\omega\\mid\\theta)p(\\boldsymbol{Y}\\mid\\boldsymbol{X})}{p(\\omega \\mid\\boldsymbol{Y},\\boldsymbol{X})p(\\omega)}]\\\\ &=\\operatorname*{arg\\,min}_{\\theta}\\mathbb{E}_{q(\\omega|\\theta)}[ \\log\\frac{q(\\omega\\mid\\theta)}{p(\\boldsymbol{Y}\\mid\\boldsymbol{X},\\omega)p( \\omega)}]\\end{split} \\tag{12}\\]
and expressed as minimizing the loss function
\\[Loss=KL[q(\\omega\\mid\\theta)\\parallel p(\\omega)]-\\mathbb{E}_{q(\\omega)}\\log(p (\\boldsymbol{Y}\\mid\\boldsymbol{X},\\omega)). \\tag{13}\\]
We apply the reparameterization trick here [45]: for the weights \(\omega_{i}\sim\mathrm{N}(\mu_{i},{\sigma_{i}}^{2})\), we have \(\omega_{i}=\mu_{i}+\sigma_{i}\odot\epsilon_{i}\), where \(\epsilon_{i}\sim\mathrm{N}(0,1)\) and \(\odot\) represents the Hadamard product. Then for the backpropagation of the loss function Eq. 13, we have
\\[\\begin{split}&\\frac{\\partial}{\\partial\\theta}\\mathbb{E}_{q( \\epsilon)}\\{\\log[\\frac{q(\\omega\\mid\\theta)}{p(\\boldsymbol{Y}\\mid\\boldsymbol{X },\\omega)p(\\omega)}]\\}\\\\ &=\\mathbb{E}_{q(\\epsilon)}\\{\\frac{\\partial}{\\partial\\theta}\\log[ \\frac{q(\\omega\\mid\\theta)}{p(\\boldsymbol{Y}\\mid\\boldsymbol{X},\\omega)p( \\omega)}]\\}.\\end{split} \\tag{14}\\]
In order to let the free parameter range over the entire real axis while keeping the standard deviation positive, we reparameterize it as \(\sigma=\log(1+e^{\rho})\), so that \(\theta=(\mu,\rho)\).
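The two tricks combine into a single sampling routine (an illustrative sketch):

```python
import torch

def sample_weight(mu, rho):
    """Draw w = mu + sigma * eps with sigma = log(1 + e^rho) (softplus),
    so rho may range over the whole real axis while sigma stays positive."""
    sigma = torch.log1p(torch.exp(rho))  # softplus reparameterization of sigma
    eps = torch.randn_like(mu)           # eps ~ N(0, 1)
    return mu + sigma * eps
```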
#### III-C2 Bayesian Layer
The Bayesian neural networks used in current research are relatively fixed in form and have not been effectively combined with convolutional neural networks [45]. Researchers usually simply migrate the Bayesian forward propagation method and loss function to traditional neural networks, and neglect adjusting the network structure for the specific task. Since the weight matrices of a Bayesian neural network follow specific distributions, whether handled with the variational inference method or the MCMC method, we need to perform multiple samplings in the forward propagation with approximate calculation, which incurs a great time cost. And when dealing with the hyperspectral image classification task, directly applying Bayesian theory to completely transform graph convolutional neural networks limits the room to fine-tune the network framework and improve the classification results.
Therefore, we propose the concept of the Bayesian layer. We define the Bayesian layer as a convolutional layer in which all weight parameters and biases are represented in distribution form, as shown in Fig. 2. In the process of forward propagation, the input feature matrix is divided into several row vectors \(\overrightarrow{x_{n}}\), and the Bayesian matrix can be seen as a set of column vectors \(\overrightarrow{h_{m}}\), assembled from the weight parameters \(\mu\) and \(\sigma\). In the matrix multiplication operation, we have \(\overrightarrow{y_{nm}}=\overrightarrow{x_{n}}\times\overrightarrow{h_{m}}\).
At the same time, we stipulate that the loss function must have the form of a KL divergence which can be optimized by the variational inference method. By analyzing the form of the Bayesian convolutional neural network loss function, we find that the likelihood term can be replaced by a multi-classification loss. The reason is that, given the model output \(y\), the likelihood function \(L(\omega\mid y)\) of the parameters \(\omega\) is numerically equal to the probability \(p(Y=y\mid\omega)\) of the output \(y\) given the parameters \(\omega\). In each forward propagation, the weight parameters of the non-Bayesian layers remain unchanged while the Bayesian layers are sampled multiple times, so each time the gradient is updated in backpropagation, the existence of non-Bayesian layers does not influence the derivative computation for the Bayesian layers.
As a result, if we use the classification loss function of a traditional neural network, such as the negative log-likelihood loss (NLL loss), to represent the likelihood term, it provides the theoretical basis for the realization of the Bayesian layer. Here we can rewrite Eq. 13 as follows:
\\[\\begin{split} Loss&=\\sum_{i}\\log q(\\omega_{i}\\mid \\theta_{i})-\\sum_{i}\\log p(\\omega_{i})\\\\ &-\\sum_{j}\\log p(y_{j}\\mid\\omega,x_{j})\\\\ &=\\sum_{i}\\log q(\\omega_{i}\\mid\\theta_{i})-\\sum_{i}\\log p(\\omega _{i})-nll\\_loss.\\end{split} \\tag{15}\\]
According to the former demonstration, when updating the weight parameters of a convolutional neural network with Bayesian layers by gradient calculation and backpropagation, the prior distribution loss \(\sum_{i}\log p(\omega_{i})\) and the posterior distribution loss \(\sum_{i}\log q(\omega_{i}\mid\theta_{i})\) only relate to the weights of the Bayesian layers, while the multi-classification loss relates to all weight parameters of the neural network. Therefore, the parameter updates of the Bayesian layers and the non-Bayesian layers can be performed sequentially within the same process without interfering with each other.
Meanwhile, it also gives us more choices in diversification. Facing various tasks, we can choose different insertion positions for the Bayesian layer, which achieves uncertainty-quantified output while ensuring that the loss function structure of the original neural network is not damaged.
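A minimal Bayesian layer in PyTorch, with the prior/posterior terms of Eq. 15 exposed so that the multi-classification loss of the surrounding network can simply be added; this is our own sketch under a standard-normal prior assumption, not the authors' released code:

```python
import math
import torch
import torch.nn as nn

class BayesianLayer(nn.Module):
    """Linear layer whose weights are sampled from N(mu, sigma^2) per forward."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(in_dim, out_dim))
        self.rho = nn.Parameter(torch.full((in_dim, out_dim), -5.0))

    def forward(self, x):
        sigma = torch.log1p(torch.exp(self.rho))
        w = self.mu + sigma * torch.randn_like(self.mu)  # reparameterized sample
        # log q(w | theta) and log p(w), assuming a N(0, 1) prior on each weight
        log_q = (-0.5 * ((w - self.mu) / sigma) ** 2
                 - torch.log(sigma) - 0.5 * math.log(2 * math.pi)).sum()
        log_p = (-0.5 * w ** 2 - 0.5 * math.log(2 * math.pi)).sum()
        self.kl = log_q - log_p          # first two terms of Eq. 15
        return x @ w
```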
#### III-C3 BLGCN
Fig. 2: Bayesian layer.
We combine the idea of graph convolution with Bayesian layers, where the output of each Bayesian layer is multiplied with the preprocessed adjacency matrix to fuse the spatial features of hyperspectral images. More specifically, the adjacency matrix \(\mathbf{A}\) obtained from the preprocessing is renormalized with the method proposed by Kipf and Welling [40]. To generalize the formulation of \(\hat{\mathbf{A}}_{G}\), we perform the trick
\\[\\hat{\\mathbf{A}}_{G}=\\hat{\\mathbf{D}}^{-\\frac{1}{2}}\\hat{\\mathbf{A}}\\hat{ \\mathbf{D}}^{-\\frac{1}{2}} \\tag{16}\\]
with \\(\\hat{\\mathbf{A}}=\\mathbf{A}+\\mathbf{I}\\) and \\(\\hat{\\mathbf{D}}_{ii}=\\sum_{j}\\hat{\\mathbf{A}}_{ij}\\). Here, \\(\\mathbf{I}\\) denotes the identity matrix which has proper size to calculate. Hence, for each Bayesian layer, the output can be expressed with Eq. 1 after sampling on the weight parameter matrix.
The main framework of BLGCN is shown in Fig. 3. We place a feature extraction module before the two Bayesian layer modules, which consists of two fully connected layers and a ReLU layer; it roughly extracts the data features and provides a solid foundation for the Bayesian processing.
It should be noted that for the data-enhanced hyperspectral data, in order to maintain the consistency of the spatial information, the adjacency matrix needs to be expanded. We assume that the newly generated superpixels have spatial characteristics similar to the original superpixels of the same class. We randomly match the generated superpixels with the initial ones and give them the same adjacency relationships as their matches, thereby regenerating a new adjacency matrix.
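Putting the pieces together, the BLGCN backbone of Fig. 3 can be sketched as follows, reusing `BayesianLayer` and `renormalize` from the sketches above (hidden sizes and the `kl` bookkeeping are placeholders of ours):

```python
import torch
import torch.nn as nn

class BLGCN(nn.Module):
    """Feature extractor (two FC layers + ReLU) followed by two Bayesian
    graph-convolution steps, cf. Fig. 3."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.extract = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, hid_dim))
        self.bayes1 = BayesianLayer(hid_dim, hid_dim)
        self.bayes2 = BayesianLayer(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = self.extract(x)
        h = torch.relu(a_hat @ self.bayes1(h))  # Bayesian graph convolution
        return a_hat @ self.bayes2(h)           # class logits per superpixel

    def kl(self):
        # valid after a forward pass has populated the per-layer KL terms
        return self.bayes1.kl + self.bayes2.kl
```

The total loss of Eq. 15 is then the sum of `model.kl()` and a standard multi-classification loss (e.g., NLL loss) over the labeled nodes.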
### _Training Strategy based on Bayesian Method_
The prediction results given by deep learning models are not always reliable, and some application fields of hyperspectral image classification have high requirements for confidence. By modeling uncertainty with the output variance, we dynamically evaluate the confidence level of the training results and decide whether to terminate the training process.
From the central limit theorem, we know that if the network output is sampled repeatedly from the learned weight distribution, the mean of the sampled validation accuracies approaches a Gaussian distribution, whose variance yields a confidence interval for the validation accuracy; as stated above, training is terminated once the upper limit of this interval reaches the preset threshold.
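A sketch of this stopping rule, reusing the model signature from the BLGCN sketch above (the 1.96 factor, corresponding to a 95% Gaussian interval, and the sample count are our assumptions):

```python
import math
import torch

@torch.no_grad()
def should_stop(model, a_hat, x, y_val, idx_val, thresh=0.95, n=20):
    """Stop when the upper limit of the confidence interval on validation
    accuracy reaches the threshold."""
    accs = []
    for _ in range(n):                      # n weight samples of the network
        pred = model(a_hat, x)[idx_val].argmax(dim=-1)
        accs.append((pred == y_val).float().mean().item())
    accs = torch.tensor(accs)
    upper = accs.mean() + 1.96 * accs.std() / math.sqrt(n)  # CLT interval
    return upper.item() >= thresh
```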
## IV Experiments
### _Datasets_
The Indian Pines dataset was gathered by the AVIRIS sensor over northwestern Indiana in 1992. It contains 145\(\times\)145 pixels in 224 spectral bands; 24 bands corrupted by water absorption are removed, and the remaining 200 spectral bands are used for model classification. This dataset contains 16 kinds of landscapes. After the image preprocessing, the pixels in the dataset are aggregated into several superpixels. The number of superpixels in each landscape class and their corresponding training and testing samples are listed in Table I. The ground-truth map is shown in Fig. 4.
The University of Pavia dataset was acquired over the University of Pavia, Italy, in 2001. It contains 610\(\times\)340 pixels in 103 spectral bands and covers a wavelength range from 0.43 \(\mu\)m to 0.86 \(\mu\)m after removing the noisy bands. Table III lists the details of the superpixels and the amounts of training and testing samples. Fig. 5 shows the ground-truth map of the dataset.
The Salinas dataset was captured over the Salinas Valley, California. This dataset contains 512\(\times\)217 pixels in 224 bands. 20 bands absorbed by water vapor are removed, and the remaining 204 spectral bands are used for classification. Its ground-truth map is shown in Fig. 6, and the number of superpixels with their information is listed in Table II.
The University of Houston dataset is provided by the IEEE GRSS Data Fusion Contest in 2013. This dataset consists of 349\\(\\times\\)1905 pixels and contains 144 spectral bands covering the range from 364 nm to 1046 nm. The number of superpixels in each landscape class and their corresponding training and testing samples are given in Table IV with its ground-truth map shown in Fig. 7.
In our experiments, four metrics are adopted to quantitatively evaluate the classification performance: per-class accuracy, overall accuracy (OA), average accuracy (AA), and the kappa coefficient (Kappa).
### _Experimental Settings_
In our experiments, the proposed model is implemented via PyTorch with the Adam optimizer. We train our model with a 0.2 dropout rate and a 5\(\times\)10\({}^{-4}\) weight decay rate. We adopt a learning rate decay strategy in the training process: the initial learning rate is set to 1\(\times\)10\({}^{-3}\), and we use a multistep learning rate schedule with gamma fixed to 0.9 and three milestones at 1500, 2500, and 3500. The threshold parameter for assigning pseudo labels to unlabeled nodes is set to 0.9; besides the training samples, all the remaining samples are distributed to the validation set.
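These settings translate directly into PyTorch; the model, forward pass, and loss below are stand-ins (only the optimizer and schedule values come from the settings above):

```python
import torch

model = torch.nn.Linear(31, 16)           # placeholder for the BLGCN model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[1500, 2500, 3500], gamma=0.9)

for step in range(4000):                  # illustrative training budget
    optimizer.zero_grad()
    out = model(torch.randn(8, 31))       # stand-in forward pass
    loss = out.pow(2).mean()              # stand-in for the total loss (Eq. 15)
    loss.backward()
    optimizer.step()
    scheduler.step()                      # decay lr by 0.9 at each milestone
```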
For the SLIC algorithm in image preprocessing, we set the compactness parameter to 0.08 for all experiments. In our GAN algorithm in the data augmentation process, the initial standard deviations of the generator weight matrix and the discriminator weight matrix are set to 1\(\times\)10\({}^{-5}\) and 0.01, respectively. Because of the data characteristics after the normalization operation in data preprocessing, the learning rate of our GAN network is set to 1\(\times\)10\({}^{-7}\). For a landscape class whose superpixel sample size is less than 0.02 times the
Fig. 6: Salinas. (a) False color image. (b) Ground truth map.
Fig. 7: Houston University. (a) False color image. (b) Ground truth map.
superpixels sample size of the largest class sample, we define it as minority class and perform the data augment process towards it, the generated samples are filled into the original dataset until the sample size of the minority class reaches the threshold of 0.05 times of the superpixels sample size of the largest class sample.
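This thresholding logic reads as follows; the sketch assumes class sizes are tracked in a dictionary and that `generator.sample(c)` draws a new superpixel of class `c` from the trained GAN (both names are hypothetical):

```python
def augment_minorities(dataset, generator, class_sizes, low=0.02, target=0.05):
    """Identify minority classes (< low x largest class) and fill in generated
    samples until each reaches target x the size of the largest class."""
    largest = max(class_sizes.values())
    for c in [c for c, n in class_sizes.items() if n < low * largest]:
        while class_sizes[c] < target * largest:
            dataset.add(generator.sample(c))  # hypothetical GAN sampler
            class_sizes[c] += 1
```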
### _Classification Results_
To evaluate the effectiveness of the proposed BLGCN model, we conduct comparative experiments against various algorithms on the selected datasets. To make the results reliable, several state-of-the-art algorithms are selected as competitors. Among traditional machine learning algorithms, we select the support vector machine (SVM) [52]. Commonly used SVM kernels include the linear, polynomial, Gaussian and sigmoid kernels; we let the SVM search over a given parameter set and then select appropriate hyperparameters for the classification task. Among deep learning methods based on convolutional neural networks, we select the 3D-CNN [53], which performs well on HSI datasets. Among GCN-based models, we select several algorithms that have been widely used in recent years, such as GCN [40], the spectral-spatial graph convolutional network (S\({}^{2}\)GCN) [31], and the multiscale dynamic GCN (MDGCN) [29]. These methods build on the GCN framework with targeted improvements and have achieved strong classification results on HSI datasets. We perform 30 independent experiments with each algorithm on each dataset and record the mean and standard deviation of the results. It is worth noting that, since the classification accuracy of the 3D-CNN model is quite poor on the HSI datasets with a 10% training set ratio, we only report its results with a 30% training set ratio on the last three datasets.
The classification results on the Indian Pines dataset demonstrate the effectiveness of our algorithm. The traditional algorithm (SVM) and the original GCN suffer from serious misclassification of minority classes, and the accuracy of the fully supervised 3D-CNN method is also relatively low; when the proportion of training samples is increased from 10% to 30%, its classification performance improves significantly. Compared with other GCN-based methods, our method has a clear advantage in distinguishing minority classes, i.e., Class 7 and Class 9, while maintaining high classification accuracy on the other classes. The relevant results are listed in Table V.
Table VI shows the classification performance of the different algorithms on the Pavia University dataset. Among these methods, our algorithm still attains the highest classification accuracy and achieves 100% recognition accuracy in several categories. It is worth noting that, owing to the relatively balanced sample sizes of the PU dataset, the 3D-CNN model also reaches a good classification performance after the training set ratio is increased to 30%, obtaining the highest accuracy in some categories. For spatially scattered classes such as Class 4, the GCN-based classification methods fail to achieve good results: their spatial adjacency relationships are relatively sparse, which makes such classes easy to misclassify with GCN methods.
The classification results on the Salinas dataset are listed in Table VII. Compared with the other algorithms, BLGCN achieves essentially the highest classification accuracy in all categories while maintaining high stability. We also find that the SVM method attains higher classification accuracy than some deep learning algorithms, which shows that traditional classification methods remain competitive on datasets with sufficient and balanced samples such as Salinas.
We also apply BLGCN to the Houston University dataset to compare classification performance. As shown in Table VIII, the proposed algorithm greatly improves the classification accuracy, has a clear advantage over the other algorithms, and performs well across the various classes. In addition, the GCN-based comparison algorithms yield relatively low accuracies on the majority classes, such as Classes 7 \(\sim\) 12, due to the small ratio of training samples. Owing to the strong generalization ability of the Bayesian layer, our model successfully alleviates this problem.
### _Ablation Study_
We design ablation experiments for the functional modules implemented in BLGCN and verify that each module is necessary to achieve high classification accuracy. The effect of the dynamic control strategy is also validated on the Indian Pines dataset.
#### IV-D1 Graph convolution operation
By using graph convolution, our algorithm effectively extracts the spatial adjacency relationships between the superpixels of hyperspectral images, which greatly improves classification performance. In our method, the key operation for extracting spatial relationships is the convolution of the feature matrix with the adjacency matrix. We therefore verify the necessity of the graph convolution module by removing the adjacency matrix from the convolution operation. The comparison results are listed in Table IX: after removing the graph convolution operation, the overall classification results are greatly affected and the accuracies drop significantly.
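For concreteness, a minimal sketch of the two variants is given below, using the standard propagation rule of GCN [40]; the exact normalization of the adjacency matrix is not restated here, so `adj_hat` simply denotes the (normalized) adjacency:

```python
import torch

def gcn_layer(x, adj_hat, weight):
    """Graph convolution: aggregate neighbor features through the adjacency
    matrix before the linear projection."""
    return torch.relu(adj_hat @ x @ weight)

def ablated_layer(x, adj_hat, weight):
    """Ablated variant: the adjacency multiplication is removed, so each
    superpixel is transformed in isolation without spatial context."""
    return torch.relu(x @ weight)  # adj_hat kept only for a matching signature
```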
#### IV-D2 Minority class data generation module
To address the low accuracy on minority classes that often occurs in practical classification tasks, our algorithm uses the generative adversarial network method to effectively expand the minority classes. We therefore design a comparative experiment in which the minority class generation module is removed and the original data are fed directly into the classification module. The results are listed in Table IX: after the minority class data generation module is removed, the classification accuracies of minority Classes 1, 7 and 9 drop significantly, and these classes can no longer be effectively classified.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline \\hline Class & SVM & 3D-CNN(0.1) & 3D-CNN(0.3) & GCN & S\\({}^{2}\\)GCN & MDGCN & BLGCN \\\\ \\hline
1 & 98.80\\(\\pm\\)0.10 & 98.38\\(\\pm\\)0.04 & 97.44\\(\\pm\\)0.55 & 97.90\\(\\pm\\)0.43 & 98.26\\(\\pm\\)0.02 & **100.00\\(\\pm\\)0.00** \\\\
2 & 99.80\\(\\pm\\)0.11 & 98.88\\(\\pm\\)0.06 & 99.94\\(\\pm\\)0.13 & 98.07\\(\\pm\\)0.29 & 98.18\\(\\pm\\)0.13 & **100.00\\(\\pm\\)0.00** \\\\
3 & 98.94\\(\\pm\\)0.51 & 99.19\\(\\pm\\)0.32 & 96.88\\(\\pm\\)2.42 & 96.06\\(\\pm\\)0.55 & 98.08\\(\\pm\\)0.15 & **100.00\\(\\pm\\)0.00** \\\\
4 & 99.25\\(\\pm\\)0.16 & 99.38\\(\\pm\\)0.17 & 97.42\\(\\pm\\)0.62 & 97.99\\(\\pm\\)0.51 & 95.81\\(\\pm\\)0.87 & **100.00\\(\\pm\\)0.00** \\\\
5 & 99.00\\(\\pm\\)0.00 & 99.11\\(\\pm\\)0.38 & 98.53\\(\\pm\\)0.17 & 96.46\\(\\pm\\)0.60 & 96.27\\(\\pm\\)1.32 & **99.11\\(\\pm\\)1.04** \\\\
6 & 99.91\\(\\pm\\)0.12 & 99.55\\(\\pm\\)0.05 & 100.00\\(\\pm\\)0.00 & 98.21\\(\\pm\\)0.19 & 97.40\\(\\pm\\)1.02 & **100.00\\(\\pm\\)0.00** \\\\
7 & 99.88\\(\\pm\\)0.15 & 99.50\\(\\pm\\)0.11 & 99.72\\(\\pm\\)0.27 & 97.95\\(\\pm\\)0.24 & 96.49\\(\\pm\\)1.56 & **100.00\\(\\pm\\)0.00** \\\\
8 & 84.77\\(\\pm\\)0.24 & 77.78\\(\\pm\\)11.49 & 97.68\\(\\pm\\)0.43 & 69.88\\(\\pm\\)5.89 & 91.18\\(\\pm\\)2.96 & **99.19\\(\\pm\\)1.04** \\\\
9 & 99.56\\(\\pm\\)0.10 & 99.75\\(\\pm\\)0.05 & 99.42\\(\\pm\\)0.01 & 97.22\\(\\pm\\)0.97 & 98.28\\(\\pm\\)0.79 & **100.00\\(\\pm\\)0.00** \\\\
10 & 96.23\\(\\pm\\)0.63 & 97.57\\(\\pm\\)0.05 & 95.84\\(\\pm\\)0.49 & 89.95\\(\\pm\\)2.41 & 96.62\\(\\pm\\)1.69 & **97.67\\(\\pm\\)1.93** \\\\
11 & 97.81\\(\\pm\\)1.24 & 97.66\\(\\pm\\)0.50 & 99.08\\(\\pm\\)1.94 & 96.90\\(\\pm\\)1.66 & 97.68\\(\\pm\\)2.04 & **99.55\\(\\pm\\)0.63** \\\\
12 & 97.41\\(\\pm\\)0.61 & 98.69\\(\\pm\\)0.12 & 100.00\\(\\pm\\)0.00 & 98.44\\(\\pm\\)0.96 & 97.39\\(\\pm\\)2.57 & **100.00\\(\\pm\\)0.00** \\\\
13 & 99.31\\(\\pm\\)0.60 & 98.61\\(\\pm\\)0.68 & 100.00\\(\\pm\\)0.00 & 96.73\\(\\pm\\)1.02 & 95.91\\(\\pm\\)3.19 & **100.00\\(\\pm\\)0.00** \\\\
14 & 97.41\\(\\pm\\)0.64 & 97.86\\(\\pm\\)0.93 & 95.72\\(\\pm\\)4.65 & 94.67\\(\\pm\\)1.87 & 96.23\\(\\pm\\)0.89 & **98.97\\(\\pm\\)0.00** \\\\
15 & 71.80\\(\\pm\\)0.81 & 76.68\\(\\pm\\)5.87 & 75.94\\(\\pm\\)9.67 & 69.56\\(\\pm\\)3.58 & **94.06\\(\\pm\\)2.09** & 91.79\\(\\pm\\)1.22 \\\\
16 & **99.21\\(\\pm\\)0.21** & 94.47\\(\\pm\\)0.18 & 93.69\\(\\pm\\)3.79 & 95.81\\(\\pm\\)1.03 & 95.58\\(\\pm\\)1.45 & 94.32\\(\\pm\\)8.00 \\\\ \\hline OA & 92.63\\(\\pm\\)1.71 & 91.03\\(\\pm\\)3.10 & 94.75\\(\\pm\\)1.66 & 87.40\\(\\pm\\)1.24 & 96.52\\(\\pm\\)2.81 & **98.30\\(\\pm\\)0.22** \\\\ AA & 95.90\\(\\pm\\)0.03 & 95.88\\(\\pm\\)1.09 & 96.22\\(\\pm\\)1.33 & 93.14\\(\\pm\\)0.10 & 96.52\\(\\pm\\)0.92 & **99.02\\(\\pm\\)0.24** \\\\ Kappa & 91.80\\(\\pm\\)0.20 & 90.05\\(\\pm\\)3.39 & 94.13\\(\\pm\\)1.86 & 86.12\\(\\pm\\)2.01 & 95.27\\(\\pm\\)0.96 & **98.10\\(\\pm\\)0.25** \\\\ \\hline \\hline \\end{tabular}
\\end{table} TABLE V: CLASSIFICATION results on Indian Pines dataset
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Class & SVM & 3D-CNN(0.3) & GCN & S\({}^{2}\)GCN & MDGCN & BLGCN \\ \hline \multicolumn{7}{c}{[per-class results truncated in the source]} \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Classification results on Pavia University dataset
#### IV-D3 Dynamic control training module guided by confidence intervals
Owing to the inherent uncertainty of the Bayesian neural network, we quantify the output uncertainty by calculating the confidence interval of the classification result and use it to dynamically control whether the training process proceeds. We design a comparative experiment on this module by removing the uncertainty calculation of the BLGCN classification results and the two thresholds from the training process, and we plot the validation accuracy against the training batch in Fig. 8. Given a 95% confidence level, the model automatically stops learning once the confidence interval reaches the second threshold, which saves a substantial amount of training time.
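A sketch of the control loop is shown below; the trainer interface is hypothetical, and the role assigned to the first threshold is our assumption since the excerpt does not spell it out:

```python
def train_with_dynamic_control(trainer, t1=0.90, t2=0.95):
    """Stop training early once the lower bound of the 95% confidence
    interval of the validation accuracy passes the second threshold."""
    for step in trainer.steps():                       # hypothetical iterator
        mean, half = trainer.validation_accuracy_ci()  # 95% CI half-width
        if mean - half >= t2:
            break                          # confident enough: stop early
        if mean - half >= t1:
            trainer.evaluate_every(1)      # monitor more closely near the end
```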
## V Conclusion
In this paper, we propose a novel BLGCN method for HSI classification. The proposed Bayesian layer is inserted into GCN-based networks to quantify the uncertainty of the output results. Since only a small part of the network carries distributional parameters, the method maintains high classification accuracy and generalization ability. Meanwhile, newly designed GANs are trained on minority class data to enlarge their sample size and address the class imbalance problem in HSI datasets. To improve training efficiency, a dynamic control strategy terminates the training process early once the confidence interval reaches the threshold. Experimental results on four open-source HSI datasets demonstrate the superiority of the proposed BLGCN. In addition, ablation studies verify the contributions of the individual modules, including the graph convolution operation, the data generation module and the dynamic control strategy.
In the future, we will further investigate applications of the Bayesian layer, including image classification and graph node prediction. Moreover, we will continue to develop the theoretical foundations of the Bayesian layer and make it applicable to a wider range of neural networks.
Fig. 8: Ablation Study of Dynamic Control
## References
* [1] L. He, J. Li, C. Liu, and S. Li, \"Recent advances on spectral-spatial hyperspectral image classification: An overview and new guidelines,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 3, pp. 1579-1597, Mar. 2018.
* [2] X. X. Zhu, D. Tuia, L. Mou, G.-S. Xia, L. Zhang, F. Xu, and F. Fraundorfer, \"Deep learning in remote sensing: A comprehensive review and list of resources,\" _IEEE Geosci. Remote Sens. Mag._, vol. 5, no. 4, pp. 8-36, Dec. 2017.
* [3] L. Zhang, L. Zhang, and B. Du, \"Deep learning for remote sensing data: A technical tutorial on the state of the art,\" _IEEE Geosci. Remote Sens. Mag._, vol. 4, no. 2, pp. 22-40, Jun. 2016.
* [4] C.-I. Chang, _Hyperspectral data exploitation: theory and applications_. John Wiley & Sons, 2007.
* [5] J. Shlens, \"A tutorial on principal component analysis,\" _arXiv preprint arXiv:1404.1100_, 2014.
* [6] M. Dalla Mura, A. Villa, J. A. Benediktsson, J. Chanussot, and L. Bruzzone, \"Classification of hyperspectral images by using extended morphological attribute profiles and independent component analysis,\" _IEEE Geosci. Remote Sens. Lett._, vol. 8, no. 3, pp. 542-546, May. 2011.
* [7] S. T. Roweis and L. K. Saul, \"Nonlinear dimensionality reduction by locally linear embedding,\" _Science_, vol. 290, no. 5500, p. 2323, Dec. 2000.
* [8] X. He and P. Niyogi, \"Locality preserving projections,\" in _Proc. 28th Int. Conf. Neural Int. Process. Syst._, 2004, pp. 153-160.
* [9] W. R. Tobler, \"A computer movie simulating urban growth in the detroit region,\" _Econ. Geography_, vol. 46, pp. 234-240, Jun. 1970.
* [10] B. Zhang, S. Li, X. Jia, L. Gao, and M. Peng, \"Adaptive markov random field approach for classification of hyperspectral imagery,\" _IEEE Geosci. Remote Sens. Lett._, vol. 8, no. 5, pp. 973-977, Sep. 2011.
* [11] L. Sun, Z. Wu, J. Liu, L. Xiao, and Z. Wei, \"Supervised spectral-spatial hyperspectral image classification with weighted markov random fields,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 3, pp. 1490-1503, Mar. 2015.
* [12] P. Zhong and R. Wang, \"Modeling and classifying hyperspectral imagery by crfs with sparse higher order potentials,\" _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 2, pp. 688-705, Feb. 2011.
* [13] F. Li, L. Xu, P. Siva, A. Wong, and D. A. Clausi, \"Hyperspectral image classification with limited labeled training samples using enhanced ensemble learning and conditional random fields,\" _IEEE J. Sel. Topics Appl. Earth Observ._, vol. 8, no. 6, pp. 2427-2438, Jun. 2015.
* [14] J. Zhao, Y. Zhong, Y. Wu, L. Zhang, and H. Shu, \"Sub-pixel mapping based on conditional random fields for hyperspectral remote sensing imagery,\" _IEEE J. Sel. Topics Appl. Earth Observ._, vol. 9, no. 6, pp. 1049-1060, Sep. 2015.
* [15] A. Plaza, P. Martinez, R. Perez, and J. Plaza, \"A new approach to mixed pixel classification of hyperspectral imagery based on extended morphological profiles,\" _Pattern Recognit._, vol. 37, no. 6, pp. 1097-1116, 2004.
* [16] J. Li, X. Huang, P. Gamba, J. M. Bioucas-Dias, L. Zhang, J. A. Benediktsson, and A. Plaza, \"Multiple feature learning for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 53, no. 3, pp. 1592-1606, Mar. 2015.
* [17] B. Demir and L. Bruzzone, \"Histogram-based attribute profiles for classification of very high resolution remote sensing images,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 4, pp. 2096-2107, Apr. 2016.
* [18] Y. Chen, Z. Lin, X. Zhao, G. Wang, and Y. Gu, \"Deep learning-based classification of hyperspectral data,\" _IEEE J. Sel. Topics Appl. Earth Observ._, vol. 7, no. 6, pp. 2094-2107, Jun. 2014.
* [19] Y. Chen, X. Zhao, and X. Jia, \"Spectral-spatial classification of hyperspectral data based on deep belief network,\" _IEEE J. Sel. Topics Appl. Earth Observ._, vol. 8, no. 6, pp. 2381-2392, Jun. 2015.
* [20] Y. Chen, H. Jiang, C. Li, X. Jia, and P. Ghamisi, \"Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 10, pp. 6232-6251, Oct. 2016.
* [21] H. Lee and H. Kwon, \"Going deeper with contextual cnn for hyperspectral image classification,\" _IEEE Trans. Image Process._, vol. 26, no. 10, pp. 4843-4855, Oct. 2017.
* [22] M. Zhang, W. Li, and Q. Du, \"Diverse region-based cnn for hyperspectral image classification,\" _IEEE Trans. Image Process._, vol. 27, no. 6, pp. 2623-2634, Jun. 2018.
* [23] Z. Gong, P. Zhong, Y. Yu, W. Hu, and S. Li, \"A cnn with multiscale convolution and diversified metric for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 6, pp. 3599-3618, Jun. 2019.
* [24] W. Li, G. Wu, F. Zhang, and Q. Du, \"Hyperspectral image classification using deep pixel-pair features,\" _IEEE Trans. Geosci. Remote Sens._, vol. 55, no. 2, pp. 844-853, Feb. 2017.
* [25] A. Romero, C. Gatta, and G. Camps-Valls, \"Unsupervised deep feature extraction for remote sensing image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 54, no. 3, pp. 1349-1362, Mar. 2016.
* [26] L. Mou, P. Ghamisi, and X. Z. Zhu, \"Unsupervised spectral-spatial feature learning via deep residual conv-deconv network for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 1, pp. 391-406, Jan. 2018.
* [27] M. Zhang, M. Gong, Y. Mao, J. Li, and Y. Wu, \"Unsupervised feature extraction in hyperspectral images based on wasserstein generative adversarial network,\" _IEEE Trans. Geosci. Remote Sens._, vol. 57, no. 5, pp. 2669-2688, May. 2019.
* [28] P. Sellars, A. I. Aviles-Rivero, and C.-B. Schonlieb, \"Superpixel contracted graph-based learning for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 6, pp. 4180-4193, Jan. 2020.
* [29] S. Wan, C. Gong, P. Zhong, B. Du, L. Zhang, and J. Yang, \"Multiscale dynamic graph convolutional network for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 5, pp. 3162-3177, Nov. 2019.
* [30] S. Wan, C. Gong, P. Zhong, S. Pan, G. Li, and J. Yang, \"Hyperspectral image classification with context-aware dynamic graph convolutional network,\" _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 1, pp. 597-612, May. 2020.
* [31] A. Qin, Z. Shang, J. Tian, Y. Wang, T. Zhang, and Y. Y. Tang, \"Spectral-spatial graph convolutional networks for semisupervised hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 16, no. 2, pp. 241-245, Sep. 2018.
* [32] L. Mou, X. Lu, X. Li, and X. X. Zhu, \"Nonlocal graph convolutional networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 58, no. 12, pp. 8246-8257, May. 2020.
* [33] X. He, Y. Chen, and P. Ghamisi, \"Dual graph convolutional network for hyperspectral image classification with limited training samples,\" _IEEE Trans. Geosci. Remote Sens._, Mar. 2021.
* [34] S. Wan, S. Pan, P. Zhong, X. Chang, J. Yang, and C. Gong, \"Dual interactive graph convolutional networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, May. 2021.
* [35] Y. Ding, J. Feng, Y. Chong, S. Pan, and X. Sun, \"Adaptive sampling toward a dynamic graph convolutional network for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, Dec. 2021.
* [36] J. Bai, B. Ding, Z. Xiao, L. Jiao, H. Chen, and A. C. Regan, \"Hyperspectral image classification based on deep attention graph convolutional network,\" _IEEE Trans. Geosci. Remote Sens._, Mar. 2021.
* [37] Q. Liu, L. Xiao, J. Yang, and Z. Wei, \"Cnn-enhanced graph convolutional network with pixel-and superpixel-level feature fusion for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 10, pp. 8657-8671, Nov. 2020.
* [38] D. Hong, L. Gao, J. Yao, B. Zhang, A. Plaza, and J. Chanussot, \"Graph convolutional networks for hyperspectral image classification,\" _IEEE Trans. Geosci. Remote Sens._, vol. 59, no. 7, pp. 5966-5978, Aug. 2020.
* [39] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \"Generative adversarial nets,\" _Advances in neural information processing systems_, vol. 27, 2014.
* [40] T. N. Kipf and M. Welling, \"Semi-supervised classification with graph convolutional networks,\" _arXiv preprint arXiv:1609.02907_, 2016.
* [41] D. Hong, L. Gao, X. Wu, J. Yao, and B. Zhang, \"Revisiting graph convolutional networks with mini-batch sampling for hyperspectral image classification,\" in _Proc. 11th Workshop Hyperspectral Image. Signal Process., Evol. Remote Sens. (WHISPERS)_, Mar. 2021, pp. 1-5.
* [42] J.-Y. Yang, H.-C. Li, Z.-C. Li, and T.-Y. Ma, \"Spatial-spectral tensor graph convolutional network for hyperspectral image classification,\" _2021 IEEE International Geoscience and Remote Sensing Symposium IGASS_. IEEE, 2021, pp. 2222-2225.
* [43] J. Chen, L. Jiao, X. Liu, L. Li, F. Liu, and S. Yang, "Automatic graph learning convolutional networks for hyperspectral image classification," _IEEE Trans. Geosci. Remote Sens._, vol. 60, pp. 1-16, Dec. 2021.
* [47] L. Zhu, Y. Chen, P. Ghamisi, and J. A. Benediktsson, "Generative adversarial networks for hyperspectral image classification," _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 9, pp. 5046-5063, Mar. 2018.
* [48] J. Yin, W. Li, and B. Han, \"Hyperspectral image classification based on generative adversarial network with dropblock,\" in _Proc. IEEE Int. Conf. Image Process. (ICIP)_, Sep. 2019, pp. 405-409.
* [49] H. Wang, C. Tao, J. Qi, H. Li, and Y. Tang, \"Semi-supervised variational generative adversarial networks for hyperspectral image classification,\" in _Proc. IGARSS_, Jul. 2019, pp. 9792-9794.
* [50] Y. Zhan, D. Hu, Y. Wang, and X. Yu, \"Semisupervised hyperspectral image classification based on generative adversarial networks,\" _IEEE Geoscience and Remote Sensing Letters_, vol. 15, no. 2, pp. 212-216, Dec. 2017.
* [51] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, \"Slic superpixels compared to state-of-the-art superpixel methods,\" _IEEE Trans. Pattern Anal. Mach. Intell._, vol. 34, no. 11, pp. 2274-2282, Nov. 2012.
* [52] B.-C. Kuo, C.-S. Huang, C.-C. Hung, Y.-L. Liu, and I.-L. Chen, \"Spatial information based support vector machine for hyperspectral image classification,\" in _Proc. IEEE IGARSS_, Jul. 2010, pp. 832-835.
* [53] A. B. Hamida, A. Benoit, P. Lambert, and C. B. Amar, "3-d deep learning approach for remote sensing image classification," _IEEE Trans. Geosci. Remote Sens._, vol. 56, no. 8, pp. 4420-4434, Apr. 2018.
Keywords: Bayesian layer; graph convolutional network; hyperspectral image classification; generative adversarial network.
from LiDAR Packets
Davi Frossard 1,2 Simon Suo 1,2 Sergio Casas 1,2 James Tu 1,2
**Rui Hu 1 Raquel Urtasun 1,2**
1 Uber Advanced Technologies Group 2 University of Toronto
{frossard, suo, sergio.casas, james.tu, rui.hu, urtasun}@uber.com

**Abstract.** Many modern robotics systems employ LiDAR as their main sensing modality due to its geometrical richness. Rolling shutter LiDARs are particularly common, in which an array of lasers scans the scene from a rotating base. Points are emitted as a stream of packets, each covering a sector of the \(360^{\circ}\) coverage. Modern perception algorithms wait for the full sweep to be built before processing the data, which introduces an additional latency. For typical 10Hz LiDARs this will be 100ms. As a consequence, by the time an output is produced, it no longer accurately reflects the state of the world. This poses a challenge, as robotics applications require minimal reaction times, such that maneuvers can be quickly planned in the event of a safety-critical situation. In this paper we propose StrObe, a novel approach that minimizes latency by ingesting LiDAR packets and emitting a stream of detections without waiting for the full sweep to be built. StrObe reuses computations from previous packets and iteratively updates a latent spatial representation of the scene, which acts as a memory, as new evidence comes in, resulting in accurate low-latency perception. We demonstrate the effectiveness of our approach on a large scale real-world dataset, showing that StrObe far outperforms the state-of-the-art when latency is taken into account, and matches the performance in the traditional setting.
## 1 Introduction
Perceiving the world is a critical task in modern robotics applications. Self-driving vehicles must first process sensory information to perform object detection and estimate the free space before attempting to plan a safe and comfortable maneuver towards the goal. LiDAR has become the main sensing modality in most self-driving vehicles due to the geometrical richness it provides. Most prevalent LiDAR sensors operate by collecting a rotating scan of the environment, typically completing revolutions at a 10Hz rate. However, as the sensor rotates, observations arrive as a stream of spatio-temporal points \((x,y,z,t)\) grouped in fine-grained _packets_, each spanning approximately 10ms. This gives rise to a rolling shutter effect shown in Figure 1, where objects in different locations are observed asynchronously.
Modern autonomous systems accumulate the LiDAR packets into a full \\(360^{\\circ}\\) sweep before running perception. This waiting time adds significant latency to the pipeline, particularly for objects that were seen in the earlier packets in the sweep. It also introduces an erroneous assumption that all observations in the full sweep are made synchronously. In reality, when the perception model receives the input, there is already a discrepancy between the outdated observations and the true state of the world, illustrated in Figure 1. Furthermore, there is a temporal discontinuity in the sweep where the earliest and the latest packets meet which creates artifacts in the point cloud.
For safety-critical applications like self-driving, even minimal delays may result in catastrophic outcomes. For example, in the presence of high-speed vehicles, building a sweep from a 10Hz LiDAR introduces a latency of 100ms, which translates to several meters of error in free space estimation. Having lower latency is crucial in safety-critical situations where the vehicle must quickly perceive and react to avoid harmful events. Therefore, it is important to process incoming sensory information with minimal latency.
Processing individual LiDAR packets can be challenging, since only a small sector of the scene is observable as illustrated in Figure 1. Objects of interest are often fragmented across different LiDAR packets, particularly when close to the sensor. Coincidentally, that is also when high accuracy and low latency are the most important as _close range_ objects are typically the most critical to safety. Thus, individual packets alone may be insufficient for high quality detections, making it necessary to incorporate past observations.
Existing LiDAR object detectors generally assume access to a full 360 degree sweep, or a large subregion (e.g., front view) that spans all objects of interest. As such, these models do not explicitly reason about objects split across multiple observations. As shown in our experiments, directly adopting full-sweep models for processing individual LiDAR packets is not a good solution due to the partial observation and lack of global context. Conversely, exploiting multiple sweeps [1; 2] provides richer geometrical evidence as more LiDAR points are collected over time. However, most current solutions are computationally inefficient as each packet would be processed as many times as the duration of the history. As such, naively aggregating historical sensory information at the input level is not amenable to emitting low latency object detections from fine-grained LiDAR packets.
In this paper we propose StrObe, a novel detection model which exploits the sequential nature of LiDAR observations and efficiently reuses past computation to stream low latency object detections from LiDAR packets. Our approach voxelizes the incoming LiDAR packets into a Bird's-Eye View (BEV) grid, and uses an efficient convolutional backbone to process only the relevant region. Furthermore, we introduce a multi-scale spatial memory that is read and updated with each LiDAR packet. This allows us to reuse past computation, and makes the incremental processing of incoming LiDAR packets lightweight and efficient. Importantly, we achieve an end-to-end latency of 21ms (from observing an actor to emitting a detection) on an NVIDIA 2080Ti: 10ms for accumulating a packet and 11ms for model inference. In contrast, even fast full-sweep detectors [3] operate at an order of magnitude higher latency, taking 100ms to accumulate the sweep and another 28ms for model inference, for a total of 128ms.
Our second contribution is a novel large scale benchmark for evaluating streaming object detection from LiDAR packets. Unlike existing public datasets, PacketATG4D contains LiDAR data at the packet level, along with accurate ego-pose and associated object bounding box annotations at the same temporal resolution (i.e., 100Hz). We also propose a novel metric _latency-aware mAP_ to explicitly take latency into account when evaluating perception. We show that our approach far outperforms the state-of-the-art when the data buffering latency is taken into account, while still matching the performance in the conventional setting.
## 2 Related Work
3D object detection has made tremendous progress in recent years due to the advances of deep learning and the availability of large-scale labeled datasets. The topic of how to effectively process LiDAR data has received significant attention and many approaches have been proposed. Point clouds have been processed in perspective format using a range image [4; 5]. By converting the point cloud into an image, these approaches can leverage the vast body of knowledge on 2D object detection to build good architectures for the task. However, such methods suffer from the same challenge present in 2D detection: high variance in receptive field requirements as a function of depth.
Figure 1: Objects are observed at different times when building a full LiDAR sweep, indicated as the solid boxes. If a full sweep from a 10hz LiDAR is accumulated before detection, a latency of 100ms will be introduced and by the time a detection is available (Packet 10) it no longer reflects the state of the world.
To tackle these issues, some methods perform 3D detection directly on the unstructured 3D points. This is usually achieved through first extracting local signatures with a fully connected layer [6; 7; 8; 9] or by using deformable filters [10]. An alternative framework is to voxelize the points into a regularly spaced 3D grid, making reasoning on point clouds amenable to convolutional architectures. Early works [11; 12; 13; 2] leverage 3D convolutions, but they are memory intensive. Others [14; 15; 16] exploit the sparsity of point clouds to reduce redundant computation and make higher resolution processing feasible. BEV detectors [3; 17; 1] avoid heavy computation by exploiting efficient 2D convolutions over a top-down pseudo-image of the scene. Other methods have leveraged hybrid representation of points and voxels [18; 19; 20; 21; 22] to exploit the benefits of both representations.
However, the aforementioned methods assume a full sweep is available, which requires the sensor to complete a full rotation and incurs latency. Previous works have explored the problem of latency in different settings, for instance on the effect of model runtime for 2D object detection [23], or how the temporal aspect of point clouds is relevant for odometry and mapping [24; 25]. Concurrent work [26] has considered streaming object detections from a rolling shutter LiDAR. However, their model uses an LSTM to maintain the state, which does not leverage the spatial nature of the problem. Furthermore, their evaluation does not capture the impact latency has on the accuracy of state estimation.
## 3 Low Latency Detection on Streaming LiDAR
In this paper, we propose StrObe, a low-latency object detector that emits detections from streaming LiDAR observations. As illustrated in Figure 2, as the LiDAR sensor spins, it yields data in sector packets (each roughly spanning \\(36^{\\circ}\\) in our 10Hz LiDAR). As opposed to previous models, which buffer this data into a full sweep before processing, our proposed method operates at the packet level. In doing so, we lower our latency by 90ms. A fundamental component to our approach is a novel spatial memory module design to reuse past computation, and make incremental processing of incoming LiDAR packets lightweight and effective.
### Streaming Object Detection
The overall architecture of our model is illustrated in Figure 3. The network takes as input a LiDAR packet and an HD map, which is useful as a prior on the location of actors (e.g., a vehicle is more likely to be on the road than on the sidewalk). For each packet we first voxelize the points and rasterize into a BEV pseudo-image with height as the channel dimension [3]. Following [17; 1], we also rasterize the map into a BEV pseudo-image, where each channel corresponds to a different layer of the map (e.g., crosswalks, roads, etc). We then extract features using our novel regional convolutions (Figure 3 - a, b), which only compute features in the rectangular area defined by the packet. A latent spatial representation of the scene is then maintained using a memory module (Figure 3 - c, d, e). Lastly, we channel-wise concatenate multi-scale features and regress detection parameters using our output header (Figure 3 - f).
Figure 2: Existing point cloud perception methods wait 100ms to accumulate the full sweep (**right**). StrObe (**left**) is able to process each packet and emit new detections with high accuracy and minimal latency, while leveraging global context by continuously updating a spatial memory that keeps track of previously seen packets.
Regional Convolution Layer:To reduce latency while leveraging the proven strength of BEV representations and 2D convolutions, we propose to process the input with a local operator, which we call regional convolution. Specifically, for an input \\(x\\) and coordinates \\(x_{0},x_{1},y_{0}\\) and \\(y_{1}\\), we extract features \\(\\mathbf{y}\\) only on the region \\(\\mathbf{x}[x_{0}:x_{1},y_{0}:y_{1}]\\), where the brackets denote indexing at the rectangle defined by the coordinate ranges. This allows us to leverage locality to minimize wasted computation.
\\[\\mathbf{y}=f_{\\text{region}}\\left(\\mathbf{x}[x_{0}:x_{1},y_{0}:y_{1}],\\mathbf{ w}\\right) \\tag{1}\\]
In practice, \\(f_{\\text{region}}\\) is a sequence of 2D convolution, ReLU activation and Group Normalization [27]. Furthermore, for both the LiDAR packet and HD map, the region coordinates are defined as the minimal rectangle that fully encloses all points in the LiDAR packet. This is illustrated in Figure 3 - a, b.
Spatial Memory:While regional convolutions allow us to efficiently ingest packets, independently processing them is not sufficient for accurate perception since objects will often be fragmented across many packets. Furthermore, a single observation of an object far away will typically yield few points due to the sparsity of the sensor at range. We would thus like to leverage information from previous scans of the region. However, naively processing the history of observations every time we receive a packet results in redundant computation and slow inference. Instead, our approach iteratively builds a global spatial memory from a series of partial observations while at the same time producing new detections with each LiDAR packet, Figure 3 - c. This enables us to re-use past computation and produce low-latency and accurate detections. Importantly, the LiDAR points are registered on a consistent coordinate frame defined by a continuous ego-pose. The memory is aligned with this pose by bilinearly resampling its features to account for ego-motion with every new packet (Figure 3 - c, d). This guarantees that the LiDAR and map features are consistently aligned with the spatial memory in the same coordinate frame.
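The bilinear resampling step can be realized with `grid_sample`; the sketch below assumes the relative ego-motion between packets is given as a 2D rigid transform per batch element (shape `(B,)` tensors), which is our parameterization rather than the paper's:

```python
import torch
import torch.nn.functional as F

def align_memory(memory, dtheta, dx, dy):
    """Bilinearly warp the BEV memory (B, C, H, W) into the latest ego frame,
    given relative rotation and (normalized) translation since the last packet."""
    cos, sin = torch.cos(dtheta), torch.sin(dtheta)
    theta = torch.stack([
        torch.stack([cos, -sin, dx], dim=-1),
        torch.stack([sin, cos, dy], dim=-1)], dim=-2)  # (B, 2, 3) affine
    grid = F.affine_grid(theta, memory.size(), align_corners=False)
    return F.grid_sample(memory, grid, mode='bilinear', align_corners=False)
```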
Memory Update:As each LiDAR packet arrives, the spatial memory is incrementally updated with new local features to reflect the latest state (Figure 3 - d). Each update step is done through aggregation of the current memory state \\(\\mathbf{m}\\) and the incoming local features \\(\\mathbf{y}\\). Specifically, we employ a channel reduction with learned parameters \\(\\mathbf{w}\\) as follows
\\[\\mathbf{m}^{\\prime}[x_{0}:x_{1},y_{0}:y_{1}]=f_{\\text{memory}}\\left(\\mathbf{ m}[x_{0}:x_{1},y_{0}:y_{1}],\\mathbf{y},\\mathbf{w}\\right). \\tag{2}\\]
In practice, \\(f_{\\text{memory}}\\) channel-wise concatenates \\(\\mathbf{m}\\) and \\(\\mathbf{y}\\), resulting in a tensor with \\(2c\\) channels, then applies two blocks of 2D convolution, ReLU activation and Group Normalization, with the second block bringing the number of channels back to \\(c\\). This is illustrated as the red dotted arrows in Figure 3 - e.
Multi-Scale Backbone:In order to leverage the semantic representations of feature maps at different scales (i.e., richer geometry on higher resolutions; richer semantics on lower) we employ a multi-scale backbone for the extraction of both LiDAR and HD map features. Together with the spatial memory at each scale, the benefits of this are twofold: It allows the model to regress accurate and low latency detections from partial observations by remembering the features from immediately preceding packets. It also makes it possible for the network to persist long term features that are useful to detect objects through occlusion over multiple sweeps as well as overwrite previous features when stronger evidence is available.
Architecture Details:We employ a BEV grid with resolution of 0.2m for each voxel. This grid then goes through 4 blocks of [2, 2, 3, 6] Regional Convolution layers with [24, 64, 128, 256] channels, followed by Max Pooling with a stride of 2. Each block has a corresponding Spatial Memory that holds the pre-pooling state of the features. In parallel, features are extracted from the HD map with a backbone that consists of a sequence of 4 blocks with [2, 2, 3, 3] Regional Convolution layers with [16, 32, 64, 128] channels. After each block, Max Pooling with a stride of 2 is employed. The feature maps from each block of both the LiDAR and HD map backbones are then bilinearly resized to a common resolution of 0.8m, channel-wise concatenated, and processed by one last block of 4 Regional Convolutions with 256 channels.
Detection Header:We perform multi-class BEV detection for vehicles, cyclists, and pedestrians via a single-stage detection header consisting of 2 convolutional layers that predict the classification and regression targets for each cell in the fused feature map (hereinafter referred to as \"anchors\"). All objects are defined via their centroid \\((b_{x},b_{y})\\) and confidence \\(\\sigma\\), whereas cyclists and vehicles also have length, width, and heading \\((b_{l},b_{w},b_{\\phi})\\) in BEV. For the confidence, we predict its logit \\(\\log\\frac{\\sigma}{1-\\sigma}\\). We define the centroid of the box \\((b_{x},b_{y})\\) as an offset \\((\\Delta x,\\Delta y)\\) from the coordinates of the center point of its anchor pixel \\((a_{x},a_{y})\\):
\\[(b_{x},b_{y})=(a_{x}+\\Delta x,a_{y}+\\Delta y). \\tag{3}\\]
For the vehicle dimensions we predict \\([\\log l,\\log w]\\), which encourages the network to learn a prior on the dimension of the boxes (low variance should be expected from the dimension of vehicles). The heading \\(b_{\\phi}\\) is parameterized by the tangent value. In particular, we predict a signed ratio so that the specific quadrant can be retrieved:
\\[b_{\\phi}=\\arctan\\frac{\\theta_{1}}{\\theta_{2}}. \\tag{4}\\]
### Learning
Following common practice in object detection [28], we employ a multi-task loss over classification and bounding box regression to optimize the model (using \(\alpha=2.0\)), i.e.:
\\[\\mathcal{L}=\\mathcal{L}_{\\text{reg}}+\\alpha\\mathcal{L}_{\\text{cls}}. \\tag{5}\\]
Regression Loss:It is defined as the weighted sum of the smooth \\(\\ell_{1}\\) loss [29] between the ground truth vector of detection parameters \\(\\hat{\\mathbf{y}}=[\\Delta x,\\Delta y,\\log w,\\log l,\\theta_{1},\\theta_{2}]\\) and predictions \\(\\mathbf{y}\\) with \\(\\gamma=[1,1,1,1,2,2]\\). Note that \\(\\log w,\\log l,\\theta_{1}\\) and \\(\\theta_{2}\\) are not considered for pedestrians since we are only concerned with predicting their centroid.
\\[\\mathcal{L}_{\\text{reg}}(\\mathbf{y},\\hat{\\mathbf{y}})=\\frac{1}{N}\\sum_{i=0}^{ N}\\gamma\\cdot\\mathrm{smooth}_{\\ell 1}(\\mathbf{y}_{d}^{i}-\\hat{\\mathbf{y}}_{d}^{i}) \\tag{6}\\]
Classification Loss:It is defined as the binary cross entropy between the predicted scores and the ground truth. Due to severe class imbalance between positive \\(\\hat{\\mathbf{y}}_{\\text{pos}}\\) and negative \\(\\hat{\\mathbf{y}}_{\\text{neg}}\\) anchors given that most pixels in the BEV scene do not contain an object, we employ hard negative mining:
\\[\\mathcal{L}_{\\text{cls}}(\\mathbf{y},\\hat{\\mathbf{y}})=\\frac{1}{N}\\sum_{i=0}^{ N}\\hat{\\mathbf{y}}_{\\text{pos}}^{i}\\log\\mathbf{y}+\\frac{1}{K}\\sum_{i=0}^{N} \\mathbbm{1}\\left[i\\in\\mathcal{N}_{K}\\right](1-\\hat{\\mathbf{y}}_{\\text{neg}}^ {i})\\log(1-\\mathbf{y}) \\tag{7}\\]
where \\(\\mathcal{N}_{K}\\) is a set containing \\(K\\) hard negative anchors. This is obtained by sampling 750 anchors for vehicles, 1500 for cyclists and pedestrians, and picking the 20 with highest loss for each class.
Sequential Training:Due to the sequential nature of the memory, the model is trained sequentially through examples that contain 50 packets (corresponding to 0.5s). Back-propagation through time is used to compute gradients across the memory. Furthermore, the model is trained to remember by supervising it on objects with 0 points, as long as the object was seen in any of the previous packets. In practice, due to GPU memory constraints, we only compute the forward pass in the first 40 packets to warm-up the memory, then forward and backward through time in the last 10 packets.
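The truncated back-propagation-through-time scheme can be sketched as follows (`reset_memory` and `detection_loss` are hypothetical names):

```python
import torch

def train_snippet(model, packets, labels, optimizer, warmup=40):
    """Warm up the spatial memory without gradients on the first 40 packets,
    then backpropagate through time over the last 10."""
    model.reset_memory()          # hypothetical: clear memory per snippet
    optimizer.zero_grad()
    with torch.no_grad():
        for pkt in packets[:warmup]:
            model(pkt)            # forward only: populates the memory
    loss = 0.0
    for pkt, lbl in zip(packets[warmup:], labels[warmup:]):
        loss = loss + detection_loss(model(pkt), lbl)
    loss.backward()               # BPTT across the memory of the last packets
    optimizer.step()
```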
Figure 3: StrObe performs regional convolution on LiDAR packets and HD maps, using a multi-scale spatial memory for global reasoning. \\(\\triangle\\) is interpolation and \\(\\|\\) channel-wise concatenation.
## 4 Experimental Evaluation
We evaluate our model on a real world dataset for 3D object detection. In particular, we compute mean average precision (mAP) in the default detection setting (using full \\(360^{\\circ}\\) sweeps) and propose a new metric that takes into account the latency incurred by different input granularities (i.e., per-packet processing vs. sweep building). Our experimental results show that our model far outperforms the baselines in the per-packet setting while remaining competitive with the state-of-the-art in the full sweep setting. Furthermore, our latency evaluation also uncovers a problem with the mAP metric in the default setting as it does not accurately measure real world performance.
Dataset:Since there is no publicly available dataset that provides LiDAR packets, we collect a new dataset, PacketATG4D, containing 6500 snippets with diverse conditions (e.g., geographical, lighting, road topology, vehicle types). The LiDAR rotates at a rate of 10Hz and emits new packets at 100Hz - each roughly covering a \(36^{\circ}\) region - for a total of 16,250,000 packets (1,625,000 frames). Accurate ego-pose is available for each LiDAR packet via a commercial localization system. Labels provide both the spatial extents and motion of vehicles, cyclists and pedestrians, from which we can extract accurate bounding boxes at discrete observation times as well as in continuous time through the use of a precise motion model. Note that if the observation of an instance is split across packets, each packet will have an instance of the label according to the pose at the timestamp of the packet.
Baselines:We provide a wide range of baselines that exploit different representations. **HD-NET**[17] is a detection model that processes input point clouds into occupancy voxels and performs 2D convolution in BEV using the \\(z\\) axis voxels and HD maps as feature channels. **PointRCNN**[30] processes raw LiDAR inputs using a PointNet [6] backbone to perform foreground segmentation and generate region-of-interest (RoI) proposals. The RoI proposals are then processed by a classification and bounding box refinement network to output 3D detections. **PointPillars**[21, 16] groups input points into discrete bins in BEV and uses PointNet [6] to extract features of each bin. The BEV features are then processed with 2D convolutions to generate detection outputs. Note that the PointRCNN and PointPillar baselines do not make use of HD maps.
Metrics:We evaluate our method using mean average precision (_mAP_) as our metric with IoU thresholds of [0.5, 0.7] for vehicles and [0.3, 0.5] for cyclists. For pedestrians, we use the \(\ell_{2}\) distance to the centroid with thresholds [0.5m, 0.3m], since we treat the detections as circles with a fixed radius and thus only the centroid is predicted. We evaluate with latency-aware labels that take into account the delay introduced by aggregating consecutive packets (_Latency mAP_). We refer the reader to Figure 4 for an illustration of this metric. We re-define the detection label for each object in the scene as its state at detection time (green box), rather than observation time (red box), which does not accurately reflect the current state of the world.
\\begin{table}
\begin{tabular}{l|r|r|r} \hline & Accumulation (ms) & Inference (ms) & Total Latency (ms) \\ \hline PointRCNN [30] & 100 & 390 & 490 \\ PointPillars [21, 16] & 100 & 37 & 137 \\ HDNET [17] & 100 & 28 & 128 \\ Our StrObe & **10** & **11** & **21** \\ \hline \end{tabular}
\\end{table}
Table 1: **End-to-end Latency:** We report the end-to-end latency in ms of the models as defined by the time it takes to accumulate the data, run inference and decode detections. Accumulation considers a LiDAR operating at 10hz and inference is done with a NVIDIA 2080Ti.
Figure 4: In contrast to the commonly used mAP, our proposed metric takes into account the latency between observation time and detection emission time.
The benefits of this metric are twofold: (1) it evaluates how well the detector models the state of the real world and the quality of the information that would be used by downstream motion planning, and (2) it allows for a direct comparison with standard detection metrics, thus making the effects of latency apparent.
End-to-end Latency:Since implementations might differ, we do not include model inference time in the latency-aware detection metric. However, inference time is an important factor in the end-to-end latency for safety, since it determines the minimal amount of time the system requires to recognize an actor, i.e., the time taken for sensor data acquisition, model inference, and emission of the corresponding detection to downstream systems. We report end-to-end latency timings in Table 1; our approach leads to a much faster (on average 6x!) detection emission time.
Latency-aware Detection:Table 2 shows our results for PacketATG4D. In the leftmost setting - _Packet Stream_ - all models are first trained on detection using LiDAR packets (as opposed to full sweeps) and evaluated using the state of the labels at the time of detection (i.e., green box in Figure 4). Our model far outperforms the baselines, which do not have memory and struggle with partial observations (i.e., a single packet as opposed to the full sweep). In the right portion of the table - _Full Sweep_ - the models are trained using a traditional full sweep setting and evaluation is done using the label states at the end of the sweep (therefore in the worst case an object could move for 100ms before evaluation).
Latency-unaware Detection:We additionally evaluate in the standard object detection setting, not taking into account the sweep building latency and using the labels for each object in the scene at the time of observation (i.e., when the LiDAR hit the object). The leftmost columns of Table 3 show the results of the models trained in a packet setting. A key takeaway from these results stems from comparing the numbers in the \"Packet Stream\" setting between Tables 2 and 3, which shows that the 10ms latency introduced by accumulating a single packet is negligible in the mAP settings we evaluate, since performance remains the same. Conversely, comparing the \"Full Sweep\" setting in Tables 2 and 3 shows considerable degradation in metrics. This indicates that the performance of full sweep models in the real world would be considerably lower.
Ablation Studies:We first ablate the memory component of the model. In particular, we evaluate two alternative approaches: (1) _No Memory_: A memoryless instantiation of our model; (2) _Attention_: A memory module that uses linear attention to update the spatial memory (see supplementary for more details). As shown in Table 4, memory is a fundamental component for effective perception from partial observations. Furthermore, the attention based memory updates are outperformed by our approach which learns the aggregation function through convolutions. We also evaluate our model without the HD map component to evaluate its importance. The results in Table 4 (_No Map_ row)
\\begin{table}
\\begin{tabular}{l|c c|c c|c c||c c|c c|c c} \\hline \\hline & \\multicolumn{6}{c||}{**Packet Stream**} & \\multicolumn{6}{c}{**Full Sweep**} \\\\ Model & Vehicle & \\multicolumn{3}{c|}{Pedestrian} & \\multicolumn{3}{c||}{Cyclist} & \\multicolumn{3}{c|}{Vehicle} & \\multicolumn{3}{c|}{Pedestrian} & \\multicolumn{3}{c}{Cyclist} \\\\ & 0.5 & 0.7 &.5m &.3m & 0.3 & 0.5 & 0.5 & 0.7 &.5m &.3m & 0.3 & 0.5 \\\\ \\hline HDNET [17] & 75.6 & 63.6 & 71.0 & 63.9 & 21.3 & 15.3 & 79.6 & 57.8 & **80.2** & **69.8** & 54.6 & 33.8 \\\\ PointPillars [21, 16] & 66.8 & 47.7 & 53.4 & 49.2 & 16.8 & 6.1 & 84.2 & 61.1 & 74.4 & 68.9 & 56.1 & 34.9 \\\\ PointRCNN [18] & 70.2 & 63.1 & 49.3 & 47.5 & 28.4 & 25.9 & 72.4 & 57.4 & 54.8 & 52.4 & 31.9 & 26.7 \\\\ Our StrObe & **91.8** & **80.5** & **80.3** & **72.5** & **60.8** & **40.7** & **86.4** & **66.4** & 76.7 & 67.8 & **61.0** & **39.5** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: **Latency mAP**: Labels are considered at detection emission times.
\\begin{table}
\\begin{tabular}{l|c c|c c|c c||c c|c c|c c} \\hline \\hline & \\multicolumn{6}{c||}{**Packet Stream**} & \\multicolumn{6}{c}{**Full Sweep**} \\\\ Model & Vehicle & \\multicolumn{3}{c|}{Pedestrian} & \\multicolumn{3}{c||}{Cyclist} & \\multicolumn{3}{c|}{Vehicle} & \\multicolumn{3}{c}{Pedestrian} & \\multicolumn{3}{c}{Cyclist} \\\\ & 0.5 & 0.7 &.5m &.3m & 0.3 & 0.5 & 0.5 & 0.7 &.5m &.3m & 0.3 & 0.5 \\\\ \\hline HDNET [17] & 75.7 & 63.7 & 71.1 & 64.1 & 21.3 & 15.3 & **89.5** & **77.2** & **84.3** & **74.7** & **68.3** & **45.5** \\\\ PointPillars [21, 16] & 66.9 & 48.0 & 53.5 & 49.3 & 16.9 & 6.2 & 84.8 & 70.6 & 74.2 & 69.2 & 56.1 & 36.3 \\\\ PointRCNN [18] & 70.3 & 63.3 & 49.3 & 47.5 & 28.4 & 25.8 & 73.1 & 66.9 & 54.6 & 52.7 & 31.4 & 26.9 \\\\ Our StrObe & **91.8** & **80.5** & **80.3** & **72.5** & **60.8** & **40.7** & 87.4 & 76.1 & 76.9 & 69.0 & 61.3 & 41.4 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: **Common mAP**: Labels are considered at their observation times.
show that while the map backbone proved to be beneficial to the model overall, it is not a fundamental component, as its removal does not lead to a major degradation in the metrics.
Qualitative Results:The qualitative results in Figure 5 show the predictions of the model over 4 consecutive packets in 3 snippets. The network is able to predict boxes even before points are visible due to the memory module. It can also update the positions of detections as new points arrive to best exploit the evidence.
## 5 Conclusion
We have proposed a novel method for perception from streaming point cloud data. Our approach produces highly accurate object detections at very low latency by using regional convolutions over individual LiDAR packets alongside a spatial memory that keeps track of previous observations. We also introduced a new latency-aware metric that quantifies the cost of data buffering and its effect on the quality of models in the real world, which are inevitably affected by latency. Results on the large-scale PacketATG4D dataset show that our approach far outperforms the state-of-the-art in the packet setting that accounts for latency, while remaining competitive in the commonly adopted full-sweep setting. For future work, we intend to expand the use of the memory module to long-term tracking through occlusion and to motion forecasting.
\\begin{table}
\\begin{tabular}{l|c c c c c} \\hline \\hline & \\multicolumn{2}{c}{Vehicle} & \\multicolumn{2}{c}{Pedestrian} & \\multicolumn{2}{c}{Cyclist} \\\\ & 0.5 & 0.7 &.5m &.3m & 0.3 & 0.5 \\\\ \\hline No Memory & 75.6 & 63.6 & 71.0 & 63.9 & 21.3 & 15.3 \\\\ Attention & 89.3 & 78.2 & 75.9 & 67.9 & 53.5 & 35.3 \\\\ No Map & 90.6 & 79.9 & 79.3 & 71.8 & 59.6 & 40.5 \\\\ Our StrObe & **91.8** & **80.5** & **80.3** & **72.5** & **60.8** & **40.7** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: **Ablation studies:** Our multi-scale spatial memory is a critical component in our model. Using maps is beneficial but not critical. Labels are at detection emission time.
Figure 5: Qualitative results of StrObe. Each column is a sequence of packets from the same snippet. Detected vehicles are shown in red, cyclists in yellow and pedestrians in blue.
## References
* [1] S. Casas, W. Luo, and R. Urtasun. Intentnet: Learning to predict intention from raw sensor data. In _Conference on Robot Learning (CoRL)_, pages 947-956, 2018.
* [2] W. Luo, B. Yang, and R. Urtasun. Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 3569-3577, 2018.
* [3] B. Yang, W. Luo, and R. Urtasun. Pixor: Real-time 3d object detection from point clouds. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 7652-7660, 2018.
* [4] B. Li, T. Zhang, and T. Xia. Vehicle detection from 3d lidar using fully convolutional network. In _Robotics: Science and Systems (RSS)_, 2016.
* [5] G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington. Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 12677-12686, 2019.
* [6] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 652-660, 2017.
* [7] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen. Pointcnn: Convolution on x-transformed points. In _Advances in Neural Information Processing Systems (NeurIPS)_, pages 820-830, 2018.
* [8] B.-S. Hua, M.-K. Tran, and S.-K. Yeung. Pointwise convolutional neural networks. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 984-993, 2018.
* [9] S. Wang, S. Suo, W.-C. Ma, A. Pokrovsky, and R. Urtasun. Deep parametric continuous convolutional neural networks. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2589-2597, 2018.
* [10] Y. Xiong, M. Ren, R. Liao, K. Wong, and R. Urtasun. Deformable filter convolution for point cloud reasoning. In _NeurIPS Workshop on Sets & Partitions_, 2019.
* [11] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 1912-1920, 2015.
* [12] D. Maturana and S. Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In _International Conference on Intelligent Robots and Systems (IROS)_, pages 922-928, 2015.
* [13] B. Li. 3d fully convolutional network for vehicle detection in point cloud. In _International Conference on Intelligent Robots and Systems (IROS)_, pages 1513-1518, 2017.
* [14] G. Riegler, A. Osman Ulusoy, and A. Geiger. Octnet: Learning deep 3d representations at high resolutions. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 3577-3586, 2017.
* [15] M. Ren, A. Pokrovsky, B. Yang, and R. Urtasun. Sbnet: Sparse blocks network for fast inference. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 8711-8720, 2018.
* [16] Y. Yan, Y. Mao, and B. Li. Second: Sparsely embedded convolutional detection. _Sensors_, 18 (10):3337, 2018.
* [17] B. Yang, M. Liang, and R. Urtasun. Hdnet: Exploiting hd maps for 3d object detection. In _Conference on Robot Learning (CoRL)_, pages 146-155, 2018.
* [18] Y. Chen, S. Liu, X. Shen, and J. Jia. Fast point r-cnn. In _International Conference on Computer Vision (ICCV)_, pages 9775-9784, 2019.
* [19] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, 2020.
* [20] Z. Yang, Y. Sun, S. Liu, X. Shen, and J. Jia. Std: Sparse-to-dense 3d object detector for point cloud. In _International Conference on Computer Vision (ICCV)_, pages 1951-1960, 2019.
* [21] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 12697-12705, 2019.
* [22] Y. Zhou, P. Sun, Y. Zhang, D. Anguelov, J. Gao, T. Ouyang, J. Guo, J. Ngiam, and V. Vasudevan. End-to-end multi-view fusion for 3d object detection in lidar point clouds. In _Conference on Robot Learning (CoRL)_, pages 923-932, 2020.
* [23] M. Li, Y.-X. Wang, and D. Ramanan. Towards streaming image understanding. _European Conference on Computer Vision (ECCV)_, 2020.
* [24] H. Alismail, L. D. Baker, and B. Browning. Continuous trajectory estimation for 3d slam from actuated lidar. In _International Conference on Robotics and Automation (ICRA)_, pages 6096-6101, 2014.
* [25] J. Zhang and S. Singh. Low-drift and real-time lidar odometry and mapping. _Autonomous Robots_, 41(2):401-416, 2017.
* [26] W. Han, Z. Zhang, B. Caine, B. Yang, C. Sprunk, O. Alsharif, J. Ngiam, V. Vasudevan, J. Shlens, and Z. Chen. Streaming object detection for 3-d point clouds. _arXiv preprint arXiv:2005.01864_, 2020.
* [27] Y. Wu and K. He. Group normalization. In _European Conference on Computer Vision (ECCV)_, pages 3-19, 2018.
* [28] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In _Advances in neural information processing systems (NeurIPS, pages 91-99, 2015.
* [29] R. Girshick. Fast R-CNN. In _International Conference on Computer Vision (ICCV)_, pages 1440-1448, 2015.
* [30] S. Shi, X. Wang, and H. Li. PointRCNN: 3d object proposal generation and detection from point cloud. In _Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 770-779, 2019. | Many modern robotics systems employ LiDAR as their main sensing modality due to its geometrical richness. Rolling shutter LiDARs are particularly common, in which an array of lasers scans the scene from a rotating base. Points are emitted as a stream of packets, each covering a sector of the \\(360^{\\circ}\\) coverage. Modern perception algorithms wait for the full sweep to be built before processing the data, which introduces an additional latency. For typical 10Hz LiDARs this will be 100ms. As a consequence, by the time an output is produced, it no longer accurately reflects the state of the world. This poses a challenge, as robotics applications require minimal reaction times, such that maneuvers can be quickly planned in the event of a safety-critical situation. In this paper we propose StrObe, a novel approach that minimizes latency by ingesting LiDAR packets and emitting a stream of detections without waiting for the full sweep to be built. StrObe reuses computations from previous packets and iteratively updates a latent spatial representation of the scene, which acts as a memory, as new evidence comes in, resulting in accurate low-latency perception. We demonstrate the effectiveness of our approach on a large scale real-world dataset, showing that StrObe far outperforms the state-of-the-art when latency is taken into account, and matches the performance in the traditional setting. | Provide a brief summary of the text. | 287 |
arxiv-format/2402_18618v1.md | # Plan Green Index estimation based on data collected by remote sensing for Romanian cities
Marian NECULA([email protected])
Economic Cybernetics and Statistics Doctoral School,
The Bucharest University of Economic Studies
Tudorel ANDREI, Ph.D.([email protected])
Faculty of Cybernetics, Statistics and Economic Informatics,
The Bucharest University of Economic Studies
Bogdan OANCEA, Ph.D.([email protected])
The Faculty of Business and Administration, University of Bucharest
Mihaela PAUN, Ph. D.([email protected]@incdsb.ro)
The Faculty of Business and Administration, University of Bucharest /
Bioinformatics Department, National Institute for R &D for Biological Sciences
## 1 Introduction
Land surface monitoring based on satellite imagery has been used since the 1960s and 1970s and has been a prolific source of important data for an extremely wide range of users. With initial applications in the military and meteorological fields, the monitoring capabilities have been widely expanded, for example the Sentinel missions within the Copernicus program (ESA, 2022a) generated in 2021 approximately 32 petabytes of data. In this context, there is an increased interest from the official statistics to explore and possibly incorporate this data source into the statistical production. Thus, within the ESSnet on Big Data II project, a pilot was carried out to explore the potential of remote sensing data (Work Package H, 2021) supported by several case studies, among which we list applications regarding the monitoring of surfaces, agricultural crops and natural vegetation, applications regarding the monitoring of environmental indicators included in sustainable development strategies, etc. Also, the the United Nations Statistics Division, through projects exploring Big Data data sources (Task Team on Earth Observations, 2017), estimates that this data source has a growing potential to be used in statistical production, either in complementing existing statistics or developing innovative statistics in response to the increased demand from data users.
Another argument for researching the potential of data sources collected through remote sensing derives from the modernization strategy of the National Institute of Statistics (INS, 2022) which considers the diversification of data sources and methods. The main limitation identified in the development of such projects is represented by the human resource that must have knowledge both in the field of official statistics and in the computational methods related to the exploitation of the new data sources.
Within the Sustainable Development Goal 11.7 regarding universal access to green public spaces, indicator 70 was proposed regarding the proportion of green spaces relative to the total area of urban settlements, in order to monitor the quality of life within human settlements classified as cities and to ensure equitable access to green infrastructure. The implementing of such an indicator is built on the assumption that a city whose green infrastructure is maintained and increased in relation to the total area, is translated into economic and social effort to ensure resilience to extreme climatic phenomena and improve the quality of life, e.g. reducing the effects of pollution, reducing crime, increasing residents' satisfaction, etc. (UN, 2022). At the European level, there are many initiatives regarding the sustainable development of cities and the conservation/expansion of green infrastructure and ecosystems, among which we mention the Urban Agenda (European Commission, 2016),the European Strategy on Green Infrastructure (European Commission, 2013) or the decision of the European Parliament on the General Union Environment Action Program to 2020 (European Parliament 2013). Among the European initiatives for monitoring urban surfaces, the most important, from our point of view, is represented by the Land Surface Monitoring Service, within the Copernicus program of the European Commission developed in partnership with the European Space Agency (European Commission, 2022). The Copernicus program provides users, free of charge, with access to data obtained through remote sensing, through the Sentinel space missions, but also to data obtained through in situ collection (ESA, 2022c). Green urban spaces, according to the definition presented in the Mapping and Assessment of Ecosystems and their Services report of the European Commission (European Commission 2014), represent land surfaces fully or partially covered with vegetation, components of the green infrastructure of cities. The definition used does not specify what type of vegetation covers the respective surfaces and does not functionally address the type of use of the green space, i.e. whether it is public or private property, whether it is vegetation of natural origin or as a result of human development, whether the type of vegetation is interspersed between buildings and infrastructure or is continuous in nature, etc. Thus, two types of classifications of urban green spaces are presented that can contain the following atomic elements:
* green buildings (e.g. the roof is covered with vegetation);
* private property (alleys, singular trees, etc.);
* public gardens and parks;
* lands covered with vegetation intended for agriculture;
* lands covered with natural vegetation;
* areas of the region between water and land (shore, coastal areas, etc.).
Within the classification used in the Copernicus program (ESA, 2022b), of data collected in-situ through the CORINE Land Use Land Cover statistical research, green urban spaces are named green urban areas and can include: parks; ornamental gardens; private properties landsced with vegetation; botanical and zoological gardens; public squares covered with vegetation; green spaces between glades; cemeteries; areas with vegetation for recreational purposes; etc.
This classification does not include: agricultural land included in the urban area, cemeteries outside the urban area and other possible types of surfaces covered with vegetation.
## 2 Data and Methods
The present study uses two sources of remote sensing data, namely:
- (HDF Group, 2022), a scientific data transmission standard independent of the hardware or software architecture of computing machines. The MODIS hdf5 file contains the NDVI (Normalized Difference Vegetation Index) data and associated spatial attributes. The file contains the NDVI as well as the Enhanced Vegetation Index (EVI). More details about the NDVI vegetation index are presented later in this chapter. On average, the size of a MODIS hdf5 file for NDVI is between 150 and 250 megabytes (covering an area of about 5500 km\\({}^{\\wedge}\\)2). The period for which we downloaded the data is July 2022.
* Sentinel-2 multispectral data were accessed through the Open Access Hub (ESA, 2022d). The data is provided in SENTINEL-SAFE format (ESA, 2022e), a.zip archive containing metadata files (requirement of data preprocessing, data quality, etc.) and the actual image files in.jp2 (JPEG2000) format for those 13 spectral bands and other pre-calculated products (cloud mask, pixel value quality mask, surface classification mask, etc.). On average, the size of such an archive is about 1 gigabyte (referred to an area of 100km\\({}^{\\wedge}\\)2). The period for which the data was downloaded is between 01-Jun-2022 and 31-Jul-2022.
MODIS is one of the instruments for measuring the electromagnetic radiation reflected by the earth's surface, installed on the Terra satellite (NASA, 2022b) launched and operated by NASA in 1999. The objective of the mission is to monitor the earth's surface and atmosphere. The Terra satellite covers the entire Earth's surface on average every two days and through MODIS provides images at a spatial resolution of 250, 500 and 1000 meters. The instruments are capable of recording electromagnetic radiation (36 spectral bands) between 400 and 14400 nm. The data provided is pre-processed and prepared for use in scientific or other analyzes with minimal effort. MODIS datasets include, among others: vegetation indices; land surface temperature and temperature anomaly detection (fires); the reflectance of the earth's surface. Certain data sets are available daily, depending on the degree of pre-processing or spatial/ temporal coverage required by users.
Sentinel-2 is one of the space missions of the European Space Agency, with the objective of monitoring land surfaces (ESA, 2022a). The mission consists of two satellites launched between 2016 and 2017, Sentinel-2A and Sentinel-2B, with polar orbits, phase-shifted by 180 [\\(\\sim\\)\\(\\circ\\)], which have an average revisit time at the Equator of about 5 days. The satellites are equipped with a MultiSpectral Instrument (MSI), which can detect electromagnetic radiation between 400 and 2200 nm (13 spectral bands). The instrument can provide data at a spatial resolution between 10m and 60m. In Romania, the duration between two consecutive visits to the same area is about 10 days, on average, or about 3-4 times a month. The big disadvantage of the two instruments, in particular, the one installed on the Sentinel-2 satellites, is represented by the surfaces masked by clouds or extreme atmospheric effects. For both data sources, and/or other similar data sources, data sets are available, according to a convention, on pre-processing levels necessary to perform a certain type of analysis (Level 1A, Level 1B, Level 1C, Level 2A, Level 2B, etc.). In this case we have retrieved L2A (Level 2A) data sets, which have been corrected by the data provider, both geometrically and optically, and can be used directly in the analyses. Thus, the object of this analysis is represented by a dimensionless physical quantity called spectral (directional) reflectance, which represents the ratio between the radiance reflected by the surface of a material and the radiance incident on that surface and directly depends on the type/material of the surface (ISO, 1989).
In figure 1, the image obtained by the MSI Sentinel 2 instrument is represented graphically, by merging the spectral bands related to the reflectance in the visible range. When passing through the Sentinel 2 observation areas, it covers an area with an average width of 290 km.
**Sentinel 2 MSI related spectral bands in the visible range (RGB)**
_Figure 1_In the case of MODIS data, we downloaded the dataset containing the green index, represented in figure 3 for the entire surface of the country. The image was obtained by mosaic between area 194 and 204 related to the cartographic design mode used by MODIS. As a side note, MODIS covers an area 2330 km wide in a single pass.
**MODIS image - normalized difference vegetation index**
**(June 2022, area 194 + area 204 Romania)**
_Figure 3_As auxiliary data, we used the cartographic delimitation of the county-seat municipalities, for all 41 counties and the municipality of Bucharest, graphically represented in figure 4. The delimitation used does not include the territorial administrative unit with the same name, and refers strictly to the urban, specific urban area component city of the county seat. For example, the territorial administrative unit Alba Iulia is composed of the city of Alba Iulia (within and outside the built-up area), Barabant, Micesti, Oarda si Paclisa, from which we strictly choose the city of Alba Iulia.
The total data size is approximately 80 Gb for Sentinel 2 and 300 Mb for Modis. In the case of Sentinel 2, it was necessary to increase the temporal period of interest to 2 months, in order to identify data sets that do not contain the missing date or whose quality is not affected by the extreme presence of cloud cover of the surfaces of interest.
**Romania. County residences and the municipality of Bucharest.**
_Figure 4_
The normalized difference vegetation index is a popular measure in remote sensing data analysis applications for the identification/visualization of vegetated areas (Weier and Herring, 2000). The index is a dimensionless quantity, with values in the closed interval [-1, 1], quantifying the degree of vegetation coverage of the earth's surface. According to Huete, Justice and van Leeuwen (1999), the normalized difference vegetation index was created based on empirical observations of the interaction between electromagnetic waves, light in the red visible spectrum and near infrared spectrum, and vegetation.
Through the measurements, an increase in the intensity of radiation in the near infrared zone and the absorption of red light from the visible spectrum was observed. NDVI has some ability to discriminate between the types of vegetation covering an area (agricultural vegetation, forest, shrubs, etc.), as well as the quality of that vegetation (dry, green, etc.). The sensor-independent formula is:
\\[NDVI=\\frac{NIR-RED}{NIR+RED}\\]
where
\\(NIR=\\)_the associated near-infrared wavelength between 750 and 1400 nm_
\\(RED=\\)_the wavelength associated with red visible range between 620 and 750 nm._
For MODIS, the spectral bands associated with the two wavelengths are band 1, red visible range, and band 2, near infrared, and in the case of Sentinel 2, band 4, red visible range, and band 8, near infrared. However, there are differences between the calculation methods.
MODIS NDVI is already calculated based on the following algorithm:
_Step 1: The NDVI index is estimated for each MODIS sensor record over an area of interest._
_Step 2: The composite NDVI index for 16 days at a resolution of 250m per pixel is selected as the maximum of the NDVI value associated with the same pixel (spatial coordinates), and of course, the value is not affected by measurement errors, either due to the sensor or external factors._
The MODIS NDVI composite index is used in the paper. In the case of Sentinel 2, the index is calculated for a single record (calendar date) based on the previously described calculation formula and the availability of an image that is not affected by measurement errors, or excessive presence of clouds.
In order to select an optimal threshold for the discrimination between areas covered with vegetation and those with non-vegetation, we performed manual comparisons between the images represented in natural (visible) colors provided by the Google Maps service (satellite image layer) and different intermediate discrimination thresholds starting from literature data, respectively the 0.5 to 0.7 NDVI threshold for MODIS, and the 0.3 to 0.6 NDVI threshold for Sentinel-2, both thresholds built with a step of 0.05. In table 1. The values for the discrimination threshold for each county seat city and the average used for the final calculation of the areas covered with vegetation are presented. Thus, for MODIS we obtained a value close to 0.58 and for Sentinel 2 we obtained 0.4.
From figure 5, we can see the sensitivity of the results to the selection of the discrimination threshold between the surfaces, the threshold being in an inversely proportional, almost linear relationship with the vegetation area detected by the green index.
Figure 6 shows the histogram of NDVI values for the Bucharest. From the graph it can be seen that the distribution of NDVI values is similar, but, on the one hand, there are differences derived from the different resolution of the two sensors and, on the other hand, differences between the parameters of the two distributions, e.g. MODIS values are concentrated around the 0.5 point, and Sentinel 2 values tend, roughly, to a point in the range [0.3-0.4]. These values are similar to the values resulting from the manual determination of the discrimination threshold used to estimate the green index.
\\begin{tabular}{|l|c|c|} \\hline Residence & Modis & Sentinel 2 \\\\ \\hline Alba Iulia & 0.6 & 0.35 \\\\ \\hline Alexandria & 0.55 & 0.4 \\\\ \\hline Arad & 0.6 & 0.4 \\\\ \\hline Bacau & 0.65 & 0.45 \\\\ \\hline Baia Mare & 0.65 & 0.4 \\\\ \\hline Bistrita & 0.6 & 0.4 \\\\ \\hline Botosani & 0.5 & 0.35 \\\\ \\hline Braila & 0.6 & 0.45 \\\\ \\hline Brasov & 0.65 & 0.45 \\\\ \\hline Bucuresti & 0.7 & 0.45 \\\\ \\hline Buzau & 0.55 & 0.35 \\\\ \\hline Calarasi & 0.55 & 0.35 \\\\ \\hline Cluj-Napoca & 0.65 & 0.4 \\\\ \\hline Constanta & 0.55 & 0.4 \\\\ \\hline Craiova & 0.6 & 0.35 \\\\ \\hline Deva & 0.65 & 0.45 \\\\ \\hline Drobeta-Turnu Severin & 0.5 & 0.4 \\\\ \\hline Focsani & 0.55 & 0.4 \\\\ \\hline Galati & 0.6 & 0.35 \\\\ \\hline Giurgiu & 0.55 & 0.4 \\\\ \\hline Iasi & 0.6 & 0.4 \\\\ \\hline Miercurea Ciuc & 0.55 & 0.35 \\\\ \\hline Oradea & 0.65 & 0.35 \\\\ \\hline Piatra-Neamt & 0.65 & 0.45 \\\\ \\hline Pitesti & 0.6 & 0.5 \\\\ \\hline Ploiesti & 0.6 & 0.35 \\\\ \\hline Ramnicu Valcea & 0.6 & 0.4 \\\\ \\hline Resita & 0.5 & 0.45 \\\\ \\hline Satu Mare & 0.55 & 0.35 \\\\ \\hline Sfantu Gheorghe & 0.55 & 0.35 \\\\ \\hline Sibiu & 0.55 & 0.35 \\\\ \\hline Slatina & 0.55 & 0.45 \\\\ \\hline Slobozia & 0.6 & 0.45 \\\\ \\hline Suceava & 0.55 & 0.4 \\\\ \\hline Targoviste & 0.6 & 0.35 \\\\ \\hline Targu Jiu & 0.55 & 0.4 \\\\ \\hline Targu Mures & 0.5 & 0.35 \\\\ \\hline Timisoara & 0.5 & 0.4 \\\\ \\hline Tulcea & 0.55 & 0.35 \\\\ \\hline Vaslui & 0.5 & 0.45 \\\\ \\hline Zalau & 0.55 & 0.35 \\\\ \\hline Mean & **0.58** & **0.4** \\\\ \\hline \\end{tabular}
\\begin{tabular}{|l|c|c|} \\hline Residence & Modis & Sentinel 2 \\\\ \\hline Alba Iulia & 0.6 & 0.35 \\\\ \\hline Alexandria & 0.55 & 0.4 \\\\ \\hline Arad & 0.6 & 0.4 \\\\ \\hline Bacau & 0.65 & 0.45 \\\\ \\hline Baia Mare & 0.65 & 0.4 \\\\ \\hline Bistrita & 0.6 & 0.4 \\\\ \\hline Botosani & 0.5 & 0.35 \\\\ \\hline Braila & 0.6 & 0.45 \\\\ \\hline Brasov & 0.65 & 0.45 \\\\ \\hline Bucuresti & 0.7 & 0.45 \\\\ \\hline Buzau & 0.55 & 0.35 \\\\ \\hline Calarasi & 0.55 & 0.35 \\\\ \\hline Cluj-Napoca & 0.65 & 0.4 \\\\ \\hline Constanta & 0.55 & 0.4 \\\\ \\hline Craiova & 0.6 & 0.35 \\\\ \\hline Deva & 0.65 & 0.45 \\\\ \\hline Drobeta-Turnu Severin & 0.5 & 0.4 \\\\ \\hline Focsani & 0.55 & 0.4 \\\\ \\hline Galati & 0.6 & 0.35 \\\\ \\hline Giurgiu & 0.55 & 0.4 \\\\ \\hline Iasi & 0.6 & 0.4 \\\\ \\hline Miercurea Ciuc & 0.55 & 0.35 \\\\ \\hline Oradea & 0.65 & 0.35 \\\\ \\hline Piatra-Neamt & 0.65 & 0.45 \\\\ \\hline Pitesti & 0.6 & 0.5 \\\\ \\hline Ploiesti & 0.6 & 0.35 \\\\ \\hline Ramnicu Valcea & 0.6 & 0.4 \\\\ \\hline Resita & 0.5 & 0.45 \\\\ \\hline Satu Mare & 0.55 & 0.35 \\\\ \\hline Sfantu Gheorghe & 0.55 & 0.35 \\\\ \\hline Sibiu & 0.55 & 0.35 \\\\ \\hline Slatina & 0.55 & 0.45 \\\\ \\hline Slobozia & 0.6 & 0.45 \\\\ \\hline Suceava & 0.55 & 0.4 \\\\ \\hline Targoviste & 0.6 & 0.35 \\\\ \\hline Targu Jiu & 0.55 & 0.4 \\\\ \\hline Targu Mures & 0.5 & 0.35 \\\\ \\hline Timisoara & 0.5 & 0.4 \\\\ \\hline Tulcea & 0.55 & 0.35 \\\\ \\hline Vaslui & 0.5 & 0.45 \\\\ \\hline Zalau & 0.55 & 0.35 \\\\ \\hline Mean & **0.58** & **0.4** \\\\ \\hline \\end{tabular}
**Figure 5**
**Histogram of MODIS and Sentinel 2 NDVI values for Bucharest**
_Figure 6_a) Access methods
To access the data we developed two R procedures, one for MODIS and the other for Sentinel 2, using the getSpatialData packages (Schwalb-Willmann, 2022) and the sen2r package (Ranghetti et al, 2022). The download procedure requires a valid account on the EarthData data service, for MODIS, and on the Open Access Hub, for Sentinel 2. The procedure essentially involves selecting a product associated with a mission/sensor (i.e. MODIS TERRA), selecting a window temporal and a region of interest. The native map projection of the data was used for area calculation, sinusoidal projection for MODIS, respectively UTM/WGS84 for Sentinel 2 (it was not necessary to reproject the data for Sentinel 2, given that we did not encounter the problem of a city of residence being located at border region between two adjacent UTM zones).
b) Auxiliary data
As auxiliary data, we used the delimitation of the intra-urban areas of the county-seat cities in vector format, transformed to the native cartographic projection of MODIS and Sentinel data.
c) Data pre-processing.
In the case of MODIS, we extracted the regions of interest from the mosaic raster, applied the discrimination threshold and calculated the area covered by vegetation/the percentage of the total area covered by vegetation. For Sentinel 2 we created a pyramid formed by the spectral bands: B02, B03, B04 and B08, of which we used the first 3 associated with the visible light spectrum to create the \"true\" color image of the surface of interest, respectively B04 and B08 to calculate the NDVI index, then filtered with the associated discrimination threshold.
The R scripts used for computation are available at the following github address: [https://github.com/MarianNecula/NDVI_experimental.git](https://github.com/MarianNecula/NDVI_experimental.git)
## 3 Results
Table 3 shows the results obtained after estimating the green index for MODIS and Sentinel 2 data. Between the two sensors, there are differences between the MODIS and Sentinel 2 green index values, either as a result of the spatial resolution, the processed MODIS data having the resolution of 250 meters per pixel, and Sentinel 2 data 10 meters per pixel. For example, in an area of a pixel (square) with a side of 250 meters or a pixel at the MODIS resolution, vegetation represents the majority class (\\(>\\) 50%) covering that
area, and the rest \\(<=50\\%\\) is represented by other classes, such as built-up areas, then that pixel will be classified as belonging to a vegetated area. In the case of Sentinel 2, the observations being made much more granular, following the same example, the discrimination between a pixel associated with the vegetation class and a pixel associated with another class, is made at the level of a pixel (square) with a side of 10 meters.
**Estimated area of the surfaces covered with vegetation for the county seat cities. -July 2022 Romania**
\\begin{tabular}{|l|c|c|c|c|c|} \\hline & & The surface & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} \\\\ Name & Estimated & \\multicolumn{1}{c|}{of the vegetation} & \\multicolumn{1}{c|}{the surface of the vegetation} & \\multicolumn{1}{c|}{the vegetation} & \\multicolumn{1}{c|}{the vegetation} & \\multicolumn{1}{c|}{the vegetation} \\\\ & Surface & \\multicolumn{1}{c|}{(km2)} & \\multicolumn{1}{c|}{(\\%) - (\\%) -} & \\multicolumn{1}{c|}{\\(\\%\\) - Sentinel} & \\multicolumn{1}{c|}{(km2) -} & \\multicolumn{1}{c|}{(km2) -} \\\\ & & \\multicolumn{1}{c|}{MODIS} & \\multicolumn{1}{c|}{2} & \\multicolumn{1}{c|}{MODIS} & \\multicolumn{1}{c|}{Sentinel 2} \\\\ \\hline Alba Iulia & 15.52 & 23 & 15 & 3.57 & 2.33 \\\\ \\hline Alexandria & 9.62 & 7 & 15 & 0.67 & 1.44 \\\\ \\hline Arad & 45.57 & 13 & 9 & 5.92 & 4.1 \\\\ \\hline Bacau & 34.18 & 27 & 29 & 9.27 & 9.91 \\\\ \\hline Baia Mare & 31.75 & 35 & 31 & 11.12 & 9.85 \\\\ \\hline Bistrita & 16.7 & 31 & 18 & 5.18 & 3.01 \\\\ \\hline Botosani & 15.37 & 5 & 20 & 0.77 & 3.07 \\\\ \\hline Braila & 33.17 & 22 & 28 & 7.34 & 9.28 \\\\ \\hline Brasov & 37.55 & 30 & 18 & 11.3 & 6.76 \\\\ \\hline Bucuresti & 244.51 & 27 & 29 & 66.23 & 70.86 \\\\ \\hline Buzau & 20.52 & 5 & 10 & 1.03 & 2.05 \\\\ \\hline Calarasi & 19.41 & 15 & 12 & 2.93 & 2.33 \\\\ \\hline Clui-Napoca & 93.09 & 59 & 32 & 54.95 & 29.8 \\\\ \\hline Constanta & 44.19 & 4 & 16 & 1.78 & 7.06 \\\\ \\hline Craiova & 44.28 & 12 & 12 & 5.31 & 5.31 \\\\ \\hline Deva & 12.43 & 44 & 36 & 5.47 & 4.47 \\\\ \\hline Drobeta-Turnu & 12.98 & 0 & 6 & 0 & 0.78 \\\\ \\hline Focsani & 11.86 & 2 & 18 & 0.24 & 2.13 \\\\ \\hline Galati & 56.12 & 6 & 18 & 3.39 & 10.09 \\\\ \\hline Giurgiu & 26.39 & 50 & 36 & 13.23 & 9.49 \\\\ \\hline Iasi & 46.42 & 29 & 27 & 13.53 & 12.52 \\\\ \\hline Miercurea Ciuc & 9.14 & 45 & 21 & 4.13 & 1.92 \\\\ \\hline Oradea & 54.69 & 28 & 27 & 15.3 & 14.75 \\\\ \\hline Piatra-Neamt & 18.29 & 36 & 28 & 6.61 & 5.12 \\\\ \\hline Pitesti & 28.91 & 37 & 25 & 10.72 & 7.23 \\\\ \\hline Ploiesti & 50.92 & 40 & 28 & 20.44 & 14.25 \\\\ \\hline Ramnicu Valcea & 10.11 & 34 & 26 & 3.44 & 2.63 \\\\ \\hline Resita & 16.2 & 62 & 40 & 10.03 & 6.47 \\\\ \\hline Satu Mare & 21.81 & 11 & 21 & 2.4 & 4.58 \\\\ \\hline Sfantu Gheorghe & 10.95 & 41 & 26 & 4.5 & 2.84 \\\\ \\hline Sibiu & 25.57 & 8 & 19 & 2.05 & 4.86 \\\\ \\hline Slatina & 17.56 & 9 & 15 & 1.58 & 2.64 \\\\ \\hline Slobozia & 9.68 & 22 & 17 & 2.14 & 1.64 \\\\ \\hline \\end{tabular}
**Romanian Statistical Review nr. 4 / 2023**In Appendix we provide a few maps for the lowest (Drobeta Turnu Severin), highest (Resita), ranked city, respectively the capital city in terms of vegetation coverage estimates.
**Top county residences according to the percentage of the total area covered with vegetation**
## 4 Limitations and Discussions
a) The low resolution of MODIS data can be an impediment, when estimates are needed for small areas, where vegetation is interspersed between buildings and is below the area covered by MODIS pixels (\\(<\\)250m). On the other hand, MODIS can provide time series starting from 2000, by comparison with data provided by the Sentinel 2 mission, starting in 2016. From the literature, we identified a series of algorithms that can be used to fuse MODIS data with those of Sentinel 2 to extend the time span while increasing the spatial resolution of the data.
b) Image quality is an important factor. The data can be affected by 2 problems: the complete or partial absence of data from a satellite pass through the area of interest, for example which can be triggered by errors at the sensor level, or the presence of massive clouds that prevent light radiation from reaching to the sensor. The latter type of problem can be partially countered with data coming from another type of sensor, the active monitoring of the earth's surface, of the radar type, which allows the detection of both landforms and natural or anthropogenic elements that cover the earth's surface. In this case, the complexity of data preprocessing increases accordingly, given the need to calibrate the radar signal for a multitude of factors.
c) The construction of a time series for this type of application is considered to identify if there have been substantial changes in the areas covered with vegetation, e.g. the uncontrolled expansion of residential areas within cities as a result of the boom in the real estate market over the last 20 years.
d) There are many other types of applications for green indices in agriculture or forestry that can be developed based on remote sensing data.
e) Within the project we are considering the design of a data dissemination application through a GIS application, with a spatiotemporal selection and comparison functionality between different areas and time periods.
## 6 Conclusions
This document describes a case study on the use of remote sensing data in official statistics. Starting from a similar project carried out by the Statistics Canada, we used data from the MODIS mission, and additionally data from the Sentinel-2 mission to estimate an urban green index and to make a classification of county residences in Romania according to of the percentage of the city's surface that contains a form of vegetation. At the NIS level (2021) there is a survey that collects and disseminates data on the area covered with vegetation at the level of cities (Verdure spots area in municipalities and towns by macro regions, development regions and counties - matrix GOS103B from the Tempo database), but there are differences at the level of definition of green space from cities. Thus, in the NIS survey, green spaces are considered: parks, public gardens, squares with vegetation, plots with trees and flowers, forests, cemeteries, sports fields and bases, by comparison with the present index that assimilates with vegetation all types of surfaces that contain vegetation, regardless of ownership or use. Using auxiliary data, models or algorithms can be created by which green spaces can be detected or reduced to those listed for the NIS survey to obtain an alternative to the current way of producingthis type of statistic. Also, an argument in favor of the exploitation of this data source, the spatial and temporal granularity of the statistics can be considerably increased, from cities and municipalities to areas of arbitrary size, from an annual coverage to a monthly statistic or even with a bi-frequency monthly. A disadvantage is the considerable pre-processing effort, given the size and specificity of these data, while having a set of interdisciplinary knowledge (GIS, remote sensing, statistics). This classification can be used as a test in the experimental statistics section to check the potential interest of statistical data users relative to the advantages/disadvantages of the data source.
## References
* [1] European Commission 2014. \"Mapping and Assessment of Ecosystems and Their Services.\" [https://ec.europa.eu/environment/nature/knowledge/ecosystem_assessment/pdf/102.pdf](https://ec.europa.eu/environment/nature/knowledge/ecosystem_assessment/pdf/102.pdf).
* [2] European Commission 2013. \"The EU Strategy on Green Infrastructure.\" [https://ec.europa.eu/environment/nature/ecosystems/strategy/index_en.htm](https://ec.europa.eu/environment/nature/ecosystems/strategy/index_en.htm).
* [3] ----. 2016 \"The Urban Agenda for EU.\" [https://futurium.ec.europa.eu/en/urban-agenda/pages/what-urban-agenda-eu](https://futurium.ec.europa.eu/en/urban-agenda/pages/what-urban-agenda-eu).
* [4] ----. 2022 \"Copernicus Earth Observation Programme.\" [https://www.copernicus.eu/en/about-copernicus](https://www.copernicus.eu/en/about-copernicus).
* [5] European Parliament 2013. \"7th Environmental Action Programme.\" [https://eur-lex.europa.eu/legal-content/EN/TXT?uri=CELEX:32013D1386](https://eur-lex.europa.eu/legal-content/EN/TXT?uri=CELEX:32013D1386).
* [6] European Spatial Agency (ESA) 2022a. \"2021 Copernicus Sentinel Data Access Annual Report.\" [https://scihibu.copernicus.eu/twiki/pub/SciHubWebPortal/AnnualReport2021/COPE-SERCO-RP-22-1312-SentinelDataAccessAnnual](https://scihibu.copernicus.eu/twiki/pub/SciHubWebPortal/AnnualReport2021/COPE-SERCO-RP-22-1312-SentinelDataAccessAnnual) ReportY2021 merged v1.0.pdf
* [7] ----. 2022b \"Green Urban Areas Classification.\" [https://land.copernicus.eu/](https://land.copernicus.eu/).
* [8] ----. 2022c. \"Land Monitoring Service.\" [https://land.copernicus.eu/](https://land.copernicus.eu/).
* [9] ----. 2022d. \"Open Access Hub\" [https://scihibu.copernicus.eu/dhus/#home](https://scihibu.copernicus.eu/dhus/#home)
* [10] ----. 2022e. \"Data Formats\" [https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi/data-formats](https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi/data-formats)
* [11] ----. 2022f. \"Level-2AAlgorithm Overview.\" [https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm](https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm)
* [12] Didan, K. Munoz, A. B. Solano, R. Huete, A. 2015 \"MODIS Vegetation Index User's Guide\". [https://vip.arizona.edu/documents/MODIS/MODIS](https://vip.arizona.edu/documents/MODIS/MODIS) VI UsersGuide June 2015.C6.pdf
* [13] HDF Group, 2022. \"General HDF5 User's Guide\". [https://portal.hdfgroup.org/display/HDF5/HDF5+User+Guides](https://portal.hdfgroup.org/display/HDF5/HDF5+User+Guides)
* [14] Huete, A. Justice, C. Van Leeuwen, W.J.D. 1999 \"MODIS vegetation index (MOD13)\" [https://www.researchgate.net/publication/268745810_MODIS_vegetation_index_MOD13](https://www.researchgate.net/publication/268745810_MODIS_vegetation_index_MOD13)
* Heat Transfer by Radiation
- Physical Quantities and Definitions.\" [https://www.iso.org/standard/16943.html](https://www.iso.org/standard/16943.html).
* [16] Institutul National de Statistica, 2014. \"Strategia de dezvoltare a Sistemului Statistic National si a statisticici ofaciale a Romaniel in perioda 2015-2020\". [https://insse.ro/cms/files/legislative/programe%20si%20strategi/Strategie](https://insse.ro/cms/files/legislative/programe%20si%20strategi/Strategie) 2015-2020.pdf
* [17] Institutul National de Statistica, 2021. Suprafata spatiilor vezi in municipli si orase, pe macoregiuni, regiuni de dezvoltare si judetete. [http://statistici.insse.ro:8077/tempo-online/#/pages/ables/insse-table](http://statistici.insse.ro:8077/tempo-online/#/pages/ables/insse-table)18. National Space Agency (NASA) 2022a. \"EarthData\" [https://search.earthdata.nasa.gov/search](https://search.earthdata.nasa.gov/search)
* 19. National Space Agency (NASA) 2022b. \"Moderate Resolution Imaging Spectroradiometer.\" [https://modis.gsfc.nasa.gov/](https://modis.gsfc.nasa.gov/)
* Indicatorul 70.\" [https://indicators.report/targets/11-7/](https://indicators.report/targets/11-7/).
* 21. Ranghetti, L. Boschett, M. Nutini, F. Busetto, L. 2022. \"sen2r: An R toolbox for automatically downloading and preprocessing Sentinel-2 satellite data\" [https://sen2r.ranghetti.info/](https://sen2r.ranghetti.info/)
* 22. Schwalb-Willmann, J. 2022. \"getSpatialData R package\" [https://github.com/16EAGLE/getSpatialData](https://github.com/16EAGLE/getSpatialData)
* 23. STATCAN, 2021. \"Urban greenness, 2001, 2011 and 2019\". [https://www150.statcan.gc.ca/n1/pub/16-002-x/2021001/article/00002-eng.htm](https://www150.statcan.gc.ca/n1/pub/16-002-x/2021001/article/00002-eng.htm)
* 24. UNSD Task Team on Earth Observations (2017), \"Earth Observations for Official Statistics. Satellite Imagery and Geospatial Data Task Team report\" [https://unstats.un.org/bigdata/task-teams/earth-observation/UNGWG_Satellite_Task_Team_Report_WhiteCover.pdf](https://unstats.un.org/bigdata/task-teams/earth-observation/UNGWG_Satellite_Task_Team_Report_WhiteCover.pdf)
* 25. Weier, J. Herring, D. 2000 Measuring Vegetation (NDVI & EVI) [https://earthobservatory.nasa.gov/features/Measuring](https://earthobservatory.nasa.gov/features/Measuring) Vegetation
| _The modernization of official statistics involves the use of new data sources, such as data collected through remote sensing. The document contains a description of how an urban green index, derived from the SDG 11.7 objective, was obtained for Romania's 41 county seat cities based on free data sets collected by remote sensing from the European and North American space agencies. The main result is represented by an estimate of the areas of surfaces covered with vegetation for the 40 county seat towns and the municipality of Bucharest, relative to the total surface. To estimate the area covered with vegetation, we used two data sets obtained by remote sensing, namely data provided by the MODIS mission, the TERRA satellite, and data provided by the Sentinel 2 mission from the Copernicus space program. Based on the results obtained, namely the surface area covered with vegetation, estimated in square kilometers, and the percentage of the total surface area or urban green index, we have created a national top of the county seat cities._
**Keywords:**_ official statistics, experimental statistics, remote sensing, urban vegetation._
**JEL: R140_**
1. corresponding author. | Summarize the following text. | 234 |
G. Cortellessa
1 Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Cassino, FR, Italy 1
L. Stabile
1 Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Cassino, FR, Italy 1
F. Arpino
1 Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Cassino, FR, Italy 1
D.E. Faleiros
2 Maritime and Transport Technology, TU Delft, Netherlands 2
W. van den Bos
3 International Laboratory for Air Quality and Health, Queensland University of Technology, Brisbane, Qld, Australia 3
L. Morawska
3 International Laboratory for Air Quality and Health, Queensland University of Technology, Brisbane, Qld, Australia 3
G. Buonanno
1 Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Cassino, FR, Italy 1
## 1 Introduction
The COVID-19 pandemic has highlighted the key role of droplets exhaled during respiratory activities (such as oral breathing, speaking, sneezing, coughing) as potential virus carriers leading to risk of infection and/or disease (Morawska and Cao, 2020). Three possible routes of transmission are generally considered: the fomite route, the large droplet route, and the airborne droplet route (Li, 2021a).
The large droplet route has been incorrectly considered to be the primary route for most respiratory infections since the beginning of the last century (Chapin, 1912; Flugge, 1897), and the associated protective measure of social distancing of 1-2 m (varying in each country as a function of activities) is known and imposed worldwide. The large droplet route is commonly identified with close contact, during which an infectious bacterium or virus can be transmitted effectively via specific routes, as discussed below. Despite its importance, a recent search of the literature revealed that the characterization of exposure to droplets exhaled at close contact remains surprisingly unexplored. We point out that the term "droplets" in this paper refers to all sizes, from the smallest droplets of sub-micron size to the largest ones with dimensions of hundreds of microns. Recent studies have identified two major routes for close contact transmission (Chen et al., 2020): the large droplet route and the airborne droplet route. The large droplet route concerns the deposition of large droplets from the infected subject on the mucous membranes (lips, eyes, nostrils) of the susceptible subject (Gallo et al., 2021; Lu et al., 2020). According to Xie et al. (2007), who revisited previous guidelines by Wells (1934) with improved evaporation and settling models, droplets larger than about 100 µm in diameter rapidly settle out of the air by gravity, with the infective range being within a short distance of the source. In contrast, the airborne droplet route involves droplets smaller than about 100 µm in diameter that, as soon as they are exhaled, decrease their diameter by evaporation and can be inhaled by the receiving host through interconnected multiphase flow processes (Balachandar et al., 2020). In fact, the surrounding ambient air enables the exhaled droplets to evaporate and rapidly shrink to droplet nuclei, reaching a final size that is governed by the initial amount of non-volatiles. As an example, by considering the average peak value of exhaled air velocity for breathing and speaking and the evaporation time, a droplet with a 50 µm diameter would shrink to a droplet nucleus before it reached a nearby person (Zhang et al., 2020).
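To put the ~100 µm threshold in perspective, a rough order-of-magnitude check can be made with the Stokes settling velocity (our illustration, assuming a droplet density of 1000 kg m⁻³ and standard air viscosity; Stokes drag is only marginal at this size, so the figure is indicative):

\[v_{s}=\frac{\rho_{d}\,g\,d_{d}^{2}}{18\mu}\approx\frac{1000\cdot 9.81\cdot\left(10^{-4}\right)^{2}}{18\cdot 1.81\times 10^{-5}}\approx 0.3\ \mathrm{m\ s^{-1}}\]

A 100 µm droplet released at mouth height (about 1.5 m) therefore settles to the ground in roughly 5 s, confining it to the immediate vicinity of the source, whereas substantially smaller droplets settle orders of magnitude more slowly and remain available for inhalation.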
Recently, Chen et al. (2020), using a simple mathematical model of exhaled flows and both droplet deposition and inhalation phenomena, found the airborne droplet route to dominate in the social distancing range of 1-2 m during both speaking and coughing. The large droplet route (\(>\) 100 µm) only dominates when the distance is lower than 0.2 m while talking or 0.5 m while coughing, whereas when the subjects are more than 0.3 m apart, the large droplet route can be neglected even while coughing.
Although the work of Chen et al. (2020) represents a novel analysis in the investigation of infection transmission in close contact, it has limitations mainly associated with the simplicity of the proposed analytical model, i.e. adopting steady-state conditions, not considering the fluid dynamics connected to the breathing of the subjects, and using corrective coefficients to simulate the inhalation process. In this regard, thermo-fluid dynamic modeling represents a more advanced approach to solve the complex flow behaviors (i.e. three-dimensional, transient) typical of respiratory activities, and of the related droplet emission and inhalation, which cannot be captured by ordinary analytical calculation. However, to date,
A further limitation of the analysis proposed by Chen et al. (2020) is that it estimates the fluid dynamics of droplets but does not investigate the issues related to host-to-host viral airborne transmission, which is fundamental in an infectious risk assessment. To this end, the authors recently presented an approach to evaluate the viral load emitted by infected individuals (Buonanno et al., 2020b) that takes into account the effect of other parameters such as inhalation rate, type of respiratory activity, and activity level. Such an approach has been applied to prospective and retrospective cases (Buonanno et al., 2020a), and then extended to viruses other than SARS-CoV-2 (Mikszewski et al., 2021), adopting a simplified zero-dimensional model and allowing the risk of infection due to airborne droplets to be estimated in different indoor scenarios involving people sharing room air and maintaining distancing. Nonetheless, the well-mixed hypothesis of such an approach cannot be applied when it comes to evaluating the risk of infection in close contact scenarios.
On the basis of the abovementioned approach allowing evaluation of the viral load emitted, in this paper the authors present an integrated approach aimed at assessing the close contact risk of infection from SARS-CoV-2. For this purpose, a numerical approach has been developed to estimate the volume of the droplets and the corresponding viral load received by a susceptible subject (through inhalation and deposition) at different distances in close contact scenarios (distance less than 2 m). Particle image velocimetry (PIV) measurements were conducted to characterize the air flow exhaled during human expiratory activities to validate the modelling results. Therefore, on the basis of the integrated approach between thermo-fluid dynamic modeling of exhaled droplets and viral load, an infectious risk assessment is presented for a close contact scenario represented by a speaking infected subject (emitter) and a susceptible subject (receiver) in the case of a face-to-face orientation and stagnant air conditions.
## 2 Materials and methods
The proposed SARS-CoV-2 infectious risk assessment is characterized by an integrated approach, based on the following main steps: (i) development of a three-dimensional Eulerian-Lagrangian numerical model to describe droplet spread once emitted by a speaking person in transient conditions; this is based on an Eulerian-Lagrangian approach, in which the continuum equations are solved for the air flow and Newton's equation of motion is solved for each droplet (sections 2.1, 2.2); (ii) PIV measurements to define the boundary conditions and to validate the numerical model (section 2.3); iii) definition of a droplet emission model including droplet diameters from 0.5 \\(\\upmu\\)m to 800 \\(\\upmu\\)m emittedby an adult while speaking (section 2.4); and iv) infectious SARS-CoV-2 risk assessment in a close contact scenario by considering the contributions of the large droplet and airborne droplet routes as well as the distance between the speaking infected subject and a susceptible subject (section 2.5). The assessment has been performed in stagnant air conditions, which clearly represents the worst scenario in terms of virus spread as it could occur also in outdoor environments with negligible wind speed.
### Eulerian-Lagrangian based model to simulate the droplet dynamics at close contact
The Computational Fluid Dynamics (CFD) technique has been adopted for numerical description of velocity, pressure, and temperature fields, along with the motion and interaction of the droplets with the fluid. The fully open source finite volume based openFOAM software has been employed. This choice was dictated by the need to have a fully open and flexible tool with complete control over the solved partial differential equations (PDEs), boundary conditions, and correlations employed for SARS-CoV-2 risk assessment. The substantial complexity of the adopted approach has paid off by allowing detailed description in space and time of thermo-fluid dynamic fields and associated droplet motion. Additionally, the use of the openFoam software offered the ability to directly access the source code, so modifying available mathematical models, boundary conditions, and thermophysical models is possible, as well as implementing new ones if necessary.
From a mathematical point of view, the droplet motion inside the air flow has been modeled by employing the Lagrangian particle tracking (LPT) approach, based on a dispersed dilute two-phase flow. In particular, the spacing between droplets in the exhaled air plume is sufficiently large and the volume fraction of the droplets sufficiently low (\\(<\\!10^{-3}\\)) to justify the use of a Eulerian-Lagrangian approach, in which the continuum equations are solved for the air flow (continuous phase) and Newton's equation of motion is solved for each droplet. The continuum equations solved for an unsteady incompressible Newtonian fluid are widely described in the available scientific literature (Arpino et al., 2014; Massarotti et al., 2006; Scungio et al., 2013) and are not reported here for brevity.
Since the flow regime associated to breathing activity is laminar, no turbulence has been considered in the numerical investigations. The droplet motion has been described using a solving approach based on the following LPT equation:
\\[m_{d}\\frac{d\\mathbf{u}_{d}}{dt}=\\mathbf{F}_{D}+\\mathbf{F}_{g} \\tag{1}\\]
and
\[\frac{d\mathbf{x}_{d}}{dt}=\mathbf{u}_{d} \tag{2}\]

where \(m_{d}\) (kg) is the mass of the droplet; \(\mathbf{u}_{d}\) (m s\(^{-1}\)) represents the droplet velocity; \(t\) (s) is the time; \(\mathbf{F}_{D}\) (N) and \(\mathbf{F}_{g}\) (N) are, respectively, the drag and gravity forces acting on the droplet; and \(\mathbf{x}_{d}\) (m) represents the trajectory of the droplet. The drag force is given by Crowe (2011):
\[\mathbf{F}_{D}=m_{d}\,\frac{18\mu}{\rho_{d}\,d_{d}^{2}}\,\frac{C_{D}\,Re_{d}}{24}\,(\mathbf{u}-\mathbf{u}_{d}) \tag{3}\]
In eq. (3), \(\rho_{d}\) (kg m\(^{-3}\)), \(d_{d}\) (m) and \(Re_{d}\) represent, respectively, the density, diameter and Reynolds number of the droplet, while \(\mu\) (kg m\(^{-1}\) s\(^{-1}\)) is the air dynamic viscosity. The droplet density has been considered constant and equal to 1200 kg m\(^{-3}\). The \(Re_{d}\) is calculated as:
\\[Re_{d}=\\frac{\\rho(|\\mathbf{u}-\\mathbf{u}_{d}|)d_{d}}{\\mu} \\tag{4}\\]
where \\(\\rho\\left(\\frac{kg}{m^{3}}\\right)\\) is the air density, whereas the drag coefficient, \\(C_{D}\\), in equation (3) is evaluated as a function of the droplet Reynolds number:
\\[C_{D}=\\begin{cases}\\frac{24}{Re_{d}}&\\text{if }Re_{d}<1\\\\ \\frac{24}{Re_{d}}(1+0.15\\cdot Re_{d}^{0.687})&\\text{if }1\\leq Re_{d}\\leq 1000 \\\\ 0.44&\\text{if }Re_{d}>1000\\end{cases} \\tag{5}\\]
Droplet collisions are considered to be elastic, and the equations of motion for the droplets are solved assuming a two-way coupling: the flow field affects the droplet motion and vice-versa.
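For illustration, the drag law of eqs. (1)-(5) can be exercised on a single droplet with a few lines of code. The following Python sketch (our addition; the explicit Euler integration, the air properties, and the release conditions are illustrative assumptions, not the openFOAM implementation used in this work) advances one droplet in a prescribed air velocity field:

```python
import numpy as np

RHO_AIR = 1.2      # air density (kg/m^3), assumed standard conditions
MU_AIR = 1.81e-5   # air dynamic viscosity (kg/(m s)), assumed
RHO_D = 1200.0     # droplet density used in the paper (kg/m^3)
G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration (m/s^2)

def drag_coefficient(re):
    """Piecewise drag law of eq. (5)."""
    if re < 1.0:
        return 24.0 / re
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.44

def step(x, v, u_air, d, dt):
    """One explicit Euler step of eqs. (1)-(4) for a single droplet.
    x, v: droplet position (m) and velocity (m/s); u_air: local air
    velocity (m/s); d: droplet diameter (m); dt: time step (s)."""
    rel = u_air - v
    re = max(RHO_AIR * np.linalg.norm(rel) * d / MU_AIR, 1e-12)  # eq. (4)
    # drag acceleration = F_D / m_d, from eq. (3)
    a_drag = (18.0 * MU_AIR / (RHO_D * d ** 2)
              * drag_coefficient(re) * re / 24.0 * rel)
    v_new = v + (a_drag + G) * dt   # eq. (1)
    x_new = x + v_new * dt          # eq. (2)
    return x_new, v_new

# Example: 100 um droplet released horizontally at 5 m/s into still air.
x, v = np.zeros(3), np.array([5.0, 0.0, 0.0])
for _ in range(10000):              # 1 s of physical time with dt = 1e-4 s
    x, v = step(x, v, np.zeros(3), 100e-6, 1e-4)
print(x, v)
```

With these values the droplet decelerates within roughly ten centimetres of the mouth and then settles at a few tenths of a metre per second, consistent with the short range of the large droplet route discussed in the Introduction.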
### 2.2 Scenario analyzed: close contact during speaking
The Eulerian-Lagrangian based model described in section 2.1 has been applied to the analysis of droplet dispersion in close contact during speaking. In particular, face-to-face interactions between two subjects (infected emitter and susceptible receiver) of the same height, located at different distances in the range 0.25-1.75 m, were studied. The susceptible subject was considered to be a mouth-breather, thus airborne droplets were inhaled through the mouth. Moreover, because oral, nasal (Gallo et al., 2021), and ocular mucosa (Lu et al., 2020) have been recognized as possible transmission routes for respiratory viruses, large droplet deposition onto mouth, ocular and nostril surfaces due to their inertial trajectories was estimated through the Eulerian-Lagrangian numerical model.
In Figure 1, the computational domain including the external surfaces, the emitter, and the receiver is illustrated. The CAD files for the emitter and receiver were obtained as open-source models from the website "gradcad.com". The mathematical model described in section 2.1 was numerically solved using the open source openFOAM software, based on the finite volume technique, under the boundary conditions presented in Table 1.
The mouths and nostrils were modeled as circular surfaces with a diameter of 1.13 cm, and the eyes were modeled as ellipses with axes of 2.76 and 1.38 cm, respectively (Chao et al., 2009; Chen et al., 2020).
In Figure 2, the boundary condition in terms of air velocity at the mouths of the emitter and receiver is also graphed; in particular, a sinusoidal approximation of breathing is adopted to realistically simulate a real interaction between two subjects. The volumetric flow rates were selected as the average values among those indicated by Abkarian et al. (2020): 1 L s\(^{-1}\) for speaking and 0.45 L s\(^{-1}\) for mouth breathing. In particular, the transient sinusoidal velocity profile applied at the receiver mouth presents an amplitude of 1 m s\(^{-1}\) and a frequency of 0.2 s\(^{-1}\), assuming a time period of 5 s for a full breath. The amplitude value was selected on the basis of the PIV measurement results reported in section 3.1. Velocity peaks of 5 m s\(^{-1}\) superimposed on the sinusoidal velocity profile were considered for the emitter during speaking, as confirmed by Abkarian et al. (2020) and by the PIV experimental analysis for speaking (see PIV results in section 3.1). As concerns the velocity vector direction from the emitter's mouth, a conical jet flow was considered, adopting a cone angle equal to 22\({}^{\circ}\) with random velocity directions in intervals of 0.1 s. This adopted angle was calculated by Abkarian et al. (2020) to enclose 90% of the particles in a cone passing through the mouth exit, and was verified to remain stable with time after the initial cycles.
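The time-dependent inlet condition just described lends itself to a compact implementation. The following Python sketch (our addition; the burst shape and the cone-sampling routine are illustrative assumptions — the text only specifies the 1 m s\(^{-1}\) sinusoidal amplitude, the 5 m s\(^{-1}\) speech peaks, the 0.2 s\(^{-1}\) frequency, and the 22° cone with directions redrawn every 0.1 s) generates the mouth velocity signal and a random jet direction:

```python
import numpy as np

def mouth_velocity(t, speaking=False):
    """Mouth velocity magnitude (m/s) at time t (s): 1 m/s sinusoid at
    0.2 Hz (5 s per full breath); for the speaking emitter, bursts
    reaching about 5 m/s are superimposed on the exhalation phase."""
    u = 1.0 * np.sin(2.0 * np.pi * 0.2 * t)  # >0 exhalation, <0 inhalation
    if speaking and u > 0.0:
        # assumed burst waveform; only the ~5 m/s peak is from the text
        u += 4.0 * max(0.0, np.sin(2.0 * np.pi * 2.0 * t))
    return u

def jet_direction(rng, cone_angle_deg=22.0):
    """Unit vector uniformly distributed inside a cone about the +x axis;
    in the simulation a new direction is drawn every 0.1 s (here we
    assume 22 deg is the full opening angle)."""
    half = np.deg2rad(cone_angle_deg) / 2.0
    cos_t = rng.uniform(np.cos(half), 1.0)   # uniform over the spherical cap
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return np.array([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)])

rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 0.1)
u = [mouth_velocity(ti, speaking=True) for ti in t]
print(max(u))           # approaches the 5 m/s speaking peak
print(jet_direction(rng))
```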
Careful attention was paid to the computational mesh construction; in particular, simulations were performed employing hexahedral-based unstructured computational grids, realized employing the open source snappyHexMesh algorithm; the final grid was chosen on the basis of a proper mesh sensitivity analysis. In particular, three meshes were selected: Mesh 1 composed of 687380 elements, Mesh 2 composed of 1801060 elements, and Mesh 3 composed of 3023827 elements. The average percent deviation amongst the velocity fields obtained comparing Mesh 1 and Mesh 2 was equal to 6.56%, while comparing Mesh 2 and Mesh 3 resulted in an average percent deviation equal to 1.93%. Because the percent deviation amongst the velocity fields obtained between Mesh 2 and Mesh 3 was low (lower than 5%), the simulations were carried out adopting Mesh 2. As an example, Figure 3 shows the computational grid employed (Mesh 2) to simulate droplet spread in the case of an interpersonal distance of 0.76 m. The grid is refined in correspondence of the solid surfaces, where a boundary layer region is added to better capture the viscous region gradients, and presents a maximum non-orthogonality value of about 50.

Figure 2: Schematization of the surfaces of interest for emitter and receiver (eyes, nostrils, and mouth) and the transient velocity profile adopted as a boundary condition at the emitter and receiver mouths.
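As a sketch of how such a mesh-independence metric can be computed (our addition; the paper does not specify the exact norm used, so the point-wise relative deviation of velocity magnitude below is one plausible choice):

```python
import numpy as np

def mean_percent_deviation(u_coarse, u_fine):
    """Average percent deviation between velocity fields sampled at the
    same probe locations, comparing velocity magnitudes."""
    mag_c = np.linalg.norm(u_coarse, axis=-1)
    mag_f = np.linalg.norm(u_fine, axis=-1)
    mask = mag_f > 1e-9  # avoid division by zero in quiescent regions
    return 100.0 * np.mean(np.abs(mag_c[mask] - mag_f[mask]) / mag_f[mask])

# Example with synthetic fields at 1000 probe points:
rng = np.random.default_rng(1)
u_mesh2 = rng.normal(size=(1000, 3))
u_mesh3 = u_mesh2 + rng.normal(scale=0.02, size=(1000, 3))
print(f"{mean_percent_deviation(u_mesh2, u_mesh3):.2f}%")  # accept if < 5%
```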
### 2.3 Particle image velocimetry experimental investigations
PIV measurements, aimed at validating the fluid-dynamic simulation results, were performed at TU Delft's laboratories to study the air flow exhaled during human expiratory activities. The experimental setup (Figure 4) consists of an sCMOS camera from LaVision (\(2560\times 2160\) px) coupled with a Nikon objective lens (35 mm focal length), an Nd:YAG Quantel laser (Evergreen, 200 mJ per pulse) and a smoke generator. The laser sheet (2-3 mm thickness) was formed from below the mouth, and passed through the subject's mid-plane, whereas the sCMOS camera was positioned approximately at the subject's mouth height, at a distance of 80 cm from the laser sheet, with the objective's axis perpendicular to it. The image magnification was 0.05, rendering a field of view of 30 cm (height) \(\times\) 36 cm (width), while the resolution was 0.14 mm px\(^{-1}\). Images were acquired in frame-straddling mode (double frame, single exposure) at 10 Hz, with a time interval of 500 \(\upmu\)s between frames.

Figure 3: Computational grid employed (Mesh 2, 1801060 elements) to simulate droplet spread in the case of an interpersonal distance of 0.76 m.
The subject (male, 32 years old, 1.84 m, 80 kg) was protected against the laser light by safety goggles and a black screen positioned in front of him (not shown), with a 5 cm diameter opening for the mouth-exhaled air. A 3 cm long cylinder of the same diameter was placed at the opening to help position the head and to block the laser light from below. The head was positioned with the subject's nose slightly touching the upper surface of the cylinder; therefore, inhalation and exhalation through the nose did not influence the measured flow velocities. The entire setup, including the subject, was encompassed by a black tent (about 15 m\\({}^{3}\\)), whose main objective was to contain the smoke. The entire tent was filled with smoke by turning the smoke generator on for about 2 s with the tent closed, and waiting for about 10 min for the smoke to become homogeneously spread and the flow disturbances due to the smoke generator to become negligible.
Three different respiratory activities were investigated: inhaling through the nose and exhaling from the mouth, inhaling through and exhaling from the mouth, and speaking. Each activity was recorded for a duration of 50 s (500 images), which comprised about five respiratory cycles. The speaking activity consisted of reciting an excerpt from the rainbow passage (Fairbanks, 1941), a speech often used for the study of voice and articulation and representative of the multiple sounds of the English language: _"When the sunlight strikes raindrops in the air, they act as a prism and form a rainbow. The rainbow is a division of white light into many beautiful colours. These take the shape of a long round arch, with its path high above, and its two ends apparently beyond the horizon. There is, according to legend, a boiling pot of gold at one end. People look, but no one ever finds it. When a man looks for something beyond his reach, his friends say he is looking for the pot of gold at the end of the rainbow."_

Figure 4: Particle image velocimetry experimental setup.
The images were processed via cross-correlation analysis, using the software DaVis 8.4 from LaVision. The final interrogation window was \\(48\\times 48\\) px (\\(7\\times 7\\) mm\\({}^{2}\\)) with 75% overlap, yielding about 160 \\(\\times\\) 200 vectors per image. Typical uncertainty of a PIV displacement measurement is 0.1 px (Raffel et al., 2018); because the velocity magnitude close to the mouth varied in the range of 1-5 m s\\({}^{\\text{-1}}\\) (3-18 px), the uncertainty of the instantaneous velocity is estimated to be within 0.5%-3%.
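The uncertainty bound quoted above follows directly from the imaging parameters; the short sketch below reproduces the arithmetic (our own illustration, using the 0.14 mm px\(^{-1}\) resolution and 500 \(\upmu\)s frame interval stated earlier).

```python
resolution_mm_per_px = 0.14
dt_s = 500e-6
sigma_px = 0.1                      # typical PIV displacement uncertainty

for v in (1.0, 5.0):                # velocity magnitude range near the mouth, m/s
    displacement_px = v * dt_s / (resolution_mm_per_px * 1e-3)
    print(f"{v} m/s -> {displacement_px:.1f} px, "
          f"relative uncertainty {100 * sigma_px / displacement_px:.1f}%")
# 1 m/s -> ~3.6 px (~2.8%), 5 m/s -> ~17.9 px (~0.6%)
```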
### Droplet emission
The number of droplets exhaled by the infected subject as a function of the diameter per unit time, i.e. the droplet number emission rate (ER\({}_{\text{N}}\), droplet s\({}^{-1}\)), was estimated starting from the number distribution of the droplets emitted by an adult person provided by Johnson et al. (2011) and Morawska et al. (2009). They measured the droplet distribution from 0.5 \(\upmu\)m to about 1000 \(\upmu\)m in close proximity of an adult person's mouth while speaking, so that droplet evaporation could be considered negligible. Such a measurement was extremely challenging; indeed, the experimental analysis was performed in a purpose-built wind tunnel (named the expired droplet investigation system, EDIS) applying two separate measurement techniques to cover the entire size range: an aerodynamic particle sizer (up to 20 \(\upmu\)m) and a droplet deposition analysis (20-1000 \(\upmu\)m). For the sake of brevity, the experimental analyses performed in that study are not exhaustively described here; interested readers can refer to the original papers for further details.
To make the simulations affordable, the droplet distributions (Johnson et al., 2011; Morawska et al., 2009) were fitted through a simplified distribution. In particular, from the number distribution provided by Johnson et al. (2011) and Morawska et al. (2009), the volume distribution was calculated considering spherical droplets; then both number and volume distributions were fitted through simplified distributions made up of seven diameters (i.e. seven size ranges). Because the evaporation phenomenon occurs quickly as soon as the droplets are emitted (Balachandar et al., 2020; Xie et al., 2007), in the present paper the post-evaporation number and volume distributions were considered in the simulation. To this end, the volume droplet distribution before evaporation (i.e. as emitted) was reduced to that resulting from the quick evaporation, which is the volume fraction of non-volatiles in the initial droplet, here considered equal to 1% (Balachandar et al., 2020). Therefore, the droplet shrinkage due to evaporation reduces the droplet diameter to about 20% of the initial emitted size. Additionally, the shrinking effect is not homogeneous over the entire size range. In particular, as reported in the scientific literature (Balachandar et al., 2020; Xie et al., 2007), the evaporation is slow for very small droplets (\(<1\) \(\upmu\)m) and quite negligible for large droplets. Therefore, we (i) grouped all the droplets \(<1\) \(\upmu\)m after the evaporation (i.e. droplets \(<4.6\) \(\upmu\)m at emission) in a single size interval labelled as 1 \(\upmu\)m droplet diameter; (ii) considered the droplet nuclei resulting from the evaporation process for droplets with an initial diameter of 4.6-90 \(\upmu\)m (reduced to droplets with a diameter of 1-19.2 \(\upmu\)m after evaporation); and (iii) neglected the evaporation for droplets \(>90\) \(\upmu\)m at emission. The resulting number and volume distributions are summarized in Table 2 and in Figure 5. In Table 2 the resulting droplet number (ER\({}_{\rm N}\)) and volume (ER\({}_{\rm V}\), pre-evaporation) emission rates are also reported, calculated by multiplying the number (or volume) concentration at each size by the expiration flow rate of a speaking subject while standing (1.0 L s\({}^{-1}\), average value measured for an adult by Abkarian et al. (2020)). The total droplet number and volume concentrations are reported in Table 2: the total droplet number concentration (0.25 droplet cm\({}^{-3}\)) is the same before and after evaporation, whereas a small variation was recognizable for the volume concentration (\(6.27\times 10^{-5}\) and \(6.19\times 10^{-5}\) \(\upmu\)L cm\({}^{-3}\) before and after evaporation, respectively) due to shrinkage of droplets initially \(<90\) \(\upmu\)m. In terms of the number concentration (or emission rate), the contribution of the airborne droplets is 98%, whereas it is only 1% and 0.01% in terms of volume concentration (or emission rate) before and after evaporation, respectively, thus confirming that a small number of large droplets contributes most of the total volume emitted.
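The ~20% shrinkage figure follows from the 1% non-volatile volume fraction; the sketch below (our own illustration, not from the original study) shows the conversion applied to the boundary diameters quoted above.

```python
non_volatile_fraction = 0.01                     # volume fraction surviving evaporation
shrink = non_volatile_fraction ** (1.0 / 3.0)    # diameter ratio ~ 0.215

for d_emitted in (4.6, 90.0):                    # boundary emitted diameters, micrometres
    print(f"{d_emitted} um at emission -> {d_emitted * shrink:.1f} um nucleus")
# 4.6 um -> ~1.0 um, 90 um -> ~19.4 um (text: 1-19.2 um after evaporation)
```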
Table 2: Pre- and post-evaporation droplet diameters (d\({}_{d}\), \(\upmu\)m), number (dN/dlog(d), droplet cm\({}^{-3}\)) and volume (dV/dlog(d), \(\upmu\)L cm\({}^{-3}\)) distributions, and the resulting droplet number (ER\({}_{\rm N}\), droplet s\({}^{-1}\)) and volume (ER\({}_{\rm V}\), \(\upmu\)L s\({}^{-1}\)) emission rates for the seven size ranges adopted in the simulations.
### Estimation of the dose received by the susceptible subject and infectious risk assessment
The viral load carried by the droplets exhaled by the infected subject was evaluated as the product of the droplet volume (discussed in the previous section) and the corresponding viral load. The viral load of an infected subject, \(c_{v}\), can vary significantly (several orders of magnitude) (Buonanno et al., 2020a, 2020b; Mikszewski et al., 2021); thus, to achieve a proper infection risk assessment of an exposed subject, all the possible viral load data should be considered. In other words, when calculating the dose of RNA copies received by the susceptible subject (through inhalation or deposition), the probability distribution function of \(c_{v}\) values should be considered, which is the probability of occurrence of each \(c_{v}\) value. Data on the viral load in sputum so far available in the scientific literature (Fajnzylber et al., 2020; Pan et al., 2020; Wolfel et al., 2020) can be fitted through a log-normal distribution characterized by a mean and standard deviation of \(\log_{10}c_{v}\) equal to 5.6 and 1.2 \(\log_{10}\) RNA copies mL\(^{-1}\) (Mikszewski et al., 2021), i.e. 1st, 50th, and 99th percentiles equal to \(6.4\times 10^{2}\), \(4.0\times 10^{5}\), and \(2.5\times 10^{8}\) RNA copies mL\(^{-1}\), respectively.
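The percentiles quoted above can be verified directly from the log-normal parameters; the following minimal sketch (ours, using SciPy's normal quantile function) reproduces them.

```python
from scipy.stats import norm

mu, sigma = 5.6, 1.2                  # mean and std of log10(cv), RNA copies/mL
for p in (0.01, 0.50, 0.99):
    cv = 10 ** (mu + sigma * norm.ppf(p))
    print(f"{p:.0%} percentile: {cv:.1e} RNA copies/mL")
# -> 6.4e+02, 4.0e+05, 2.5e+08, matching the values in the text
```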
The large and airborne droplet doses of RNA copies (\\(D_{large}\\) and \\(D_{airborne}\\)) received by the susceptible subject for each \\(c_{v}\\) value were calculated as:
\[D_{large}(c_{v})=\int_{0}^{T}V_{d\text{-}large}(t)\cdot c_{v}\,dt \tag{6}\]

\[D_{airborne}(c_{v})=\int_{0}^{T}V_{d\text{-}airborne\text{-}pre}(t)\cdot c_{v}\,dt\]

Figure 5: Droplet number (a) and volume (b) distributions adopted in the simulations as fitted through seven size ranges; in particular, distributions pre- and post-evaporation are reported to show how airborne and large droplets are affected by the evaporation phenomenon.
where \\(V_{d\\text{-}\\text{\\it large}}(t)\\) and \\(V_{d\\text{-}\\text{\\it airborne-}pre}(t)\\) are the doses of airborne droplets inhaled and large droplets deposited as a function of the exposure time (\\(t\\)), and \\(T\\) is the total exposure time. The authors point out that the viral load carried by the droplet is related to the initial droplet volume (i.e. before evaporation), and evaporation leads to a reduction in the droplet volume. The RNA copies do not evaporate; thus, the \\(V_{d\\text{-}\\text{\\it airborne-}pre}\\) term refers to the dose of airborne droplets calculated with the initial (pre-evaporation) volume. The \\(V_{d\\text{-}\\text{\\it airborne-}pre}\\) term has been adopted to distinguish it from the actual doses of airborne droplets inhaled (\\(V_{d\\text{-}\\text{\\it airborne-}post}\\)), i.e. droplets with the actual volume at the time of inhalation (i.e. post-evaporation). The total dose of RNA copies received by the exposed subject for each \\(c_{v}\\) value was then evaluated by summing up the deposition and inhalation contributions, i.e. \\(D_{total}(c_{v})=D_{large}(c_{v})+D_{airborne}(c_{v})\\).
From the dose of RNA copies, the probability of infection (\\(P_{l}\\)) of the exposed subject for each \\(c_{v}\\) was calculated adopting a well-known exponential dose-response model (Haas, 1983; Sze To and Chao, 2010; Watanabe et al., 2010):
\[P_{l}(c_{v})=1-e^{-\frac{D_{total}(c_{v})}{HID_{63}}}\qquad(\%) \tag{7}\]
where \\(HID_{63}\\) represents the human infectious dose for 63% of susceptible subjects, i.e. the number of RNA copies needed to initiate the infection with a probability of 63%. For SARS-CoV-2, a \\(HID_{63}\\) value of \\(7\\times 10^{2}\\) RNA copies was adopted based on the thermodynamic-equilibrium dose-response model developed by Gale (2020).
The individual risk of infection (\\(R\\)) of the exposed person was then calculated by integrating, for all the possible \\(c_{v}\\) values, the product between the conditional probability of the infection for each \\(c_{v}\\) (\\(P_{l}(c_{v})\\)) and the probability of occurrence of each \\(c_{v}\\) value (P\\({}_{cv}\\)):
\\[R=\\int_{c_{v}}\\big{(}P_{l}(c_{v})\\cdot P_{cv}\\big{)}dc_{v}\\hskip 36.135pt(\\%) \\tag{8}\\]
The authors point out that in the present analysis an equal amount of RNA copies received by inhalation of airborne droplets or by deposition of large droplets was considered to cause the same effect in terms of infection.
## 3 Results and discussion
### Particle image velocimetry measurements and numerical results
As mentioned in the methodology section, PIV measurement results for a mouth breathing case study provided the required information to choose the velocity boundary conditions employed in the computational fluid dynamics (CFD) numerical simulations summarized in section 2.2.
In particular, the adopted boundary condition for the mouth-breathing receiver was verified by comparing numerical results with PIV data in terms of velocity profiles obtained at different distances from the mouth of the emitter. This comparison also provided a rough confirmation of the numerical velocity field. To this end, experimental (PIV) and numerical (CFD) velocity contours obtained in the sagittal plane, synchronized at the instant of the breathing cycle at which the maximum velocity values are reached, are presented in Figure 6, whereas PIV and CFD vertical velocity profiles in the sagittal plane at distances from the emitter mouth of 0.10 m and 0.32 m are compared in Figure 7. The numerical and experimental peak velocities differ by 6% and 7% at distances of 0.10 m and 0.32 m, respectively, thus validating the numerical solutions obtained through the CFD analyses.
The emitter velocity peaks of 5 m s\(^{-1}\) adopted in the simulations were confirmed by the experimental analysis of the speaking expiratory activity. Among the 500 images recorded over the 50 s of the experiment (see section 2.3), the time instants giving the maximum \(u\)-velocity and \(v\)-velocity values were selected and are illustrated in Figure 8: \(u\)-velocity peaks of 5 m s\(^{-1}\) are clearly recognizable in Figure 8a.
### Droplet dose received by the susceptible subject
As an illustrative example of the droplet trajectories and flow fields obtained from the simulations, Figure 9 shows the velocity contours and droplet positions for a 5-s breathing period (at computational times of 5, 5.5, 6.5, 7.5, 8.5, and 10 s) in the case of an interpersonal distance of 0.76 m between the injector and receiver mouths. For this distance, large droplets fall to the ground without reaching the susceptible surfaces of the receiver, while the airborne droplets are partly inhaled by the receiver. Indeed, airborne droplets are transported by the air velocity field, reach the receiver, and are then spread while rising in a vertical direction due to the effect of buoyancy forces. In fact, from the analysis of the three-dimensional transient air velocity field shown in Figure 9, it can be observed that when the air velocity from the emitter is low (Figure 9a, d, e, f), the effect of buoyancy is evident, while forced convection dominates when the air velocity from the emitter is high (Figure 9b, c).

Figure 7: Experimental (particle image velocimetry, dotted lines) and CFD (solid lines) velocity profile comparison obtained in a sagittal plane at a distance from the emitter mouth equal to 0.10 m (a) and 0.32 m (b).

Figure 8: Instantaneous \(u\)-velocity (a) and \(v\)-velocity (b) contours obtained by particle image velocimetry during reading of the excerpt from the rainbow passage.
Figure 10 shows airborne and large droplet doses (i.e. \(V_{d\text{-}airborne\text{-}pre}\), \(V_{d\text{-}airborne\text{-}post}\), and \(V_{d\text{-}large}\), \(\upmu\)L) as a function of distance for an exposure time of 1 min. Data were obtained by performing numerical simulations of 15 min and averaging the obtained volumes over an observation time equal to 1 min. The trends show that the large droplet dose dominates for distances \(<0.6\) m but, beyond this distance, a step decrease is observed because the large droplets cannot reach the deposition surfaces of the susceptible subject due to their inertial trajectories. Figure 10 also shows the dose of non-evaporated airborne droplets (\(V_{d\text{-}airborne\text{-}pre}\)); this information is useful because infectivity (and therefore the risk) is directly related to this metric. For distances \(>0.6\) m, only the airborne droplet contribution to the total dose received by the susceptible subject is observed. To summarize, the interpersonal distance is a key parameter in evaluating the close contact risk because the susceptible subject could fall within the highly concentrated droplet-laden flow exhaled by the infected subject.
Figure 9: Numerical velocity contours during a single breath at a distance of 0.76 m between people: six selected computational times (5, 5.5, 6.5, 7.5, 8.5, and 10 s) are shown.
While the trajectory of large droplets is mostly affected by their inertia, and the related effect is negligible for distances \\(>0.6\\) m, the spread of airborne droplets is affected by the spread angle of the exhaled flow. In particular, we recognized that at short interpersonal distances (roughly \\(<0.76\\) m) from the emission point, where the exhaled air flow angle is still narrow, the dose of airborne droplets decays following the 1/L rule (with L representing the interpersonal distance), whereas for interpersonal distances in the range of 0.76-1.75 m, where the exhaled air flow angle becomes wider, the dose of airborne droplets decays following the 1/L\\({}^{2}\\) rule as recognized for passive tracer-gas decay and reported by Li (2021b).
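To illustrate the two decay regimes, the toy sketch below (ours, not from the study) extrapolates a reference airborne dose from the 0.76 m transition distance using the 1/L and 1/L\(^{2}\) rules noted above.

```python
def airborne_dose(L, dose_ref, L_ref=0.76):
    """Piecewise near-field/far-field decay of the airborne dose with distance L (m)."""
    if L <= L_ref:
        return dose_ref * L_ref / L           # narrow-jet region: ~1/L
    return dose_ref * (L_ref / L) ** 2        # wide-spread region: ~1/L^2
```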
### Risk assessment at close contact
Figure 11 shows the infection risk (R) as a function of the interpersonal distance between the speaking infected subject and the susceptible subject for different exposure times (10 s, 1 min, 15 min). The infection risk is close to 100% for 15-min exposures when the distance is less than 0.6 m, and is extremely high (\(>30\%\)) even for short exposures (10 s). In fact, as shown in the previous section, for such short distances the dilution is not sufficient to reduce the deposited and inhaled doses due to large droplets and airborne droplets. Beyond 0.6 m, because there is only the contribution of airborne droplets (inhaled dose), a sharp risk decay is observed, in particular for short exposures. Thus, once again, the exposure duration and the interpersonal distance can reduce the infectious risk by several orders of magnitude. As an example, Figure 11 shows that the distance to be adopted to achieve a risk lower than 0.1% in the case of an exposure time of 15 min is around 1.5 m, which is reduced to 1 m for an exposure time of 1 min and 0.75 m for a 10-s exposure. Once again, we point out that the estimated infection risk value is related to an outdoor environment with stagnant air or to an indoor environment without the contribution due to the accumulation of viral load in the environment itself. Nonetheless, when people share indoor air while maintaining distancing, an infection risk can still be present because of airborne droplets. For example, in Figure 11 the infection risk is estimated when sharing room air and maintaining distancing on the basis of the zero-dimensional well-mixed approach reported in Buonanno et al. (2020a, 2020b) and Mikszewski et al. (2021) in small (60 m\({}^{3}\), e.g. offices, classrooms) and large volumes (400 m\({}^{3}\), e.g. restaurants, conference rooms). The simulations were performed for an exposure time of 15 min for typical ventilation rates (0.2-3.0 h\({}^{-1}\)) occurring in indoor environments (Frattolillo et al., 2021; Stabile et al., 2019). The 15-min close contact (spatially dependent) infection risk merges into the constant (not spatially dependent) sharing-room-air infection risks at interpersonal distances of about 1.4-1.6 m (depending on the volume and the ventilation rate), as also indicated in Li (2021); this distance is representative of the boundary of simplified well-mixed model applicability. Thus, for the investigated scenario, infection risk assessments through complex three-dimensional and transient CFD models are essential for short interpersonal distances (\(<\) 1.4-1.6 m), whereas simplified zero-dimensional well-mixed models can be applied for longer distances.

Figure 10: 1-min large (\(V_{d\text{-}large}\)) and airborne droplet doses (\(V_{d\text{-}airborne\text{-}pre}\) and \(V_{d\text{-}airborne\text{-}post}\)) received by the susceptible subject (by deposition and inhalation, respectively) as a function of the distance between the two subjects.
When applying these findings, an obvious question arises regarding the typical exposure duration and distance data. To address this issue, Zhang et al. (2020) monitored and analyzed indoor human behavior in a graduate student office using automatic devices. They measured a median duration of close contact of 15 s and an average interpersonal distance of 0.81 m during such close contacts. Adopting such median exposure durations, the corresponding infection risk is negligible (i.e. less than 0.1%, adopted as the threshold value by Buonanno et al. (2020a)), reaching 0.3% for exposure times of 1 min. Only with long exposure times (15 min) would the risk become significantly higher than 1%. Therefore, even though Zhang et al. (2020) verified that 9.7% of employees' time in offices was in close contact (with 4.0 close contacts h\({}^{-1}\)), an interpersonal distance of 0.81 m is sufficient to have a limited risk for the measured exposure times (\(<\) 1 min) in the analyzed common occupational scenario. Apart from workplace scenarios, the interpersonal distance is an essential feature of individuals' social behavior more broadly in relation to their physical environment and social interactions (Hall, 1966). On the basis of the classical proxemic theory (Hall, 1966), interpersonal distances are classified as (i) _public distance_ (\(>\) 2 m), (ii) _social distance_, during more formal interactions, (iii) _personal distance_, during interactions with friends, and (iv) _intimate distance_, maintained in close relationships, with Southern European, Latin American, and Arabian countries being the so-called "contact cultures", and North America, Northern Europe, and Asian populations being the "noncontact cultures" (Hall, 1966). Sorokowska et al. (2017) reported significant variability in preferred interpersonal distances across countries as a function of certain characteristics of interacting individuals (such as gender or age), cultural differences, and environmental and sociopsychological factors. They reported worldwide interpersonal distance distributions of \(0.56\pm 0.13\) m, \(0.81\pm 0.12\) m, and \(1.06\pm 0.14\) m for intimate, personal and social distance, respectively, which are included in Figure 11 for discussion. In the case of intimate distance, the average infection risk (i.e. the risk corresponding to the average distance value, i.e. \(0.56\) m) is extremely high starting from exposure times of \(1\) min, but even in the case of short exposures (\(10\) s) it is not negligible (\(>1\%\)). For personal distances, the average infection risk is negligible for short exposures (\(<0.1\%\)), limited for \(1\)-min exposures (\(<1\%\)), and high for \(15\)-min exposures. Finally, in the case of social distances, the average infection risk becomes significant only for \(15\)-min exposures.
The authors highlight that the approach and the results presented here provide an important insight into potential virus transmission over short distances that could help regulators and air quality experts in implementing and imposing proper mitigation solutions as a function of the microenvironment and the type of contact expected. The paper also provides information about choosing the proper modeling approach for infection risk assessments. Despite the value of these findings, the authors acknowledge the simplified hypotheses and limitations of this study that should be addressed in future developments of this research. First, the results obtained through the CFD approach were compared to the experimental data only in terms of the velocity field and for only one subject; second, the effect of turbulence should be investigated for the velocity peaks associated with speaking activity; third, the infection was referred to the received droplet dose considering a homogeneous viral load over the entire droplet size range; fourth, the study was limited to mouth-breathing subjects; and finally, transmission through surfaces (fomites) was not considered.

Figure 11: Infection risk (R, %) of a susceptible subject as a function of the time of exposure and interpersonal distance from the infected subject; infection risk trends at short and long distances are highlighted as well as the modeling approaches to be applied.
## 4 Conclusions
To the authors' knowledge, this is the first study in which an integrated risk assessment is developed for SARS-CoV-2 combining thermo-fluid dynamic modeling and infectious risk assessment to investigate close contact exposure scenarios. The integrated approach relies upon: i) an Eulerian-Lagrangian numerical model for the description of droplet spread once emitted by a speaking person; ii) PIV measurements for the definition of the boundary conditions and to validate the numerical model; iii) definition of a droplet emission model; and iv) infectious SARS-CoV-2 risk assessment. The approach presented here was applied to a close contact between a speaking infected subject and a susceptible subject in a face-to-face orientation with stagnant air conditions.
The results show that the contribution of large droplets (\(>100\) \(\upmu\)m) to the dose and risk received by the susceptible subject is dominant for interpersonal distances \(<\) 0.6 m, which means they strongly influence the infection risk only at intimate interpersonal distances (average distance of 0.56 m). In fact, in the case of personal (0.81 m) and social (1.06 m) distancing, the only contribution to the risk of infection comes from airborne droplets. In addition to the distance, the exposure time plays a key role in the risk of infection; in fact, the average infection risk is not negligible even for short exposures (10 s) in the case of intimate distance, whereas in the case of social distances only long exposures (15 min) can lead to a non-negligible risk.
A possible threshold value to be adopted as a safe distance in close contact is around 1.5 m, which lowers the infection risk to the order of 0.1% even for prolonged exposure times (15 min). Such a threshold value also represents the boundary distance beyond which simplified well-mixed approaches can be adopted instead of complex spatially dependent three-dimensional transient CFD analyses. Indeed, we have shown that the same infection risk values for short (close contact) and long distances (sharing room air while maintaining distancing) are obtained for a typical indoor environment at interpersonal distances in the range of 1.4-1.6 m, thus confirming that 1.5 m can be adopted as the typical distance for close contact.
## References
* Abkarian et al. (2020) Abkarian, M., Mendez, S., Xue, N., Yang, F., Stone, H.A., 2020. Speech can produce jet-like transport relevant to asymptomatic spreading of virus. Proc. Natl. Acad. Sci. 117, 25237. [https://doi.org/10.1073/pnas.2012156117](https://doi.org/10.1073/pnas.2012156117)
* Ai et al. (2019) Ai, Z., Hashimoto, K., Melikov, A.K., 2019. Influence of pulmonary ventilation rate and breathing cycle period on the risk of cross-infection. Indoor Air 29, 993-1004. [https://doi.org/10.1111/ina.12589](https://doi.org/10.1111/ina.12589)
* Ai and Melikov (2018) Ai, Z.T., Melikov, A.K., 2018. Airborne spread of expiratory droplet nuclei between the occupants of indoor environments: A review. Indoor Air 28, 500-524. [https://doi.org/10.1111/ina.12465](https://doi.org/10.1111/ina.12465)
* Arpino et al. (2014) Arpino, F., Cortellessa, G., Dell'Isola, M., Massarotti, N., Mauro, A., 2014. High order explicit solutions for the transient natural convection of incompressible fluids in tall cavities. Numer. Heat Transf. Part Appl. 66, 839-862. [https://doi.org/10.1080/10407782.2014.892389](https://doi.org/10.1080/10407782.2014.892389)
* Balachandar et al. (2020) Balachandar, S., Zaleski, S., Soldati, A., Ahmadi, G., Bourouiba, L., 2020. Host-to-host airborne transmission as a multiphase flow problem for science-based social distance guidelines. Int. J. Multiph. Flow 132, 103439. [https://doi.org/10.1016/j.ijmultiphaseflow.2020.103439](https://doi.org/10.1016/j.ijmultiphaseflow.2020.103439)
* Buonanno et al. (2020a) Buonanno, G., Morawska, L., Stabile, L., 2020a. Quantitative assessment of the risk of airborne transmission of SARS-CoV-2 infection: Prospective and retrospective applications. Environ. Int. 145, 106112.
* Buonanno et al. (2020b) Buonanno, G., Stabile, L., Morawska, L., 2020b. Estimation of airborne viral emission: Quanta emission rate of SARS-CoV-2 for infection risk assessment. Environ. Int. 141, 105794. [https://doi.org/10.1016/j.envint.2020.105794](https://doi.org/10.1016/j.envint.2020.105794)
* Chao et al. (2009) Chao, C.Y.H., Wan, M.P., Morawska, L., Johnson, G.R., Ristovski, Z.D., Hargreaves, M., Mengersen, K., Corbett, S., Li, Y., Xie, X., Katoshevski, D., 2009. Characterization of expiration air jets and droplet size distributions immediately at the mouth opening. J. Aerosol Sci. 40, 122-133. [https://doi.org/10.1016/j.jaerosci.2008.10.003](https://doi.org/10.1016/j.jaerosci.2008.10.003)
* Chapin (1912) Chapin, C.V., 1912. The sources and modes of infection. J. Wiley & sons; [etc., etc.], New York.
* Chen et al. (2020) Chen, W., Zhang, N., Wei, J., Yen, H.-L., Li, Y., 2020. Short-range airborne route dominates exposure of respiratory infection during close contact. Build. Environ. 176, 106859. [https://doi.org/10.1016/j.buildenv.2020.106859](https://doi.org/10.1016/j.buildenv.2020.106859)
* Crowe (2011) Crowe, C.T., 2011. Multiphase flows with droplets and particles. CRC Press.
* Fairbanks (1941) Fairbanks, G., 1941. Voice and Articulation Drillbook. The Laryngoscope, Harper and Brothers 51, 1141-1141. [https://doi.org/10.1288/00005537-194112000-00007](https://doi.org/10.1288/00005537-194112000-00007)
* Fajnzylber et al. (2020) Fajnzylber, J., Regan, J., Coxen, K., Corry, H., Wong, C., Rosenthal, A., Worrall, D., Giguel, F., Piechocka-Trocha, A., Atyeo, C., Fischinger, S., Chan, A., Flaherty, K.T., Hall, K., Dougan, M., Ryan, E.T., Gillespie, E., Chishti, R., Li, Y., Jilg, N., Hanidziar, D., Baron, R.M., Baden, L., Tsibris, A.M., Armstrong, K.A., Kuritzkes, D.R., Alter, G., Walker, B.D., Yu, X., Li, J.Z., The Massachusetts Consortium for Pathogen Readiness, 2020. SARS-CoV-2 viral load is associated with increased disease severity and mortality. Nat. Commun. 11, 5493. [https://doi.org/10.1038/s41467-020-19057-5](https://doi.org/10.1038/s41467-020-19057-5)
* Flugge (1897) Flugge, C., 1897. Ueber Luftinfection. Z. Fur Hyg. Infekt. 25, 179-224. [https://doi.org/10.1007/BF02220473](https://doi.org/10.1007/BF02220473)
* Frattolillo et al. (2021) Frattolillo, A., Stabile, L., Dell'Isola, M., 2021. Natural ventilation measurements in a multi-room dwelling: Critical aspects and comparability of pressurization and tracer gas decay tests. J. Build. Eng. 42, 102478. [https://doi.org/10.1016/j.jobe.2021.102478](https://doi.org/10.1016/j.jobe.2021.102478)
* Gale (2020) Gale, P., 2020. Thermodynamic equilibrium dose-response models for MERS-CoV infection reveal a potential protective role of human lung mucus but not for SARS-CoV-2. Microb. Risk Anal. 16, 100140-100140. [https://doi.org/10.1016/j.mram.2020.100140](https://doi.org/10.1016/j.mram.2020.100140)
* Gallo et al. (2021) Gallo, O., Locatello, L.G., Mazzoni, A., Novelli, L., Annunziato, F., 2021. The central role of the nasal microenvironment in the transmission, modulation, and clinical progression of SARS-CoV-2 infection. Mucosal Immunol. 14, 305-316. [https://doi.org/10.1038/s41385-020-00359-2](https://doi.org/10.1038/s41385-020-00359-2)
* Haas (1983) Haas, C.N., 1983. Estimation of risk due to low doses of microorganisms: a comparison of alternative methodologies. Am. J. Epidemiol. 118, 573-582. [https://doi.org/10.1093/oxfordjournals.aje.a113662](https://doi.org/10.1093/oxfordjournals.aje.a113662)
* Hall (1966) Hall, E.T., 1966. The hidden dimension. New York : Doubleday, New York.
* Johnson et al. (2011) Johnson, G.R., Morawska, L., Ristovski, Z.D., Hargreaves, M., Mengersen, K., Chao, C.Y.H., Wan, M.P., Li, Y., Xie, X., Katoshevski, D., Corbett, S., 2011. Modality of human expired aerosol size distributions. J. Aerosol Sci. 42, 839-851. [https://doi.org/10.1016/j.jaerosci.2011.07.009](https://doi.org/10.1016/j.jaerosci.2011.07.009)
* Li (2021a) Li, Y., 2021a. Basic routes of transmission of respiratory pathogens--A new proposal for transmission categorization based on respiratory spray, inhalation, and touch. Indoor Air 31, 3-6. [https://doi.org/10.1111/ina.12786](https://doi.org/10.1111/ina.12786)
* Li (2021b) Li, Y., 2021b. The respiratory infection inhalation route continuum. Indoor Air 31, 279-281. [https://doi.org/10.1111/ina.12806](https://doi.org/10.1111/ina.12806)
* Lu et al. (2020) Lu, C.-W., Liu, X.-F., Jia, Z.-F., 2020. 2019-nCoV transmission through the ocular surface must not be ignored. Lancet Lond. Engl. 395, e39-e39. [https://doi.org/10.1016/S0140-6736](https://doi.org/10.1016/S0140-6736)(20)30313-5
* Massarotti et al. (2006) Massarotti, N., Arpino, F., Lewis, R.W., Nithiarasu, P., 2006. Explicit and semi-implicit CBS procedures for incompressible viscous flows. Int. J. Numer. Methods Eng. 66, 1618-1640. [https://doi.org/10.1002/nme.1700](https://doi.org/10.1002/nme.1700)
* Mikszewski et al. (2021) Mikszewski, A., Stabile, L., Buonanno, G., Morawska, L., 2021. The airborne contagiousness of respiratory viruses: a comparative analysis and implications for mitigation. medRxiv 2021.01.26.21250580. [https://doi.org/10.1101/2021.01.26.21250580](https://doi.org/10.1101/2021.01.26.21250580)
* Morawska and Cao (2020) Morawska, L., Cao, J., 2020. Airborne transmission of SARS-CoV-2: The world should face the reality. Environ. Int. 139, 105730. [https://doi.org/10.1016/j.envint.2020.105730](https://doi.org/10.1016/j.envint.2020.105730)
* Morawska et al. (2009) Morawska, L., Johnson, G.R., Ristovski, Z.D., Hargreaves, M., Mengersen, K., Corbett, S., Chao, C.Y.H., Li, Y., Katoshevski, D., 2009. Size distribution and sites of origin of droplets expelled from the human respiratory tract during expiratory activities. J. Aerosol Sci. 40, 256-269. [https://doi.org/10.1016/j.jaerosci.2008.11.002](https://doi.org/10.1016/j.jaerosci.2008.11.002)
* Pan et al. (2020) Pan, Y., Zhang, D., Yang, P., Poon, L.L.M., Wang, Q., 2020. Viral load of SARS-CoV-2 in clinical samples. Lancet Infect. Dis. 20, 411-412.
* Raffel et al. (2018) Raffel, M., Willert, C.E., Scarano, F., Kahler, C., Wereley, S.T., Kompenhans, J., 2018. Particle image velocimetry. A practical guide, 3rd ed. Springer International Publishing.
* Scungio et al. (2013) Scungio, M., Arpino, F., Stabile, L., Buonanno, G., 2013. Numerical simulation of ultrafine particle dispersion in urban street canyons with the Spalart-Allmaras turbulence model. Aerosol Air Qual. Res. 13, 1423-1437. [https://doi.org/10.4209/aaqr.2012.11.0306](https://doi.org/10.4209/aaqr.2012.11.0306)
* Sorokowska et al. (2017) Sorokowska, A., Sorokowski, P., Hilpert, P., Cantarero, K., Frackowiak, T., Ahmadi, K., Alghraibeh, A.M., Aryeetey, R., Bertoni, A., Bettache, K., Blumen, S., Blazejewska, M., Bortolini, T., Butovskaya, M., Castro, F.N., Cetinkaya, H., Cunha, D., David, D., David, O.A., Dileym, F.A., Dominguez Espinosa, A. del C., Donato, S., Dronova, D., Dural, S., Fialova, J., Fisher, M., Gulbetekin, E., Hamamcioglu Akkaya, A., Hromatko, I., Iafrate, R., Iespy, M., James, B., Jaranovic, J., Jiang, F., Kimamo, C.O., Kjelvik, G., Koc, F., Laar, A., de Araujo Lopes, F., Machedt, G., Marcano, N.M., Martinez, R., Mesko, N., Molodovskaya, N., Moradi, K., Motahari, Z., Muhlhauser, A., Natividade, J.C., Nustiayi, J., Oberazucher, E., Ojedokun, O., Omar-Fauzee, M.S.B., Onyishi, I.E., Paluszak, A., Portugal, A., Razumiejczyk, E., Realo, A., Relvas, A.P., Rivas, M., Rizwan, M., Salkicevic, S., Sarmany-Schuller, I., Schmehl, S., Senyk, O., Sinding, C., Stamkou, E., Stoyanova, S., Sukolova, D., Sutresna, N., Tadinac, M., Teras, A., Tinoco Ponciano, E.L., Tripathi, R., Tripathi, N., Tripathi, M., Uhryn, O., Yamamoto, M.E., Yoo, G., Pierce, J.D., 2017. Preferred Interpersonal Distances: A Global Comparison. J. Cross-Cult. Psychol. 48, 577-592. [https://doi.org/10.1177/0022022117698039](https://doi.org/10.1177/0022022117698039)
* Stabile et al. (2019) Stabile, L., Massimo, A., Canale, L., Russi, A., Andrade, A., Dell'Isola, M., 2019. The Effect of Ventilation Strategies on Indoor Air Quality and Energy Consumptions in Classrooms. Buildings 9. [https://doi.org/10.3390/buildings9050110](https://doi.org/10.3390/buildings9050110)
* Sze To and Chao (2010) Sze To, G.N., Chao, C.Y.H., 2010. Review and comparison between the Wells-Riley and dose-response approaches to risk assessment of infectious respiratory diseases. Indoor Air 20, 2-16. [https://doi.org/10.1111/j.1600-0668.2009.00621.x](https://doi.org/10.1111/j.1600-0668.2009.00621.x)
* Watanabe et al. (2010) Watanabe, T., Bartrand, T.A., Weir, M.H., Omura, T., Haas, C.N., 2010. Development of a dose-response model for SARS coronavirus. Risk Anal. Off. Publ. Soc. Risk Anal. 30, 1129-1138. [https://doi.org/10.1111/j.1539-6924.2010.01427.x](https://doi.org/10.1111/j.1539-6924.2010.01427.x)
* Wells (1934) Wells, W.F., 1934. On airborne infection: study II. Droplets and Droplet nuclei. Am. J. Epidemiol. 20, 611-618. [https://doi.org/10.1093/oxfordjournals.aje.a118097](https://doi.org/10.1093/oxfordjournals.aje.a118097)
* Wolfel et al. (2020) Wolfel, R., Corman, V.M., Guggemos, W., Seilmaier, M., Zange, S., Muller, M.A., Niemeyer, D., Jones, T.C., Vollmar, P., Rothe, C., Hoelscher, M., Bleicker, T., Brunink, S., Schneider, J., Ehmann, R., Zwirglmaier, K., Drosten, C., Wendtner, C., 2020. Virological assessment of hospitalized patients with COVID-2019. Nature 581, 465-469. [https://doi.org/10.1038/s41586-020-2196-x](https://doi.org/10.1038/s41586-020-2196-x)
* Xie et al. (2007) Xie, X., Li, Y., Chwang, A.T.Y., Ho, P.L., Seto, W.H., 2007. How far droplets can move in indoor environments-revisiting the Wells evaporation-falling curve. Indoor Air 17, 211-225. [https://doi.org/10.1111/j.1600-0668.2007.00469.x](https://doi.org/10.1111/j.1600-0668.2007.00469.x)
* Zhang et al. (2020a) Zhang, N., Chen, W., Chan, P.-T., Yen, H.-L., Tang, J.W.-T., Li, Y., 2020a. Close contact behavior in indoor environment and transmission of respiratory infection. Indoor Air 30, 645-661. [https://doi.org/10.1111/ina.12673](https://doi.org/10.1111/ina.12673)
* Zhang et al. (2020b) Zhang, N., Su, B., Chan, P.-T., Miao, T., Wang, P., Li, Y., 2020b. Infection Spread and High-Resolution Detection of Close Contact Behaviors. Int. J. Environ. Res. Public. Health 17. [https://doi.org/10.3390/ijerph17041445](https://doi.org/10.3390/ijerph17041445) | Although close contact represents an important contagion route, the mechanism of exposure to exhaled droplets remains insufficiently characterized. In this study, an integrated risk assessment is presented for SARS-CoV-2 close contact exposure between a speaking infectious subject and a susceptible subject. It is based on a three-dimensional transient numerical model for the description of exhaled droplet spread once emitted by a speaking person, coupled with a recently proposed SARS-CoV-2 emission approach. Particle image velocimetry measurements were conducted to validate the numerical model.
The contribution of large droplets to infection risk is dominant for distances \\(<0.6\\) m, whereas for longer distances, the exposure risk depends only on airborne droplets. In particular, for short exposures (10 s) a minimum safety distance of 0.75 m should be maintained to lower the risk below 0.1%; for exposures of 1 and 15 min this distance increases to about 1.0 and 1.5 m, respectively. Based on the interpersonal distances across countries reported as a function of interacting individuals, cultural differences, and environmental and sociopsychological factors, the approach presented here revealed that, in addition to intimate and personal distances, particular attention must be paid to exposures longer than 1 min within social distances (of about 1 m).
**Keywords**: CFD analysis; virus transmission; close contact; PIV; SARS-CoV-2; droplets | Give a concise overview of the text below. | 295 |
Vernon A. Squire
(Scott Polar Research Institute, Cambridge CB2 1ER, England)
**Abstract.** An investigation into the feasibility of using a strain rosette to measure the principal strains and to locate the principal axes of a propagating wave train is carried out for the particular case of a flexural-gravity wave in sea ice. It is found that the separation of the instruments in the rosette is extremely critical and that, for a physically realizable rosette, errors are unavoidable if strain is to be monitored continuously. An alternative approach is proposed employing frequency-domain analysis, in particular the power spectral density. The method was tried with data obtained from strainmeters on fast ice in Notre Dame Bay, Newfoundland. Two distinct wave components were found, of periods 6 s and 13 s, and it is shown with 99.9% confidence that they are propagating in different directions.
some unexpected results. Using an elastic sheet on a fluid foundation, a dispersion equation is derived which permits a propagating wave. It is this wave which is used to investigate the feasibility of using a strain rosette to obtain directional information for the wave.
The data used in the experimental section were obtained using strainmeters originally developed at the Department of Geophysics, University of Cambridge, for the measurement of earth tides. The instruments were modified for use on ice (Goodman and others, 1975) to a strainmeter of 2 m gauge length with decreased sensitivity. The experiments were carried out in Notre Dame Bay, Newfoundland as part of a joint venture between the Scott Polar Research Institute and C-CORE, Memorial University of Newfoundland.
## Dispersion equation
To a first approximation the flexural oscillations of sea ice due to ocean swell may be modelled by a thin elastic sheet resting on a fluid foundation of infinite depth (Hendrickson and Webb, 1963).
With the coordinate system shown in Figure 1, the equation of the plate is (Hetenyi, 1946)

\[D\nabla^{4}W+\rho_{1}h\,\frac{\partial^{2}W}{\partial t^{2}}=P(x,y,t), \tag{1}\]

where \(W(x,y,t)\) is the vertical displacement of the plate, \(D\) is the flexural rigidity of the ice, \(\rho_{1}\) is the density of the ice, \(h\) is the ice thickness, and \(P(x,y,t)\) is the pressure exerted by the fluid on the plate. For an irrotational fluid of infinite depth, the velocity potential \(\phi\) satisfies Laplace's equation

\[\nabla^{2}\phi=0, \tag{2}\]

subject to the linearized kinematic condition at the ice-water interface

\[\frac{\partial\phi}{\partial z}\bigg{|}_{z=0}=\frac{\partial W}{\partial t}, \tag{3}\]
and Bernoulli's equation

\[P(x,y,t)=-\rho\left[\frac{\partial\phi}{\partial t}\bigg{|}_{z=0}+gW(x,y,t)\right], \tag{4}\]

where \(\rho\) is the density of the fluid.
Hence the equation of motion is
\[D\nabla^{4}W+\rho_{1}h\,\frac{\partial^{2}W}{\partial t^{2}}=-\rho\left[\frac{\partial\phi}{\partial t}\bigg{|}_{z=0}+gW\right]. \tag{5}\]
Consider a plane-wave solution on \\(z=0\\) (Fig. 2)
\\[W=\\exp\\,\\mathrm{i}\\,\\{k(\\mathbf{n}\\cdot\\mathbf{r})-\\omega t\\}, \\tag{6}\\]
where \\(k\\) is the wave number, \\(\\mathbf{n}=(l,m)=(\\cos\\,\\alpha,\\,\\sin\\,\\alpha)\\) is the direction of propagation of the plane wave, \\(\\mathbf{r}\\) is the position of an arbitrary point on the plane wave front, and \\(\\omega\\) is the circular frequency.
Substitution into the equation of motion gives the required dispersion equation, a quintic polynomial in \\(k\\).
\[Dk^{5}+(\rho g-\rho_{1}h\omega^{2})\,k-\rho\omega^{2}=0. \tag{7}\]
We assume that the origin of coordinates is far from the ice edge so that any evanescent waves generated close to the edge (Squire and Allan, 1977) will have died away. \\(k\\) may therefore be regarded as real and positive.
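For a given wave period, Eq. (7) is easily solved numerically. The sketch below is our own illustration, with nominal elastic constants for sea ice that are assumptions rather than values from the paper; it selects the real positive root corresponding to the propagating wave.

```python
import numpy as np

# Nominal values (assumed for illustration, not taken from the paper)
E, nu = 6e9, 0.3            # Young's modulus (Pa), Poisson's ratio of sea ice
h = 1.0                     # ice thickness (m)
rho, rho1 = 1025.0, 917.0   # water and ice densities (kg/m^3)
g = 9.81
D = E * h**3 / (12.0 * (1.0 - nu**2))    # flexural rigidity

def wavenumber(period_s):
    """Real positive root of the quintic dispersion relation, Eq. (7)."""
    w = 2.0 * np.pi / period_s
    coeffs = [D, 0.0, 0.0, 0.0, rho * g - rho1 * h * w**2, -rho * w**2]
    roots = np.roots(coeffs)
    k = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return min(k)            # propagating flexural-gravity mode
```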
## Principal axes
Now the infinitesimal strain on the surface of the sea ice is given by
\[\epsilon_{xx}=-\frac{h}{2}\frac{\partial^{2}W}{\partial x^{2}},\qquad\epsilon_{xy}=-\frac{h}{2}\frac{\partial^{2}W}{\partial x\,\partial y},\qquad\epsilon_{yy}=-\frac{h}{2}\frac{\partial^{2}W}{\partial y^{2}}, \tag{8}\]
(Graff, 1975), so that
\\[\\left.\\begin{array}{l}\\epsilon_{xx}\\,=\\,\\frac{1}{4}hl^{2}k^{2}W,\\\\ \\epsilon_{xy}\\,=\\,\\frac{1}{4}hlmk^{2}W,\\\\ \\epsilon_{yy}\\,=\\,\\frac{1}{4}hm^{2}k^{2}W.\\end{array}\\right\\} \\tag{9}\\]
Consider the strain in some arbitrary direction \\(\\beta\\) to the \\(x\\)-axis,
\\[\\epsilon_{\\beta}=\\epsilon_{xx}\\,\\cos^{2}\\,\\beta+2\\epsilon_{xy}\\,\\sin\\,\\beta\\, \\cos\\,\\beta+\\epsilon_{yy}\\,\\sin^{2}\\,\\beta. \\tag{10}\\]
Figure 2: Plane wave front impinging on a delta rosette ABC with separation parameter \\(a\\).
Figure 3: Percentage error of strain measured by delta rosette as compared with single strainmeter in direction of impinging wave as a function of time increment for various angles of attack. Wave period is 10 s, separation parameter \(a=0.0\) m.

Figure 4: Predicted angle \(\alpha\) as measured by delta rosette as a function of time increment for various angles of attack. Wave period is 10 s, separation parameter \(a=0.0\) m.
Therefore,
\\[\\epsilon_{\\beta}=\\frac{1}{2}\\hbar k^{2}W\\cos^{2}{(\\alpha\\!-\\!\\beta)}.\\] ( 11)
Hence in directions A, B, C (Fig. 2), the respective infinitesimal strains are
\\[\\epsilon_{\\Delta}=\\frac{1}{2}\\hbar Wk^{2}\\cos^{2}{\\alpha},\\] ( 12)
Suppose that the directions A, B, C represent the axes of a 120\\({}^{\\circ}\\) strainmeter rosette. We choose a strainmeter of length 2 m, which corresponds to the instruments used in the experiments and allow a separation from the origin of \\(a\\) (Fig. 2).
Given three strains on a delta rosette (120\\({}^{\\circ}\\) rosette, as discussed above) we may locate the axes of principal strain and compute the principal strains \\(\\epsilon_{1}\\) and \\(\\epsilon_{2}\\) as follows
\\[\\epsilon_{\\rm A}=\\frac{1}{2}(\\epsilon_{1}\\!+\\!\\epsilon_{2})+\\frac{1}{2}( \\epsilon_{1}\\!-\\!\\epsilon_{2})\\,\\cos{2\\gamma},\\] ( 13)
where \\(\\gamma\\) is the angle between the axes of principal strain and one strainmeter of the strain rosette. After some algebra these equations lead to
\[\left.\begin{array}{l}\epsilon_{1}+\epsilon_{2}=\tfrac{2}{3}(\epsilon_{\rm A}+\epsilon_{\rm B}+\epsilon_{\rm C}),\\ \epsilon_{1}-\epsilon_{2}=\tfrac{2}{3}\left[(2\epsilon_{\rm A}-\epsilon_{\rm B}-\epsilon_{\rm C})^{2}+3(\epsilon_{\rm C}-\epsilon_{\rm B})^{2}\right]^{1/2},\\ \tan 2\gamma=\sqrt{3}\,(\epsilon_{\rm C}-\epsilon_{\rm B})/(2\epsilon_{\rm A}-\epsilon_{\rm B}-\epsilon_{\rm C}).\end{array}\right\} \tag{14}\]

The condition \(|\epsilon_{1}|\geq|\epsilon_{2}|\) fixes the sign of \(\epsilon_{1}-\epsilon_{2}\) and any ambiguity in the angle \(\gamma\).
Given an incident wave of particular period and angle of attack \(\alpha\), it is therefore possible to compare the expected surface strain and angle \(\alpha\) with those calculated from the delta rosette. The percentage error in surface strain as a function of incremental time for a 10 s wave at angles of attack of \(0^{\circ}\) through \(60^{\circ}\) in steps of \(10^{\circ}\) is shown in Figure 3. The predicted angle \(\alpha\) is shown for various angles of attack in Figure 4.
As the wave propagates past the strain rosette, each instrument will sample a different part of its cycle due to their physical separation. This will introduce a frequency-dependent phase difference between the strainmeters leading to the large errors (particularly where the strain goes through zero) shown in Figures 3 and 4. A choice of \\(a=-\\)1.0 m (half the physical length of the instrument) will eliminate the error completely, but it is not feasible experimentally to position the instrument in a star configuration with sufficient accuracy.
A monochromatic wave of known direction is not a realistic forcing and one might encounter a sea made up of waves and swell of many periods from several directions. The errors then become so unpredictable that continuous computation in the time domain is not a viable method for evaluating the direction of wave propagation and the associated principal strains.
## Frequency-domain analysis

An alternative to time-series analysis is to compute a power spectral density for each strainmeter in the rosette every five minutes over several hours. From the power spectral density a root-mean-square strain about some centre frequency may be found by integration over a small frequency band-width and then taking the square root. A band-width of 5/320 Hz was used throughout. In this way the strain corresponding to discrete peaks in the power spectral density may be treated independently.
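The band-limited RMS strain described above can be computed as follows. This is our own sketch using SciPy's Welch estimator, with the 5/320 Hz band-width from the text; the sampling rate and segment length are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_rms_strain(strain, fs, f_centre, bw=5.0/320.0):
    """RMS strain in a narrow band about f_centre from a Welch PSD estimate."""
    f, psd = welch(strain, fs=fs, nperseg=min(len(strain), 1024))
    band = (f >= f_centre - bw / 2.0) & (f <= f_centre + bw / 2.0)
    return np.sqrt(np.trapz(psd[band], f[band]))   # integrate PSD, take sqrt
```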
The power spectral densities were evaluated for data obtained over several hours on fast ice in Notre Dame Bay, Newfoundland, using a Hewlett-Packard 5451B Fourier Analyser system located at Memorial University of Newfoundland (Allan and Squire, 1977). There were clear peaks at approximately 6 s and 13 s and it was the strains corresponding to these peaks which were used in the principal strain calculation. Figures 5 and 6 show the variation of the orientation of the axes of principal strain and the principal strains as a function of elapsed time for the recorded 10 h of data.
The mean and standard deviation of the orientation angle weighted with respect to the larger principal strain was found for both the long- and short-period components. For 112 samples the means of the two components were \(2.4^{\circ}\) and \(-13.6^{\circ}\) respectively and the standard deviations, \(7.8^{\circ}\) and \(7.3^{\circ}\). A Student's \(t\)-test gave independent distributions with 99.9% confidence, indicating that the waves were propagating from different directions and had originated from different sources. The predicted directions were the same as those observed visually over the duration of the experiment.

Figure 5: Variation of orientation of principal axes of strain and principal strains for long-period component over 10 h.

Figure 6: Variation of orientation of principal axes of strain and principal strains for short-period component over 10 h.
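The significance test quoted above can be reproduced from the summary statistics alone; below is our own sketch using SciPy's two-sample t-test from means, standard deviations, and sample counts.

```python
from scipy.stats import ttest_ind_from_stats

# Weighted means/standard deviations of orientation angle (degrees), 112 samples each
t, p = ttest_ind_from_stats(mean1=2.4, std1=7.8, nobs1=112,
                            mean2=-13.6, std2=7.3, nobs2=112)
print(f"t = {t:.1f}, p = {p:.2e}")   # p << 0.001, i.e. >99.9% confidence
```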
The orientation of the axes of principal strain with time depends upon the interference of the individual components of the wave spectrum. A predominant swell normal to the ice edge will generate a flexural wave travelling in the same direction. Waves impinging from other directions will interfere with the basic wave to produce a swing in the principal axes. This mechanism is a possible explanation for the parallel cracks which often form at an angle to the apparent incident swell during the wave break-up of sea ice. It also hints that a relationship between crack spacing and incident wavelength may not be as simple as was first imagined.
### Discussion
In addition to the inaccuracies introduced by the physical length of a strain gauge we have found considerable errors due to the separation of the instruments in a strain rosette. Theoretically such errors may be removed by superposition of the instruments to form a star configuration but this is not feasible experimentally. Since most strain rosettes used at present in both mechanical and ice engineering do have their gauges separated by some small but finite distance it is very important to take this into account when measuring propagating strain waves.
In geophysical applications, such as that described here, the random nature of the forcing makes it impossible to eliminate errors if the principal-axes approach is used. We have thus shown that it is essential to employ frequency-domain analysis for the treatment of data obtained by strainmeter rosettes.
### Acknowledgements
This work was carried out while the author was in receipt of a research studentship from the Natural Environment Research Council of Great Britain. I am indebted to Dr Peter Wadhams and Alastair Allan for invaluable discussion. I also thank members of C-CORE and in particular the Twillingate field party whose cheerfulness and stimulating conversation led to a successful experiment.
### MS. received 28 October 1977
## References
* Allan and Squire (in press) Allan, A. J., _and_ Squire, V. A. In press. Naturally induced surface strain in fast ice. _C-CORE Publication_ (Memorial University of Newfoundland. Centre for Cold Ocean Resources Engineering).
* Dally and Riley (1965) Dally, J. W., _and_ Riley, W. F. 1965. _Experimental stress analysis_. New York, McGraw-Hill Book Co., Inc.
* Dove and Adams (1964) Dove, R. C., and Adams, P. H. 1964. _Experimental stress analysis and motion measurement_. (_Theory, instruments and circuits, techniques_. Columbus, Ohio, Charles E. Merrill Publishing Co.
* Goodman and others (1975) Goodman, D. J., _and others_. 1975. Wire strainmeters on ice, [by] D. J. Goodman, A. J. Allan, R. G. Bilham. _Nature_, Vol. 255, No. 5509, p. 45-46.
* Graff (1975) Graff, K. F. 1975. _Wave motion in elastic solids_. Oxford, Clarendon Press. (Oxford Engineering Science Series.)
* Hendrickson and Webb (1963) Hendrickson, J. A., _and_ Webb, L. M. 1963. _Theoretical investigation of semi-infinite ice floes in water of infinite depth_. Port Hueneme, California, U.S. Naval Civil Engineering Laboratory. (NB-32232, Final Report.)
* Hetenyi (1946) Hetenyi, M. I. 1946. _Beams on elastic foundations; theory with applications in the fields of civil and mechanical engineering_. Ann Arbor, University of Michigan Press. (University of Michigan Studies. Scientific Series, Vol. 16.)
* Murphy and others (1957) Murphy, G., _and others_. 1957. Response of resistance strain gages to dynamic strains, by G. Murphy, A. H. Hausrath III and P. W. Peterson. _IX\({}^{e}\) Congrès International de Mécanique Appliquée. Actes_, Tom. 8, p. 448-56.
* Squire and Allan (1977) Squire, V. A., _and_ Allan, A. J. 1977. Propagation of flexural gravity waves in sea ice. _A symposium on sea ice processes and models, [Seattle, U.S.A.], September 6-9, 1977. Proceedings_, Vol. 2, p. 157-66.
* Stoker (1957) Stoker, J. J. 1957. _Water waves. The mathematical theory with applications_. New York, Interscience.
* Wadhams (1973) Wadhams, P. 1973. Attenuation of swell by sea ice. _Journal of Geophysical Research_, Vol. 78, No. 18, p. 3552-63 | An investigation into the feasibility of using a strain rosette to measure the principal strains and to locate the principal axes for a propagating strain wave is carried out for the particular application of a flexural-gravity wave in sea ice. It is found that the separation of the instruments in the rosette is extremely critical and, for a physically realizable rosette, errors are unavoidable if strain is to be monitored continuously. An alternative approach is proposed employing frequency-domain analysis and in particular a running-power spectral density. The method is demonstrated with data obtained from strainmeters on fast ice in Notre Dame Bay, Newfoundland. Two distinct wave components are found to be present of period 6 s and 13 s, and it is shown with 99.9\\(\\%\\) confidence that they are propagating in different directions. | Summarize the following text. | 170 |
# Learning Neural Radiance Fields of Forest Structure for Scalable and Fine Monitoring
Juan Castorena

Los Alamos National Laboratory, Los Alamos, NM, USA, 48124

Email: [email protected]

**Abstract.** This work leverages neural radiance fields and remote sensing for forestry applications. Here, we show neural radiance fields offer a wide range of possibilities to improve upon existing remote sensing methods in forest monitoring. We present experiments that demonstrate their potential to: (1) express fine features of forest 3D structure, (2) fuse available remote sensing modalities, and (3) improve upon 3D-structure-derived forest metrics. Altogether, these properties make neural fields an attractive computational tool with great potential to further advance the scalability and accuracy of forest monitoring programs.

**Keywords:** Neural radiance fields, Remote Sensing, LiDAR, ALS, TLS, Photogrammetry, Forestry
## 1 Introduction
With approximately four billion hectares covering around 31% of the Earth's land area [7], forests play a vital role in our ecosystem. The increasing demand for tools that help maintain a balanced and healthy forest ecosystem is challenging due to the complex nature of various factors, including resilience against disease and fire, as well as overall forest health and biodiversity [25]. Active research focuses on the development of monitoring methods that synergistically collect comprehensive information about forest ecosystems and utilize it to analyze and generate predictive models of the characterizing factors. These methods should ideally be capable of effectively and efficiently coping with dynamic changes over time and with heterogeneity. The goal is to provide tools with such properties for improved planning, management, analysis, and more effective decision-making processes [1]. Traditional tools for forest monitoring, such as national forest inventory (NFI) plots, utilize spatial sampling and estimation techniques to quantify forest cover, growing stock volume, biomass, carbon balance, and various tree metrics (e.g., diameter at breast height, crown width, height) [23]. However, these surveying methods consist of manual field sampling, which tends to introduce bias and poses challenges in terms of reproducibility. Moreover, this approach is economically costly and time-consuming, especially when dealing with large spatial extents.
Recent advancements, driven by the integration of remote sensing, geographic information, and modern computational methods, have contributed to the development of more efficient, cost/time-effective, and reproducible ecosystem characterizations. These advancements have unveiled the potential of highly refined and detailed models of 3D forest structure. Traditionally, the metrics collected through standard forest inventory plot surveys have been utilized as critical inputs in applications in forest health [15], wood harvesting [13], habitat monitoring [24], and fire modeling [16]. The efficacy of these metrics relies on their ability to quantitatively represent the forest's full 3D structure, including its vertical resolution: from the ground and sub-canopy to the canopy structure. Among the most popular remote sensing techniques, airborne LiDAR scanning (ALS) has gained widespread interest due to its ability to rapidly collect precise 3D structural information over large regional extents [6]. Airborne LiDAR, equipped with accurate position sensors like RTK (Real-Time Kinematic), enables large-scale mapping from high altitudes at spatial resolutions ranging from 5-20 points per square meter. It has proven effective in retrieving important factors in forest inventory plots [11]. However, it faces challenges in dense areas where the tree canopy obstructs the LiDAR signal, even with its advanced full-waveform-based technology. _In-situ_ terrestrial laser scanning (TLS), on the other hand, provides detailed vertical 3D resolution from the ground, sub-canopy, and canopy structure, informing about individual trees, shrubs, ground surface, and near-ground vegetation at even higher spatial resolutions [10]. Recent work by [20] has demonstrated the efficiency and efficacy of ecosystem monitoring using single-scan in-situ TLS. The technological advances of such models include new capabilities for rapidly extracting highly detailed, quantifiable predictions of vegetation attributes and treatment effects in near-surface, sub-canopy, and canopy composition. However, these models have only been deployed across spatial domains of a few tens of meters in radius due to the inherent limitations of TLS spatial coverage [20]. On the other side of the spectrum, image-based photogrammetry for 3D structure extraction offers the potential of being both scalable and the most cost-efficient. Existing computational methods for the extraction of 3D structure in forest ecosystems, however, have not been as effective: aerial photogrammetry methods produce 3D structure with very limited information along the vertical dimension and achieve output spatial resolutions that are at best on par with those from ALS [25].
Our contribution seeks to fuse the experimental findings across remote sensing domains in forestry, from broad-scale to in-situ sensing sources. The goal is to achieve the performance quality of _in-situ_ sources (e.g., TLS) in the extraction of 3D forest structure at the scalability of broad sources (e.g., ALS, aerial imagery). We propose the use of neural radiance field (NERF) representations [17], which account for the origin and direction of radiance to determine highly detailed 3D structure via view-consistency. We observe that such representations enable both the fine description of forest 3D structure and the fusion of multi-view, multi-modal sensing sources. Experiments on real multi-view RGB imagery, ALS, and TLS validate the fine-resolution capabilities of such representations as applied to forests. In addition, the performance of 3D-structure-derived forest factor metrics found in our experiments demonstrates the potential of neural fields to improve upon existing forest monitoring programs. To the best of our knowledge, the demonstrations conducted in this research, namely the application of neural fields for 3D sensing in forestry, are novel and have not been shown previously. In the following, Sec. 2 provides a brief overview of neural fields. Sec. 3 includes experiments illustrating the feasibility of neural fields to represent fine 3D structure of forests, while Sec. 4 demonstrates the effectiveness of fusing NERF with LiDAR data by enforcing LiDAR point cloud priors. Finally, Sec. 5 presents results that show the efficacy of NERF-extracted 3D structure for deriving forest factor metrics, which are of prime significance to forest managers for monitoring.
## 2 Background
### Neural Radiance Fields
The idea of neural radiance fields (NERF) is based on classical ray tracing of volume densities [12]. Under this framework, each pixel comprising an image is represented by a ray of light cast onto the scene. The ray of light is described by \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) with origin \(\mathbf{o}\in\mathbb{R}^{3}\), unit \(\ell_{2}\)-norm length direction \(\mathbf{d}\in\mathbb{R}^{3}\) (i.e., \(\|\mathbf{d}\|_{2}=1\)), and independent variable \(t\in\mathbb{R}\) representing a relative distance. The parameters of each ray can be computed through the camera intrinsic matrix \(\mathbf{K}\) with inverse \(\mathbf{K}^{-1}\) and the 6D pose transformation matrix \(\mathbf{T}_{m\to 0}\) of image \(m\), as in Eq. (1):
\\[(\\mathbf{o},\\mathbf{d})=\\left(T_{m\\to 0}^{(4)},\\frac{ \\mathbf{d}^{\\prime}}{\\|\\mathbf{d}^{\\prime}\\|_{\\ell_{2}}}\\right)\\quad\\text{ with}\\] \\[\\mathbf{d}^{\\prime}=\\mathbf{T}_{m\\to 0}^{-1}\\mathbf{K}^{-1} \\begin{bmatrix}u^{\\prime}\\\\ v^{\\prime}\\\\ 1\\end{bmatrix}-T_{m\\to 0}^{(4)} \\tag{1}\\]
where \\(u^{\\prime},v^{\\prime}\\) are vertical and horizontal the pixel locations within the image and the subscript \\({}^{(i)}\\) denotes the \\(i\\)-th column of a matrix. Casting rays \\(\\mathbf{r}\\in\\mathcal{R}\\) into the scene from all pixels across all multi-view images provides information of intersecting rays that can be exploited to infer 3D scene structure. Such information consists on sampling along a ray at distance samples \\(\\{t_{i}\\}_{i=1}^{M}\\) and determine at each sample if the color \\(\\mathbf{c}_{i}\\in[0,..,255]^{3}\\) of the ray coincides with those from overlapping rays. If it does not coincide then it is likely that the medium found at that specific distance sample is transparent whereas the opposite means an opaque medium is present. With such information, compositing color can be expressed as a function of ray \\(\\mathbf{r}\\) as in Eq. (2) by:
\\[\\hat{C}(\\mathbf{r})=\\sum_{i=1}^{N}\\left[\\underbrace{\\left(\\prod_{j=1}^{i-1} \\exp(-\\sigma_{j}\\delta_{j})\\right)}_{\\text{transparency so far}}\\underbrace{(1- \\exp(-\\sigma_{i}\\delta_{i}))}_{\\text{opacity}}\\mathbf{c}_{i}\\right] \\tag{2}\\]
where \\(\\sigma_{i}\\in\\mathbb{R}\\) and \\(\\delta_{i}=t_{i+1}-t_{i}\\) are the volume densities and differential time steps at sample indexed by \\(i\\), respectively. In Eq. (2) the first term in the summation represents the transparent samples so far while the second term is an opaque medium of color \\(\\mathbf{c}_{i}\\) present at sample \\(i\\). Reconstructing a scene in 3D can then be posed as the problem of finding the sample locations \\(t_{i}\\) where each ray intersects an opaque medium (i.e., where each ray stops ) for all rays casted into the scene. Those intersections are likely to occur at the sample locations where the volume densities are maximized; in other words,where \\(t_{i}=\\operatorname*{arg\\,max}_{i}\\{\\sigma\\}\\). Accumulating, all rays casted into the scene and estimating the locations \\(t_{i}\\)'s where volume density is maximized overall rays, renders the 3D geometry of the scene. The number of rays required per scene is an open question; the interested reader can go to [3] where a similar problem but for LiDAR sensing determines the number of pulses required for 3D reconstruction depending on a quantifiable measure of scene complexity.
The problem in Eq. (2) is solved by learning the volume densities that best explain image pixel colors in a 3D-consistent way. Learning can be done through a multilayer perceptron (MLP) by rewriting Eq. (2) as Eq. (3):
\\[\\hat{C}(\\mathbf{r})=\\sum_{i=1}^{N}\\mathbf{w}_{i}\\mathbf{c}_{i} \\tag{3}\\]
where the weights \\(\\mathbf{w}\\in\\mathbb{R}^{N}\\) encode transparency or opacity of the \\(N\\) samples along a ray and \\(\\mathbf{c}_{i}\\) is its associated pixel color. Learning weights is performed in an unsupervised fashion through the optimization of a loss function using a training set of \\(M\\) pairs of multi-view RGB images and its corresponding 6D poses \\(\\{(\\mathbf{y}_{m},\\mathbf{T}_{m})\\}_{m=1}^{M}\\), respectively. This loss function \\(f:\\mathbb{R}^{L}\\rightarrow\\mathbb{R}\\) is the average \\(\\ell_{2}\\)-norm error between ground truth color and estimation by compositing described as in Eq. (4):
\\[\\mathcal{L}_{C}(\\mathbf{\\Theta})=\\sum_{\\mathbf{r}\\in\\mathcal{R}}\\left[\\|C( \\mathbf{r})-\\hat{C}(\\mathbf{r},\\mathbf{\\Theta})\\|_{\\ell_{2}}^{2}\\right] \\tag{4}\\]
Optimization by back-propagation yields weights that gradually improve the estimation of the volume densities. Other important quantities of NERF are the expected ray distance \(\hat{z}(\mathbf{r})\) and its spread, defined using the same weights from Eq. (2) but here applied to distance:
\\[\\hat{z}(\\mathbf{r})=\\sum_{i=1}^{N}\\omega_{i}t_{i},\\hskip 42.679134pt\\hat{s}( \\mathbf{r})^{2}=\\sum_{i=1}^{N}\\omega_{i}(t_{i}-\\hat{z}(\\mathbf{r}))^{2} \\tag{5}\\]
and \\(\\hat{s}(\\mathbf{r})\\) defined as the standard deviation of distance. One key issue affecting 3D reconstruction resolution is on the way samples \\(\\{t_{i}\\}_{i=1}^{N}\\) for each ray \\(\\mathbf{r}\\in\\mathcal{R}\\) are drawn. A small number of samples \\(N\\) results in low resolution and erroneous ray intersection estimations while sampling vastly results in much higher computational complexities. To balance this trade-off, the work in [17] uses two networks one at low-resolution to coarsely sample the 3D scene and another fine-resolution one used subsequently to more finely sample only at locations likely containing the scene.
## 3 Are neural fields capable of extracting 3D structure in forestry?
The high capacity of deep learning (DL) models to express data distributions with high fidelity and diversity offers a promising avenue to model heterogeneous 3D forest structures in fine detail. The specific configuration of the selected DL model aims to provide a representation that naturally allows the combination of data from multiple sensing modalities and view-points. Neural fields [17] under the DL rubric have proven to be a highly effective computational approach for addressing such problems. However, their application has so far only been demonstrated for indoor and urban environments.
### Terrestrial Imagery
Expanding on the findings of neural fields in man-made environments, we conducted additional experiments to demonstrate their effectiveness in representing fine 3D structural details in forest ecosystems. Figs. 1 and 2 show the extracted 3D structure of a Ponderosa pine tree in New Mexico, captured using standard 12-megapixel phone-camera images collected along an elliptical trajectory around the tree. Fig. 1a shows a few example input terrestrial multi-view RGB images. Figs. 1b and 1c present the image snapshot trajectory, represented as red rectangles, along with two 3D structure views derived from a traditional structure-from-motion (SFM) method [22] applied to the multi-view input images. Note that the level of spatial variability detail provided by this SFM method is significantly low considering the resolution of the input images.
These limitations are resolved by the neural field reconstruction, as shown in Figs. 2e-2f, in contrast to the result of traditional SFM in Fig. 1. Note that even points coming from images degraded by sun-glare, as shown in Fig. 1a, landed on the tree within reasonable distances, as shown in Fig. 2a; this is significant, especially considering the severity of the glare effects present in the 2D RGB images. In general, terrestrial multi-view imagery-based NERF can be used to extract fine 3D spatial resolution along the vertical dimension of a tree stand with a level of detail similar to TLS, with the additional advantage of providing color for every 3D point estimate.
Figure 2: Neural field models are capable of extracting fine 3D structure from terrestrial multi-view images in forestry. Reconstructions demonstrate their potential to represent fine scale variability in heterogeneous forest ecosystems.
## 4 Neural Radiance Fields: A framework for remote sensing fusion in forestry
Neural fields have also demonstrated their ability to provide representations suitable for combining data from multiple sensing modalities, as long as these are co-registered or aligned. The neural fields framework, which extracts 3D structure from multi-view images, enables direct fusion of information with 3D point cloud sources through point cloud prior constraints [21]. Here, we consider the case of fusing multi-view images from an RGB camera and point clouds from LiDAR. The difficulty in fusing camera and LiDAR information is that a camera measures color radiance while LiDAR measures distance [5]. Fortunately, the framework of neural radiance fields can be used to extract 3D structure from images, thus enabling direct fusion of information from LiDAR. This can be done through a learning function that extracts a 3D structure promoting consistency between the multi-view images, as leveraged by standard NERF [17], subject to LiDAR point cloud priors [21]:
\\[\\mathcal{L}(\\mathbf{\\Theta})=\\underbrace{\\sum_{\\mathbf{r}\\in\\mathcal{R}}\\left[ \\|C(\\mathbf{r})-\\hat{C}(\\mathbf{r},\\mathbf{\\Theta})\\|_{\\ell_{2}}^{2}\\right]}_{ \\mathcal{L}_{C}(\\mathbf{\\Theta})}+\\lambda\\underbrace{\\sum_{\\mathbf{r}\\in \\mathcal{R}}\\left[\\|z(\\mathbf{r})-\\hat{z}(\\mathbf{r},\\mathbf{\\Theta})\\|_{\\ell_ {2}}^{2}\\right]}_{\\mathcal{L}_{D}(\\mathbf{\\Theta})} \\tag{6}\\]
where the first term \\(\\mathcal{L}_{C}(\\mathbf{\\Theta})\\) is the standard NERF learning function promoting a 3D structure with consistency between image views while the second term \\(\\mathcal{L}_{D}(\\mathbf{\\Theta})\\) enforces the LiDAR point cloud priors with \\(\\hat{z}(\\mathbf{r},\\mathbf{\\Theta})\\) given as in Eq.(5). The benefit of imposing point cloud priors into neural fields is two-fold: (1) it enables expressing relative distances obtained from standard 3D reconstruction of multi-view 2D images in terms of real metrics (e.g., meters), and (2) neural fields tend to face challenges in accurately estimating 3D structures at high distances (typically in the order of several tens of meters), where the LiDAR point cloud priors can serve as a supervisory signal to guide accurate estimation, especially at greater distances. This can be beneficial, as distances in aerial imagery are generally distributed around large distances, which may pose challenges for 3D structure extraction methods.
### Filling in the missing below-canopy structure in ALS data with TLS
_In-situ_ terrestrial laser scanning (TLS) has been demonstrated as a powerful tool for rapid assessment of forest structure in ecosystem monitoring and characterization. It is capable of very fine resolution, including along the vertical direction: surface, sub-canopy, and canopy structure. However, its utility and application are restricted by limited spatial coverage. Aerial laser scanning (ALS), on the other hand, has the ability to rapidly survey broad-scale areas at the landscape level, but is limited in that it samples the scene sparsely, providing only coarse spatial variability details, and it also cannot penetrate the tree canopy. Fig. 3a shows an example point cloud collected using a full-waveform ALS system, which collects approximately 10 points per square meter; note that the sub-canopy structure is not spatially resolved. In contrast, TLS is finely resolved below the canopy, as observed in Fig. 3b.
Fortunately, the drawbacks of TLS and ALS scans can be resolved by co-registration, which transforms the data to enable direct fusion. Here, we use the automatic, target-less approach of [4]. This was demonstrated to outperform standard methods [2], [19], [8] in natural ecosystems and to be robust to resolution scales, view-points, scan area overlap, vegetation heterogeneity, topography, and ecosystem changes induced by pre/post low-intensity fire effects. It is also fully automatic, capable of self-correcting in cases of noisy GPS measurements, and does not require any manually placed targets [9], while performing at the same levels of accuracy. All TLS scans were co-registered into the coordinate system of ALS. Once scans have been co-registered, they can be projected into a common coordinate system. Illustrative example results for two forest plots are included in Fig. 4, where the two sources, ALS and TLS, are color-coded differently, with the sparser point cloud being that of the ALS. In all cases the co-registration produced finely aligned point clouds. In general, the error produced by this co-registration method is \(<\)6 cm for the translation and \(<\)0.1\({}^{\circ}\) for the rotation parameters. The translation error is mainly due to the resolution of ALS at 10 points per square meter.
### Aerial Imagery
We also conducted experiments on broader forest areas. Aerial RGB imagery was collected with a DJI Mavic2 Pro drone at 30 Hz and a 3840 \(\times\) 2160 pixel resolution. Figs. 5a-5f show examples of the multi-view aerial image inputs used by the SFM and neural field models. The forest 3D structure resulting from running conventional SFM [22] on these images is shown in Figs. 5i-5k, illustrating different perspective views. Again, the sequence of rectangles in red illustrates the drone flight path and the snapshot image locations. Note that SFM was capable of resolving 3D structure for the entire scene.
Figure 3: Forest structure from TLS and ALS: ALS provides sparse spatial information and is not capable of resolving sub-canopy detail. TLS on the other hand, provides fine spatial variability and resolution along full 3D vertical stands.
Applying NERF directly to the RGB imagery dataset did not result in performance comparable to the case of the Ponderosa pine tree shown in Section 3.1. Without point cloud constraints, the 3D structure extracted by the neural fields in Fig. 5h shows artifacts at large distances. The main reason for these artifacts is that NERF had difficulties in recovering 3D structure from images with objects distributed at far distances (e.g., the ground surface in aerial scanning). We hypothesize that imposing LiDAR point cloud priors can help alleviate this issue. Here, we follow the methodology of [21] and conduct experiments fusing camera and LiDAR information through the learning function in Eq. (6). The LiDAR point cloud uses both co-registered TLS and ALS data, which provides information to constrain both distances in the mid-story below the canopy and those between the ground surface and the tree canopy. The co-registration approach used to align the ALS and TLS point clouds is the one described in Section 4.1. Note that TLS information is not available throughout the entire tested forest area; rather, only one TLS scan was collected. We found the information provided by just one single scan was enough to constrain the relative distances in sub-canopy areas throughout the entire scene. Imposing additional constraints through consistency with the input point cloud shown in Fig. 5g results in the extracted 3D structure shown in Fig. 5. In this case, the point cloud prior imposes constraints that resolve the associated difficulties at large distances. Note that this reconstruction is significantly less sparse than those shown in Figs. 5i-5j obtained from conventional SFM. NERF+LiDAR results in improved resolution, which in turn enables the detection of much finer spatial variability, especially important for current demands in forest monitoring at broad scale. This illustrates the capacity of neural field models not only to represent highly detailed 3D forest structure from aerial multi-view data but also to combine multi-source remotely sensed data (i.e., imagery and LiDAR).
Figure 4: TLS to ALS co-registration: Forest features are well aligned qualitatively between both ALS and TLS sensing.
Figure 5: AI-based extraction of 3D structures from aerial multi-view 2D images + 3D point cloud data inputs. Imposing point cloud priors into 3D structure extraction improves distance ambiguities in structure and resolves artifact issues likely at far ranges.
## 5 Prediction of forest factor metrics
Demonstration of the described capabilities of neural fields for forest monitoring programs consists here of performance evaluations of 3D-structure-derived metrics. These can include, for example, the number of trees, species composition, tree height, diameter at breast height (DBH), and age over a given geo-referenced area. However, since our focus is to demonstrate the usefulness of neural radiance fields for representing 3D forest structure, we only illustrate their potential in predicting the number of trees and DBH over geo-referenced areas. The data used includes overlapping TLS+ALS+GPS+aerial imagery multi-view, multi-modal data collected over forest plot units. Each of these plots represents a location area of varying size: some of 20 x 50 m and others of 15 m radius. The site where data was collected is in northern New Mexico, USA (the NM dataset). The vegetation heterogeneity and topography variability of the landscape are significantly diverse. The NM site contains high-elevation ponderosa pine and mixed-conifer forest (white fir, limber pine, aspen, Douglas fir, and Gambel oak), and the topography is at high elevation and of high variation (between 5,000-10,200 ft). The TLS data was collected using a LiDAR sensor mounted on a static tripod placed at the center of each plot. The ALS data was collected by a Galaxy T2000 LiDAR sensor mounted on a fixed-wing aircraft. The number of LiDAR point returns per volume depends on the sensor and scanning protocol settings (e.g., TLS or ALS, range distribution, number of scans) and varies across plots depending on the heterogeneity of the site. The ground-truth number of trees per plot was obtained through standard forest plot field surveying techniques involving actual physical measurements of live/dead vegetation composition. Data from a total of 250 plots were collected in the NM dataset. In every forest plot, overlapping ALS, GPS, TLS, and multi-view aerial imagery data were collected along with the corresponding field-measured ground truth. Prediction of the number of trees \(y_{1}\) per plot given a point cloud \(\mathbf{X}\) was performed following the approach of the GRNet [27][26]. In general, the methodology consists of computing 2D bounding boxes, each corresponding to a tree detection from a bird's-eye view (BEV). A refinement segmentation approach then follows, which projects each 2D bounding box into 3D space. The points inside each 3D bounding box are then segmented into foliage, upper stem, lower stem, and empty space, and this information is used to improve the estimates of the number of trees. This methodology is applied independently in several case scenarios comparing combinations of remote sensing approaches: (1) neural fields (NF) from aerial RGB images, (2) ALS as in Fig. 6b, (3) TLS as in Fig. 6a, (4) ALS+TLS, (5) NF-RGB images + ALS, (6) NF-RGB images + TLS, (7) NF-RGB images + TLS + ALS. Note that the TLS, ALS, and TLS+ALS prediction results do not make any use of neural fields; their performance is included only for comparison purposes. Table 1 summarizes the root mean squared error (RMSE) results for each of the tested cases.
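For reference, the Table 1 metric is the standard RMSE over the per-plot tree counts; a one-function sketch (names are ours):

```python
import numpy as np

def rmse(n_field, n_pred):
    """Root mean squared error between field-surveyed and predicted
    per-plot tree counts (the metric reported in Table 1)."""
    n_field, n_pred = np.asarray(n_field, float), np.asarray(n_pred, float)
    return float(np.sqrt(np.mean((n_field - n_pred) ** 2)))
```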
The results in Table 1 corroborate some of the trade-offs between the sensing modalities and, in addition, some of the advantages gained through the use of neural fields in forestry. First, the superiority of TLS over ALS data on the number-of-trees metric is mainly due to the presence of sub-canopy information, which is characteristic of in-situ TLS. This is in alignment with current demonstrations in the literature, which have motivated the widespread usage of in-situ TLS in forestry applications even though it is not as spatially scalable as ALS [20]. We would have seen the opposite relationship between TLS and ALS, however, in cases where the plot size is significantly larger than the range of a single in-situ TLS scan, a problem which can be resolved by adding multiple co-registered TLS scans per plot. This limitation is caused by the sensor remaining static at collection time, which makes it more susceptible to occlusions, especially in dense forest areas where trees can significantly reduce the view of TLS at longer ranges. TLS+ALS, on the other hand, overcomes the limitations of the individual LiDAR platforms by filling in the missing information characteristic of each platform. Structure from neural fields using only multi-view RGB images performed slightly worse than both ALS and TLS. This may be due to the limited number of multi-view images collected per plot, the performance of deriving structure from NERF, or the joint performance of NERF in conjunction with the GRNet. Fortunately, fusing neural fields from multi-view imagery with LiDAR shows a significant improvement in all fused cases (i.e., NF+ALS, NF+TLS and NF+ALS+TLS).
\\begin{table}
\\begin{tabular}{l|c|c|c|c|c|c|c|} \\hline
**Method** & NF-RGB & ALS & TLS & ALS+TLS & NF-RGB+ALS & NF-RGB+TLS & NF-RGB+ALS+TLS \\\\ \\hline
**RMSE** & 10.61 & 8.44 & 1.77 & 1.67 & 1.41 & 1.39 & 1.32 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: RMSE Prediction performance of number of trees per plot in NM dataset.
Figure 6: LiDAR derived vegetation attribute estimation for single TLS and ALS.
We see that the prior supervisory signal imposed by the LiDAR point cloud helps guide the resulting 3D structure from NERF, alleviating the artifacts arising at far distances when using multi-view imagery only. We would like to finalize this discussion by highlighting the performance of the NF-RGB+ALS method, which is marginally close to the best-performing method (i.e., NF-RGB+ALS+TLS). The benefit of NF-RGB+ALS is that, with both modalities being airborne, data collection is time- and cost-efficient, in contrast to in-situ remote sensing methods such as TLS. This has significant implications towards achieving both scalable and highly performing forest monitoring programs. In general, one has to balance scalability against performance depending on needs. Our work, instead, offers a method that can potentially achieve performance similar to in-situ methods with the benefit of scalability to landscape extents through computational means.
Additional experiments were conducted to explore the ability of neural fields from terrestrial multi-view imagery to achieve performance near that of TLS on metrics that depend on sub-canopy information. In this case, we evaluated performance on the DBH metric for a total of 200 trees. Ground-truth DBH was manually measured in the field as each tree's stem diameter at a height of 1.3 m. A total of 5 co-registered TLS scans were used per tree, each collected from a different location and viewing each tree from a different perspective, to reduce the effects of occlusion and to remove the degrading effects of lower LiDAR point return densities at farther ranges. Multi-view TLS co-registration was obtained using the method of [4]. Terrestrial multi-view RGB imagery for NERF was collected along an oblique trajectory around each tree, as exemplified in Fig. 1, with \(10-15\) snapshot images per tree. Algorithmic performance for estimating DBH was compared across TLS, ALS, TLS+ALS, and NF-RGB. The estimation approach of [26], relying on fitting a geometric circular shape to the stem at a height of 1.3 m above the ground, was used following their implementation. Performance is measured as the average error as a percentage of the field-measured DBH ground truth, following the work of [26]. Comparison results are reported in Table 2.
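For illustration, a common way to implement such a circular-shape fit is an algebraic least-squares (Kasa) circle fit on a thin stem slice at breast height. The sketch below is our own and may differ in detail from the procedure of [26]; the slice half-width is an assumption.

```python
import numpy as np

def estimate_dbh(stem_xyz, ground_z, h=1.3, half_width=0.05):
    """Estimate DBH by least-squares circle fitting on the stem slice at 1.3 m.

    stem_xyz: (N, 3) points of a segmented stem; ground_z: local ground height.
    Returns the diameter in the units of the input coordinates.
    """
    height = stem_xyz[:, 2] - ground_z
    xy = stem_xyz[np.abs(height - h) < half_width, :2]  # thin slice at breast height
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2.0 * np.sqrt(c + cx ** 2 + cy ** 2)  # diameter = 2r
```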
In Table 2, ALS performs worst on DBH estimation due to its inherently limited sub-canopy resolution. Multi-view TLS, on the other hand, performs best at 1.3\(\%\) error, consistent with the TLS superiority findings in [26] for metrics relying on sub-canopy information. Notably, our neural fields approach from terrestrial imagery performs nearly on par with multi-view TLS, with the additional advantage that RGB camera sensors are simpler to access commercially and significantly cheaper than LiDAR.
In terms of computational specifications, neural radiance fields were trained using a set of 10-50 overlapping multi-view images per scene.
\\begin{table}
\\begin{tabular}{l|c|c|c|c|} \\hline \\hline
**Method** & NF-RGB & ALS & TLS & ALS+TLS \\\\ \\hline
**Avg. error**\\(\\%\\) & 1.7 \\(\\%\\) & 32.7\\(\\%\\) & 1.3\\(\\%\\) & 3.3\\(\\%\\) \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Comparison of sensing modalities on average error DBH estimation.
The fast implementation of [18] was used, with training on the terrestrial and aerial multi-view imagery taking 30-60 s per 3D structure extraction (e.g., per plot in the aerial imagery case, per tree in the terrestrial imagery case). Adding the LiDAR constraints was done following the implementation of [21]. The neural radiance architecture is a multilayer perceptron (MLP) with two hidden layers, a ReLU activation after each hidden layer, and a linear output layer, as in [18]. Training was performed using the ADAM optimizer [14] with parameters \(\beta_{1}\) = 0.9, \(\beta_{2}\) = 0.99, \(\epsilon=10^{-15}\) on an NVIDIA Tesla V100.
The main limitation of neural fields from aerial multi-view imagery is occlusion of sub-canopy structure, especially in densely forested areas. In our case, fusion with TLS data can resolve this problem, as terrestrial data provides highly detailed sub-canopy information. Additionally, when TLS is unavailable, terrestrial imagery can be used instead. Our 3D structure experiments from terrestrial multi-view information in Sec. 3.1 and the DBH estimation performance results demonstrate that highly detailed structure along the entire vertical stand direction can be extracted by neural fields when image information is available. In the absence of multi-view image data, however, neural fields are not capable of generating synthetic information behind occluded areas, and performance on metrics affected by occlusion is expected to yield large errors. This problem can be alleviated through multi-view images capturing the desired areas of interest in the ecosystem.
## 6 Conclusion
In this work, we proposed neural radiance fields as representations that can finely express the 3D structure of forests both at the _in-situ_ and at the broad landscape scale. In addition, the properties of neural radiance fields, in particular the fact that they account for both the origin and direction of radiance to define 3D structure, enable the fusion of data coming from multiple locations and modalities, specifically multi-view LiDARs and cameras. Finally, we evaluated the performance of 3D-structure-derived metrics typically used in forest monitoring programs and demonstrated the potential of neural fields to bring the performance of scalable methods near the level of _in-situ_ methods. This not only benefits sampling time efficiency but also has powerful implications for reducing monitoring costs.
## Acknowledgements
Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number GRROCSRN.
## References
* [1] Atchley, A., Linn, R., Jonko, A., Hoffman, C., Hyman, J.D., Pimont, F., Sieg, C., Middleton, R.S.: Effects of fuel spatial distribution on wildland fire behaviour. International Journal of Wildland Fire **30**(3), 179-189 (2021)
* [2] Besl, P.J., McKay, N.D.: A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence **14**(2), 239-256 (Feb 1992). [https://doi.org/10.1109/34.121791](https://doi.org/10.1109/34.121791)
* [3] Castorena, J., Creusere, C.D., Voelz, D.: Modeling lidar scene sparsity using compressive sensing. In: 2010 IEEE International Geoscience and Remote Sensing Symposium. pp. 2186-2189. IEEE (2010)
* [4] Castorena, J., Dickman, L.T., Killebrew, A.J., Gattiker, J.R., Linn, R., Loudermilk, E.L.: Automated structural-level alignment of multi-view tls and als point clouds in forestry (2023)
* [5] Castorena, J., Puskorius, G.V., Pandey, G.: Motion guided lidar-camera self-calibration and accelerated depth upsampling for autonomous vehicles. Journal of Intelligent & Robotic Systems **100**(3), 1129-1138 (2020)
* [6] Dubayah, R.O., Drake, J.B.: Lidar remote sensing for forestry. Journal of forestry **98**(6), 44-46 (2000)
* [7] FAO, U.: The state of the world's forests 2020. In: Forests, biodiversity and people. p. 214. Rome, Italy (2020). [https://doi.org/https://doi.org/10.4060/ca8642en](https://doi.org/https://doi.org/10.4060/ca8642en)
* [8] Gao, W., Tedrake, R.: Filterreg: Robust and efficient probabilistic point-set registration using gaussian filter and twist parameterization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11095-11104 (2019)
* [9] Ge, X., Zhu, Q.: Target-based automated matching of multiple terrestrial laser scans for complex forest scenes. ISPRS Journal of Photogrammetry and Remote Sensing **179**, 1-13 (2021)
* [10] Hilker, T., van Leeuwen, M., Coops, N.C., Wulder, M.A., Newnham, G.J., Jupp, D.L., Culvenor, D.S.: Comparing canopy metrics derived from terrestrial and airborne laser scanning in a Douglas-fir dominated forest stand. Trees **24**(5), 819-832 (2010)
* [11] Hyyppa, J., Yu, X., Hyyppa, H., Vastaranta, M., Holopainen, M., Kukko, A., Kaartinen, H., Jaakkola, A., Vaaja, M., Koskinen, J., et al.: Advances in forest inventory using airborne laser scanning. Remote sensing **4**(5), 1190-1207 (2012)
* [12] Kajiya, J.T., Von Herzen, B.P.: Ray tracing volume densities. ACM SIGGRAPH computer graphics **18**(3), 165-174 (1984)
* [13] Kankare, V., Joensuu, M., Vauhkonen, J., Holopainen, M., Tanhuanpaa, T., Vastaranta, M., Hyyppa, J., Hyyppa, H., Alho, P., Rikala, J., et al.: Estimation of the timber quality of scots pine with terrestrial laser scanning. Forests **5**(8), 1879-1895 (2014)
* [14] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [15] Lausch, A., Erasmi, S., King, D.J., Magdon, P., Heurich, M.: Understanding forest health with remote sensing-part ii--a review of approaches and data models. Remote Sensing **9**(2), 129 (2017)
* [16] Linn, R., Reisner, J., Colman, J.J., Winterkamp, J.: Studying wildfire behavior using firetec. International journal of wildland fire **11(4)**, 233-246. (2002)
* [17] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: Representing scenes as neural radiance fields for view synthesis. arXiv preprint arXiv:2003.08934 (2020)
* [18] Muller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. **41**(4), 102:1-102:15 (Jul 2022). [https://doi.org/10.1145/3528223.3530127](https://doi.org/10.1145/3528223.3530127)
* [19] Myronenko, A., Song, X.: Point set registration: Coherent point drift. IEEE transactions on pattern analysis and machine intelligence **32**(12), 2262-2275 (2010)
* [20] Pokswinski, S., Gallagher, M.R., Skowronski, N.S., Loudermilk, E.L., Hawley, C., Wallace, D., Everland, A., Wallace, J., Hiers, J.K.: A simplified and affordable approach to forest monitoring using single terrestrial laser scans and transect sampling. MethodsX **8**, 101484 (2021)
* [21] Roessle, B., Barron, J.T., Mildenhall, B., Srinivasan, P.P., Niessner, M.: Dense depth priors for neural radiance fields from sparse input views. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 12892-12901 (2022)
* [22] Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4104-4113 (2016)
* [23] Tomppo, E., Gschwantner, T., Lawrence, M., McRoberts, R.E., Gabler, K., Schadauer, K., Vidal, C., Lanz, A., Staahl, G., Cienciala, E.: National forest inventories. Pathways for Common Reporting. European Science Foundation **1**, 541-553 (2010)
* [24] Vierling, K.T., Vierling, L.A., Gould, W.A., Martinuzzi, S., Clawges, R.M.: Lidar: shedding new light on habitat characterization and modeling. Frontiers in Ecology and the Environment **6**(2), 90-98 (2008)
* [25] White, J.C., Coops, N.C., Wulder, M.A., Vastaranta, M., Hilker, T., Tompalski, P.: Remote sensing technologies for enhancing forest inventories: A review. Canadian Journal of Remote Sensing **42**(5), 619-641 (2016)
* [26] Windrim, L., Bryson, M.: Detection, segmentation, and model fitting of individual tree stems from airborne laser scanning of forests using deep learning. Remote Sensing **12**(9), 1469 (2020)
* [27] Xie, H., Yao, H., Zhou, S., Mao, J., Zhang, S., Sun, W.: Grnet: Gridding residual network for dense point cloud completion. In: ECCV (2020)
# Improving Landslide Detection on SAR Data through Deep Learning
Lorenzo Nava
Department of Earth Sciences
University of Florence
Florence 50121, Italy
[email protected]
Oriol Monserrat
Department of Remote Sensing
Centre Tecnologic Telecomunicacions Catalunya
Castelldefels 08860, Spain
[email protected]
Filippo Catani
Department of Geosciences
University of Padua
Padua 35131, Italy
[email protected]
## 1 Introduction
In the world, several natural phenomena, like earthquakes and intense rainfalls, sometimes combined with windstorms, can trigger multiple landslide events, which can occur in groups of hundreds or even thousands in a region [1-5]. Those events can cause noticeable damage to natural and human infrastructure, with heavy economic and social impacts [6]. Therefore, there is a growing need to intervene quickly and efficiently in the affected areas. Numerous techniques have been developed to produce susceptibility maps [7], [8], and mitigation strategies [9], [10]. Undoubtedly, in the case of multiple occurrences of landslides over a large area, the most common mapping method relies on remote sensing data because of its potential to quickly map geological features of whole regions, without physical contact with the investigated areas. A vast amount of research has been performed to this end with knowledge-based methods, multiple regression, the analytic hierarchy process [11], and machine learning (ML) techniques [12-15]. However, as time is a key factor when mapping natural disaster effects [16], the use of optical data alone can have serious limitations in the presence of cloud cover. A possible solution, offered by the use of SAR satellites, has so far been exploited only scantly, because landslide mapping algorithms have been mainly developed to run on optical data together with ancillary geological and topographic stationary information, and because SAR data require specific analysis methods and classification algorithms, not yet fully developed for mass movements on natural slopes. Therefore, a research gap remains in the lack of reliable image analysis methods to extract landslide mapping information from SAR images. Furthermore, no automatic method is available in the literature that employs the combination of CNNs and SAR data to extract spatial landslide information, probably due to the characteristics of SAR data, which provide different information compared to the mainly exploited optical data, making the development of innovative techniques essential. Recently, deep learning (DL) approaches and, mainly, convolutional neural networks (CNNs) have been used in various remote sensing tasks on VHR imagery, such as classification, segmentation [17], and object detection [18]. Nevertheless, few studies use CNNs for landslide detection. Ding et al. [19] carried out automatic recognition of landslides at the pixel scale based on CNNs on GF-1 images with four spectral bands (blue, green, red, and near-infrared) and a spatial resolution of 8 m, achieving a detection rate of 72.5%, a false-positive rate of 10.2%, and an overall accuracy of 67%. Ghorbanzadeh et al. [20] evaluated various ML algorithms on VHR optical data from the RapidEye satellite and topographic factors, achieving 78.26% mIOU for a small window-size CNN. Another interesting study was carried out by Catani [21], in which the author evaluates different state-of-the-art CNNs on non-nadiral and crowdsourced optical images of landslides to classify them, achieving overall accuracies between 87% and 90%. Indeed, optical images are great tools, but they present limitations due to cloud cover, which, in regions such as the Congo River basin, Equatorial South America, and Southeast Asia, can reach annual cloud frequencies higher than 80% [22].
In some cases, the first cloudless image available in the service after a multiple landslide event arrived about a month later, as happened in Chile in December 2017, in Nepal in 2015 [23], and in Ecuador in 2016. Although numerous methods and techniques have been applied to SAR data to detect landslides, as can be seen in the review by Mondini et al. [24], our study is the first that provides, to this end, a DL-based method employing the combination of CNNs and SAR data. Furthermore, the accuracy achieved is comparable to optical change detection.
In the remainder of this letter, we show and discuss a deep-learning-based method to detect landslides even under illumination or atmospheric conditions unfavorable for landform mapping. We used convolutional neural networks (CNNs) on various combinations of Sentinel-1 data and topographic factors. We apply CNN methods to landslide detection based on ground range detected (GRD) SAR imagery from the Sentinel-1 satellite, which, in a study carried out by Mondini et al. [25], showed unambiguous changes of amplitude caused by landslides in about 84% of the cases. Lastly, we compare the results obtained on eight different SAR-based datasets with an RGB dataset, and we provide the mapping of the study area using the best models.
## 2 Study Area and Materials
### Study Area

Our case study area lies in the Iburi sub-prefecture of Hokkaido, Japan. It comprises the towns of Atsuma, Mukawa, and Abira, which have populations of less than 10,000 and a low population density of 17 inhabitants/km2. The morphology of the area is composed mainly of hills; the maximum height is less than 800 m, while the average elevation is 160 m. The basement complex of the region is mainly composed of sedimentary rocks of the Neogene tertiary system: layers of sandstone and mudstone, sandstone, conglomerate, and diatomaceous siltstone [26, 27]. In the study area, at 03:08 local time (JST) on September 6, 2018, an Mw 6.6 earthquake (the Hokkaido Eastern Iburi earthquake, HEIE) struck. It was activated by the rupture of a low-activity blind fault with the epicenter located at 42.690 N, 142.007 E [28]. The event triggered about 7,837 coseismic landslides, most of which occurred in locations where the elevation is less than 300 m [4]. The majority of the landslides, which slid over air-fall pumice and ash layers, are shallow and are classified as planar type or spoon type [29].
### Materials

The Sentinel-1 mission encompasses two polar-orbiting satellites that perform C-band synthetic aperture radar imaging [30], while the Copernicus Sentinel-2 mission comprises two polar-orbiting satellites that sample 13 spectral bands: four bands at 10 m, six bands at 20 m, and three bands at 60 m spatial resolution [31]. We downloaded Sentinel-1 and Sentinel-2 images from the Copernicus Open Access Hub [32]. The first images (5 x 20 m) were acquired in Level-1 Ground Range Detected (GRD) mode, with VV and VH polarization, and Interferometric Wide (IW) acquisition mode. GRD products are focused SAR data that have been detected, multi-looked, and projected to ground range using an Earth ellipsoid model [33]. Images were acquired on three different days: a) 01 September 2018, b) 13 September 2018, c) 25 September 2018. Lastly, a Sentinel-2 VHR RGB image (10 x 10 m) was obtained from the first cloud-free day after the multiple landslide event (20 October 2018).
Surface topography is one of the most influential elements regarding landslides in hilly and mountainous areas and, above all, the slope angle is considered an essential component of slope stability analysis [34]. Therefore, in this study, we used a 30 m resolution digital elevation model (DEM) acquired from USGS Earth Explorer [35] and the slope angle to evaluate their impact on the SAR data. As noted, we designed an RGB dataset and eight SAR-based datasets, each composed of a different combination of bands, as described in Table 2.
## 3 Methodology
The radar information from Sentinel-1 is exploited in combination with topographic factors to evaluate the performance of a CNN in detecting landslides. The workflow of this study is as follows:
* Designing nine different datasets: one composed of RGB images; four composed of SAR data plus a topographic factor as the third band (two with slope and two with DEM); and four composed only of SAR data.
* Training the CNN on each training dataset and validating the performance on the corresponding test dataset sampled in the study area.
* Visualizing the predictions of the best models on the entire study area.
In the following sections, we give a description and the results of this workflow. Additional information and considerations are provided in the conclusion section.
### Dataset Pool
To create the datasets, each of the composite images in Table 2 is sampled in the GIS environment using the '_Export training data for deep learning_' tool, which, after numerous polygons are drawn on known landslides, samples several clips per polygon with a predefined shape, stride, and resolution. Polygon positions are chosen to capture the high variability of both classes, taking care to include various landslide shapes, orientations, and dimensions. Besides, we include different land covers, such as urban, wooded, and agricultural areas, in the _Non-landslide_ class. In all the datasets, the size of the images is 25 x 25 pixels, which is equivalent to a spatial extent of 250 x 250 m for optical RGB images and 125 x 420 m for SAR products. We selected this image size based on cross-validation for our study area.
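A minimal sketch of this clip-sampling step, emulating the GIS export tool in NumPy (function and variable names are our own), is:

```python
import numpy as np

def export_training_chips(composite, centers, size=25):
    """Clip size x size training chips from a composite raster of Table 2,
    around pixel centers drawn inside the labeled polygons."""
    half = size // 2
    chips = []
    for r, c in centers:
        inside = (half <= r < composite.shape[0] - half and
                  half <= c < composite.shape[1] - half)
        if inside:
            chips.append(composite[r - half:r + half + 1, c - half:c + half + 1])
    return np.stack(chips)  # (n_chips, 25, 25, n_bands)
```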
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline
**Region and Year** & **Sentinel-2** & **Sentinel-1** \\\\ \\hline & & 01-09 \\\\ Iburi, Hokkaido 2018 & 20-10 & 13-09 \\\\ & & 25-09 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Dates of Acquisition of Sentinel-2 and Sentinel-1 for the Study Area in Hokkaido in 2018.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Dataset Name** & **Band1** & **Band2** & **Band3** \\\\ \\hline RGB & Red & Green & Blue \\\\ SSD & VV after-event & VH after-event & DEM \\\\ SSS & VV after-event & VH after-event & Slope \\\\ BAD & VV before-event & VV after-event & DEM \\\\ BAS & VV before-event & VV after-event & Slope \\\\ HHH & VH before-event & VH after-event & VH after-event \\\\ BAA & VV before-event & VV after-event & VV after-event \\\\ BAC & VV before-event & VV after-event & VV after-event - \\\\ BAH & VV before-event & VV after-event & VH after-event \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Composite images created with corresponding dataset name.
### Supervised Classification
Recently, deep-learning methods and, above all, CNNs have accomplished reliable results in various tasks in computer vision, proving to be the state-of-the-art in this field [36, 37]. A CNN can autonomously learn hierarchical feature representations of an image, enabling it to perform classification tasks directly from images, e.g., recognizing a specific image without using human-designed features [38]. The peculiarity of a CNN is its architecture, which is composed of tens or hundreds of so-called hidden layers that perform convolutional and pooling operations. During the training process, an input image flows through the network, which scans it with a set of trainable kernels, resulting in a group of feature maps (forward phase); then gradients are back-propagated, and parameters are updated (backward phase). The output of each convolutional layer is filtered by an activation function (e.g., sigmoid, ReLU, Softmax, hyperbolic tangent) that performs non-linear transformations [39]. The pooling layer, usually placed after the convolutional layer, aims to simplify the output by performing non-linear down-sampling to reduce the number of parameters.
In this letter, we modify a network provided by the TensorFlow Core _Image Classification_ library [40]. The model consists of three convolution blocks of 3x3 filters, each followed by a max-pooling layer. Dropout is applied, and the fully connected layer is activated by a ReLU activation function:
\[R(z)=\max(0,z) \tag{1}\]
The last fully connected layer has 2 units, matching the number of classes we defined, and is activated by a Softmax activation function:
\\[\\sigma(x)_{j}=\\frac{e^{z_{j}}}{\\sum_{k=1}^{K}e^{z_{k}}}\\text{ for }j=1,\\ldots,K\\text{ and }z\\in R \\tag{2}\\]
As the optimizer, we used a stochastic gradient descent method based on adaptive estimation of first- and second-order moments (Adam), suited to problems with very noisy and/or sparse gradients [41]. The Sparse Categorical Crossentropy (SCC) function, an integer-label version of the Categorical Crossentropy function, is used as the loss:
\\[Loss=-{\\sum_{i=1}^{M}y_{i}}\\cdot\\log\\hat{y}_{i} \\tag{3}\\]
where \\(\\hat{y}_{i}\\) is the \\(i\\)-th scalar value in the model output, \\(y_{i}\\) is the corresponding target value, and M, the output size is the number of scalar values in the model output. The SCC function computes the cross-entropy loss between the labels and predictions providing labels as integers [42]. The CNN is implemented using the Google's library TensorFlow [43]. Furthermore, many augmentation techniques are applied on the dataset, such as vertical and horizontal _Random Flip_, _Random Rotation_, _Random Zoom_, and _Random Translation_. Those are chosen considering that flipping, rotating, zooming, and translating a satellite landslide image results in new images of landslides with a possible real dimension and orientation. The trained model is used to predict classes on a labeled test dataset composed of images with the same number and type of bands of the training dataset. The output contains two classes, and accuracy (4), precision (5), and
Figure 1: CNN architecture.
The output contains two classes, and accuracy (4), precision (5), and recall (6) are calculated using these values according to the following formulas, where TP, TN, FP, and FN are true and false positives and negatives, respectively.
\\[\\text{Accuracy = }\\frac{TP+TN}{TP+FP+TN+FN} \\tag{4}\\]
\\[\\text{Precision = }\\frac{TP}{TP+FP} \\tag{5}\\]
\\[\\text{Recall = }\\frac{TP}{TP+FN} \\tag{6}\\]
### Object Detection
The study area is analyzed through the sliding window [44] method to detect landslides. The resulting maps are shown with the shapefile of the pre-mapped landslides superimposed (yellow) and a red point at the center of each patch predicted as _Landslide_. The step of the window is 2 pixels. Patches are extracted at 25 x 25 pixels and then resampled to match the input size of the network (\(n\) x 32 x 32 x 3).
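A sketch of this scan follows; names are our own, the _Landslide_ class is assumed to be index 1, and in practice the clips would be batched before prediction rather than classified one at a time.

```python
import numpy as np
import tensorflow as tf

def detect_landslides(composite, model, patch=25, step=2, net_in=32):
    """Slide a patch x patch window over the composite image with the given
    step and return the centers of windows classified as Landslide."""
    centers = []
    h, w = composite.shape[:2]
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            clip = composite[r:r + patch, c:c + patch].astype("float32")
            clip = tf.image.resize(clip[None], (net_in, net_in))  # n x 32 x 32 x 3
            if np.argmax(model.predict(clip, verbose=0)[0]) == 1:
                centers.append((r + patch // 2, c + patch // 2))
    return centers
```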
All experiments were executed on a macOS computer with a 6-core 2.2 GHz Intel Core i7, a 256 GB SSD for quick access to applications and datasets, and 16 GB of RAM.
## 4 Results
### Landslide Classification
We trained each model for the optimal number of epochs and with the learning rate that best fitted each specific problem. Table 3 shows the results for the different datasets. As expected, the model trained and tested on RGB optical images performs well, achieving the best results. Focusing on the SAR datasets, the BAH and BAA achieved the highest overall accuracies, with 93.33% and 94.17%, respectively. The BAH also achieved the highest recall, with a value of 91.25%. The two datasets including the slope as a third band, SSS and BAS, generally achieved the lowest accuracies, with 80.84% and 88.55%, respectively. Lastly, considering the HHH and the BAA, which are composed of the same combination of bands but with different polarizations, the results show that the VV polarization is more discriminative for detecting landslides than the VH.
To accomplish a visual evaluation of the predictions of the best models trained in this study, we analyzed the entire study area through the process explained in Section 3.3. Figure 2 shows the resulting mapping obtained with the models trained on the BAA and BAH datasets.
\\begin{table}
\\begin{tabular}{c c c c} \\hline \\hline
**Dataset Name** & **Accuracy(\\%)** & **Precision(\\%)** & **Recall(\\%)** \\\\ \\hline RGB & 99.20 & 99.60 & 98.81 \\\\ SSD & 86.63 & 92.24 & 80.16 \\\\ SSS & 80.84 & 82.77 & 78.17 \\\\ BAD & 89.96 & 95.83 & 83.47 \\\\ BAS & 88.55 & 96.58 & 79.84 \\\\ HHH & 81.87 & 93.22 & 68.75 \\\\ BAA & **94.17** & **97.32** & 90.83 \\\\ BAC & 90.83 & 93.36 & 87.92 \\\\ BAH & 93.33 & 95.22 & **91.25** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Accuracy, precision, and recall of the CNN tested on the nine test datasets described in Table 2. The best results achieved with SAR data are highlighted.
## 5 Conclusion
In this letter, we analyzed and discussed various combinations of SAR imagery and topographic factors, and we proposed a DL-based method for landslide detection that can locate landslides in a short time, including at night and in the presence of cloud cover. The best overall accuracies were accomplished by the BAA and BAH datasets, composed of just three SAR bands: 94.17% and 93.33%, respectively, only 5.03% and 5.87% less than with the RGB dataset, but with the advantage of being applicable also during storms or at night. It is therefore possible to perform landslide detection without VHR optical images, with similar accuracy values, using the combination of SAR data in the BAA or BAH datasets. Moreover, VV and VH amplitude imagery from Sentinel-1 is an open-source product available in almost all parts of the globe. The results achieved so far are promising and suggest that SAR Sentinel-1 images and deep learning models are a trustworthy combination for locating rapid landslides. SAR images allow us to obtain information about landslides even during a rain event or at night, when optical imagery is unusable or unavailable because of cloud cover. The method could therefore also yield various benefits for emergency management and civil protection operations, such as significantly decreasing emergency response time in various scenarios by increasing the quality of hazard mapping and risk assessments.
Figure 2: Predictions of the entire study area by the models trained on the BAA and BAH datasets. (BAA: Accuracy = 94.17%; Precision = 97.32%; Recall = 90.83%. BAH: Accuracy = 93.33%; Precision = 95.22%; Recall = 91.25%.)
## References
* [1] Serey, A., Pinero-Feliciangeli, L., Sepalveda, S.A. et al. \"Landslides induced by the 2010 Chile megathrust earthquake: a comprehensive inventory and correlations with geological and seismic factors.\" _Landslides_ vol. 16, pp. 1153-1165, 2019.
* [2] Song, K., Wang, F., Dai, Z. et al. \"Geological characteristics of landslides triggered by the 2016 Kumamoto earthquake in Mt. Aso volcano.\" _Japan. Bull Eng Geol Environ_ vol. 78, pp. 167-176, 2019.
* [3] Chunga, K., Livio, F.A.,Martillo, C., Lara-Saavedra, H., Ferrario, M.F., Zevallos, I., Michetti, A.M. \"Landslides Triggered by the 2016 Mw 7.8 Pedernales, Ecuador Earthquake: Correlations with ESI-07 Intensity, Lithology, Slope and PGA-h.\" _Geosciences_, vol. 9, pp. 371, 2019.
* [4] Wang, F., Fan, X., Yunus, A.P. et al. \"Coseismic landslides triggered by the 2018 Hokkaido, Japan (Mw 6.6), earthquake: spatial distribution, controlling factors, and possible failure mechanism.\" _Landslides_ vol. 16, pp. 1551-1566, 2019.
* [5] Ferrario, M.F. \"Landslides triggered by multiple earthquakes: insights from the 2018 Lombok (Indonesia) events.\" _Nat Hazards_ vol. 98, pp. 575-592, 2019.
* [6] H. Hong, W. Chen, C. Xu, A. M Youssef, B. Pradhan, D. Tien Bui. \"Rainfall-induced landslide susceptibility assessment at the Chongren area (China) using frequency ratio, certainty factor, and index of entropy.\" _Geocarto Int._, vol. 32, pp. 139-154, 2017.
* [7] F. Catani,D. Lagomarsino, S. Segoni, V. Tofani, \"Landslide susceptibility estimation by random forests technique: sensitivity and scaling issues.\" _Nat. Hazards Earth Syst. Sci._, vol. 13, 2815-2831, 2013.
* [8] B. Pradhan, \"A comparative study on the predictive ability of the decision tree, support vector machine and neuro-fuzzy models in landslide susceptibility mapping using GIS.\" _Comput. Geosci._, 51, 350-365, 2013.
* [9] L. Solway, \"Socio-economic perspective of developing country megacities vulnerable to flood and landslide hazards.\" _Floods and Landslides: Integrated Risk Assessment_, Springer: Berlin, Heidelberg, pp. 245-260, 1999.
* [10] V. Svalova, \"Landslide risk management for urbanized territories.\" _Risk Management Treatise for Engineering Practitioners_, Oduoza, C.F., Ed., IntechOpen: London, UK, pp. 1-19, 2018.
* [11] D. Myronidis, C. Papageorgiou, S. Theophanous, \"Landslide susceptibility mapping based on landslide history and analytic hierarchy process (AHP).\" _Natural Hazards_, vol. 81, pp. 245-263, 2016.
* [12] B. Feizizadeh, O. Ghorbanzadeh, \"GIS-based interval pairwise comparison matrices as a novel approach for optimizing an analytical hierarchy process and multiple criteria weighting.\" _GI Forum_, vol. 1, pp. 27-35, 2017.
* [13] W. Chen, X. Xie, J. Wang, B. Pradhan, H. Hong, D. T. Bui, Z. Duan, J. Ma, \"A comparative study of logistic model tree, random forest, and classification and regression tree models for spatial prediction of landslide susceptibility.\" _Catena_, vol. 151, pp. 147-160, 2017.
* [14] Y. Aimaiti, W. Liu, F. Yamazaki, Y. Maruyama, "Earthquake-induced landslide mapping for the 2018 Hokkaido Eastern Iburi Earthquake using PALSAR-2 data." _Remote Sensing_, vol. 11, p. 2351, 2019.
* [15] Reichenbach, P., Rossi, M., Malamud, B. D., Mihir, M., and Guzzetti, F. \"A review of statistically-based landslide susceptibility models.\" _Earth-Science Reviews_, vol. 180, pp. 60-91, 2018.
* [16] S. Voigt, T. Kemper, T. Riedlinger, R. Kiefl, K. Scholte, H. Mehl, \"Satellite image analysis for disaster and crisis-management support.\" _IEEE Trans. Geosci. Remote Sens._, vol. 45, pp. 1520-1528, May 2007.
* [17] M. Langkvist, A. Kiselev, M. Alirezaie, A. Loutfi, \"Classification and segmentation of satellite orthoimagery using convolutional neural networks.\" _Remote Sens._, vol. 8, p. 329, 2016.
* [18] M. Radovic, O. Adarkwa, Q. Wang, \"Object recognition in aerial images using convolutional neural networks.\" _J. Imaging_, vol. 3, p. 21, 2017.
* [19] A. Ding, Q. Zhang, X. Zhou, B. Dai, \"Automatic recognition of landslide based on CNN and texture change detection.\" _Proceedings of the Chinese Association of Automation (YAC)_, Youth Academic Annual Conference, Wuhan, China, 11-13 November 2016; IEEE: Wuhan, China, pp. 444-448, 2016.
* [20] O. Ghorbanzadeh, T. Blaschke, K. Gholamnia, S. R. Meena, D. Tiede, J. Aryal, \"Evaluation of Different Machine Learning Methods and Deep-Learning Convolutional Neural Networks for Landslide Detection.\" _Remote Sens._, vol. 11, p. 196, 2019.
* [21] F. Catani, \"Landslide detection by deep learning of non-nadiral and crowdsourced optical images.\" _Landslides_, vol. 18, pp. 1025-1044, 2020.
* [22] A. M. Wilson, W. Jetz, \"Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.\" _PLoS Biology_, vol. 14, p.e1002415, 2016.
* [23] J. G. Williams, N. J. Rosser, M. E. Kincey, J. Benjamin, K. J. Owen, A. L. Densmore, D. G. Milledge, T. R. Robinson, C. A. Jordan, T. A. Dijkstra, \"Satellite-based emergency mapping using optical imagery: Experience and reflections from the 2015 Nepal earthquakes.\" _Nat. Hazards Earth Syst. Sci._, vol. 18, pp. 185-205, 2018.
* [24] A. C. Mondini, M. Santangelo, M. Rocchetti, E. Rossetto, A. Manconi, O. Monserrat, \"Sentinel-1 SAR Amplitude Imagery for Rapid Landslide Detection.\" _Remote Sens._, vol. 11, p. 760, 2019.
* [25] A. C. Mondini, F. Guzzetti, K. T. Chang, O. Monserrat, T. R. Martha, A. Manconi, \"Landslide failures detection and mapping using Synthetic Aperture Radar: Past, present and future.\" _Earth-Science Reviews_, p. 103574, 2021.
* [26] K. Matsuno, M. Ishida, "Geological map of Hayakita in scale of 50,000." _Geological Survey of Japan_, Tokyo, Japan, 1960.
* [27] N. Osanai, T. Yamada, S. Hayashi, S. Kastura, T. Furuichi, S. Yanai, Y. Murakami, T. Miyazaki, Y. Tanioka, S. Takiguchi, M. Miyazaki, \"Characteristics of landslides caused by the 2018 Hokkaido Eastern Iburi Earthquake.\" _Landslides_, vol. 16, pp. 1517-1528, 2019.
* [28] \"The 2018 Hokkaido eastern Iburi earthquake: fault model (preliminary).\" Geospatial Information Authority of Japan, _GSI_ 2018. [Online]. Available: [https://www.gsi.go.jp/cais/topic180912-index-e.html](https://www.gsi.go.jp/cais/topic180912-index-e.html)
* [29] H. Yamagishi, F. Yamazaki, \"Landslides by the 2018 Hokkaido Iburi-Tobu Earthquake on September 6.\" _Landslides_, vol. 15, pp. 2521-2524, 2018.
* [30] Sentinel-1, Sentinel Online, The European Space Agency. [Online]. Available: [https://sentinel.esa.int/web/sentinel/missions/sentinel-1/overview](https://sentinel.esa.int/web/sentinel/missions/sentinel-1/overview)
* [31] Sentinel-2, Sentinel Online, The European Space Agency. [Online]. Available: [https://sentinel.esa.int/web/sentinel/missions/sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
* [32] ESA. Copernicus Open Access Hub. [Online]. Available: [https://scihub.copernicus.eu/dhus/#/home](https://scihub.copernicus.eu/dhus/#/home)
* [33] Level-1 GRD Products, The European Space Agency. [Online]. Available: [https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-algorithms/ground-range-detected](https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-algorithms/ground-range-detected)
* [34] J. A. Coe, J. W. Godt, R. L. Baum, R. C. Bucknam, J. A. Michael, "Landslide susceptibility from topography in Guatemala." _Landslides: Evaluation and Stabilization_, vol. 1, pp. 69-78, 2004.
* [35] USGS. Earth Explorer. [Online]. Available: [https://earthexplorer.usgs.gov/](https://earthexplorer.usgs.gov/)
* [36] A. Krizhevsky, I. Sutskever, G.E. Hinton, \"Imagenet classification with deep convolutional neural networks.\" _Advances in Neural Information Processing Systems_, vol. 25, pp.1097-1105, 2012.
* [37] H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M. Summers, \"Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning.\" _IEEE Trans. Med. Imaging_, vol. 35, pp. 1285-1298, February 2016.
* [38] P. Marcelino, \"Transfer learning from pre-trained models.\" _Towards Data Science_, October 2018.
* [39] D. Strigl, K. Kofler, S. Podlipnig, \"Performance and scalability of GPU-based convolutional neural networks.\" _2010 18th Euromicro International Conference on Parallel, Distributed and Network-Based Processing. IEEE_, pp. 317-324, 17 February 2010.
* [40] Image Classification, Tensorflow Core Tutorial, Tensorflow. [Online]. Available: [https://www.tensorflow.org/tutorials/images/classification](https://www.tensorflow.org/tutorials/images/classification)
* [41] D. P. Kingma and J. Ba. \"Adam: A method for stochastic optimization.\" Unpublished paper, 2014. [Online]. Available: [https://arxiv.org/abs/1412.6980](https://arxiv.org/abs/1412.6980)
* [42] Sparse Categorical Crossentropy, Tensorflow Core v2.4.1, Tensorflow. [Online]. Available: [https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy)
* [43] M. Abadi et al. \"TensorFlow: Large-scale machine learning on heterogeneous distributed systems.\" Unpublished paper, 2016. [Online]. Available: [https://arxiv.org/abs/1603.04467](https://arxiv.org/abs/1603.04467).
* [44] C. H. Lee, C. R. Lin, M. S. Chen, \"Sliding-Window Filtering: An Efficient Algorithm for Incremental Mining\". _Proceedings of the tenth international conference on Information and knowledge management_, pp. 263-270, October 2001. | In this letter, we use deep-learning convolution neural networks (CNNs) to assess the landslide mapping and classification performances on optical images (from Sentinel-2) and SAR images (from Sentinel-1). The training and test zones used to independently evaluate the performance of the CNNs on different datasets are located in the eastern Iburi subpredecture in Hokkaido, where, at 03.08 local time (JST) on September 6, 2018, an Mw 6.6 earthquake triggered about 8000 coseismic landslides. We analyzed the conditions before and after the earthquake exploiting multi-polarization SAR as well as optical data by means of a CNN implemented in TensorFlow that points out the locations where the Landslide class is predicted as more likely. As expected, the CNN run on optical images proved itself excellent for the landslide detection task, achieving an overall accuracy of 99.20% while CNNs based on the combination of ground range detected (GRD) SAR data reached overall accuracies beyond 94%. Our findings show that the integrated use of SAR data may also allow for rapid mapping even during storms and under dense cloud cover and seems to provide comparable accuracy to classical optical change detection in landslide recognition and mapping.
Landslides deep learning (DL) convolution neural networks(CNNs) image classification remote sensing (RS) Sentinel-1 Sentinel-2 landslide detection synthetic aperture radar (SAR) Tensorflow. | Write a summary of the passage below. | 296 |
arxiv-format/1912_05844v1.md | **A Bilateral River Bargaining Problem with Negative Externality**
Shivshanker Singh Patel\\({}^{1}\\), Parthasarathy Ramachandran\\({}^{2}\\)

\\({}^{1}\\) Indian Institute of Management Visakhapatnam, Andhra Pradesh, INDIA. Email: [email protected]

\\({}^{2}\\) Department of Management Studies, Indian Institute of Science Bangalore, INDIA
# Introduction
A large number of rivers flow across political boundaries, with conflict arising out of the need to share the water flowing across the boundary in a fair and equitable manner. It has been estimated that there are some 267 water bodies (lakes and rivers) that span political borders, with the Nile, Ganges, and Danube being some prominent examples (Giordano and Wolf, 2003). Increasing water scarcity makes the water in these rivers a source of conflict. Even rivers within a single national border are contested by different cities, states, and user groups. Examples of such shared river courses that ended up as disputes include the Mekong, Indus and Nile. This problem is studied in the economics literature as the _river water sharing problem_ (Ambec and Sprumont, 2002). A negotiated settlement has been widely advocated for resolving the conflict arising out of the river water sharing problem. Some examples of settlements arising out of established treaties or tribunals are:
\\(\\bullet\\) Rio Grande: an interstate water sharing treaty between the states of Colorado, New Mexico, and Texas in the United States (Rio Grande compact)

\\(\\bullet\\) Cauvery river: a national tribunal (constituted by the Government of India) to adjudicate the water sharing conflict between the states of Karnataka and Tamilnadu
\\(\\bullet\\) Indus River: World Bank brokered treaty between India and Pakistan (Treaty for Common development of river basin)
Some other case studies of international river treaties are discussed by Barret (1994). Though these agreements and tribunal awards seek to resolve the disputes, they are not effective in removing the conflict completely. Conflict situations arise whenever there is a drought-like situation leading to reduced flow and associated negative externalities. Hence, Kilgour and Dinar (1995) developed a water allocation model using the notion of transferable utility. This approach to the water sharing problem might overcome the often political nature of the conflict, as the agents might find it in their best interest to agree on and implement the allocations. This notion of transferable utility and its use in the context of the river water sharing problem was expanded later by others, including Ambec and Sprumont (2002), Wang (2011), Dinar et al. (2010), and Brink et al. (2012).
Pollution and floods are two major sources of negative externalities that originate from one agent but also impact the other agents along the river stream, imposed by the upstream agent on the downstream agents in a river sharing scenario. In this article the negative externality associated with pollution is considered for analysis. We build on the past literature in two respects: first, the literature on the river sharing problem with respect to benefit maximization of agents; and second, the river sharing problem with negative externalities induced by pollution. The next section provides a brief literature review of the river sharing problem and the associated negative externalities.
## 2 Related Literature
The claims of agents along a trans-boundary watercourse normally follow two principles:
\\(\\bullet\\)_Absolute Territorial Sovereignty (ATS) or The Harmon Doctrine:_ It gives a riparian state full control over all waters generated within its territory; the state can utilize those waters without considering the dependent livelihoods or claims of the other co-riparian agents. This principle is typically favored by the upper riparian agents.
\\(\\bullet\\)_Absolute Territorial Integrity (ATI):_ This theory allows a downstream riparian state to demand the full flow of the river from an upper riparian state without compromise in quantity or quality. Absolute territorial integrity logically makes more sense to lower riparian agents.
While these principles appeal to individual agents, they contradict each other and hence may not be usable in practice. In general, an agent's claim is always larger than her endowment. The agents' overlapping claims to river water make water a contested resource (Ansink and Weikard, 2009). The upstream agents who prefer ATS disagree on the amount of water to be shared with downstream agents. Ambec and Sprumont (2002) proposed a compromise between the two doctrines by treating the ATS doctrine preferred by the upper riparian agent as a "core-like" constraint and the ATI doctrine favored by the lower riparian agent as "legitimate aspirations". The ATS and ATI principles address the right to use the river water without considering the responsibilities of the agents towards each other. Ni and Wang (2007) and Dong et al. (2012) later re-interpreted ATS and ATI to include the responsibilities of the agents towards each other.
The existing literature on the _river sharing problem_ is influenced largely by the research of Ambec and Sprumont (2002) and Ambec and Ehlers (2008). Their framework was described as a Transferable Utility (TU) game with a benefit function that is strictly concave with a single peak. This later motivated Wang (2011) to develop a bargaining framework that provides a mechanism for the transferable utility game of the river sharing problem. The bargaining framework discussed by Wang (2011) recognizes the existence of negative externalities but does not incorporate them in the developed framework.
The _river sharing problem_ becomes more critical in the presence of negative externalities. In this context the negative externality is mostly in the form of pollution caused by an (upstream) agent. Alcalde-Unzu et al. (2015) show that when the negative externalities of the upstream agents are unknown, the clean-up cost vector gives useful information for estimating limits on the responsibility of each agent. Their results yield a cost allocation rule, the Upstream Responsibility Rule (URR), which is claimed to be "fair". On the other hand, non-cooperative water allocation between heterogeneous communities in an acyclic network of water sources is studied by Rebille and Richefort (2012). Their solution is to impose an (optimal) tax on an agent that reflects the marginal damages and the marginal benefits that one agent transfers to the others.
Nevertheless, in the case of trans-boundary rivers there are often no higher institutions that can enforce taxes on agents. Therefore, cooperation is required, together with the implementation of solution concepts that provide optimal outcomes robust against free riding (Chander and Tulkens, 1997). Ni and Wang (2007) take up the issue of river pollution and negative cost, and incorporate the principles of ATS and ATI in their analysis. They propose two methods to deal with the negative externalities due to pollution, namely the Local Responsibility Sharing (LRS) method and the Upstream Equal Sharing (UES) method. They provide an axiomatic characterization of these methods and show that both approaches coincide with the Shapley value solutions of the respective games.
Further, Dong et al. (2012) extended this work and proposed a new method of _Downstream Equal Sharing_ (DES). These analyses provide, for the allocation of negative costs, an analytical treatment similar to what Ambec and Ehlers (2008) provided for water sharing and the associated welfare allocation. Dong et al. (2012) also proposed the _Shapley value_ and the _core_ as solutions within the framework of cooperative game theory. The VCG (Vickrey-Clarke-Groves) mechanism and Polluter Pays rules for allocating the negative cost among the agents are also discussed in the earlier literature (Ambec and Ehlers, 2014).
In this paper, we address the negative externality that is imposed by an upstream agent on a downstream agent in the form of pollution. The negative externality can be characterized by variables such as the inflow of water, the benefit associated with water usage, and a negative cost to mitigate the pollution. The Pigovian tax approach is not easy to implement in the context of a negative externality in a trans-boundary river sharing problem, since there may be no superior institutional or regulatory body that can enforce a tax regime. Coase (1960) showed that when agents are affected by negative externalities, efficient equilibrium outcomes are possible through market mechanisms irrespective of the initial allocation of property rights. In the two-agent river sharing problem, the water inflow into an agent's territory defines her initial property rights according to the ATS doctrine. If we assume that this assignment of property rights requires that the agents take responsibility for the externalities they cause, then applying a bargaining framework between the two agents that incorporates the negative externalities results in an efficient outcome.
Hence, we propose a market-based negotiated treaty that accommodates negative externalities. The upstream agent agrees to incorporate in her benefit function a negative cost for polluting the river, and in turn gets the opportunity of trading (selling) surplus water to the downstream agent. The downstream agent, on the other hand, incorporates the cost of cleaning polluted water in her benefit function and trades (buys) extra water from upstream. Both agents try to maximize their utility to reach Pareto optimality. The utility from consuming water incorporates the negative externalities in order to account for the agents' behavior. We identify individually rational bargaining strategies for the two agents. Section 4 explains the characterization of negative externalities in the context of river sharing. Section 5 develops the 2-agent river bargaining problem with induced negative externalities, followed by a discussion of solutions in Sections 6-8.
## 4 Negative Externality
For a given pollution level, the cost incurred by the lower riparian state to extract water of a certain stated quality decreases with increased discharge from the upper riparian state. But the increase in discharge could result in flood damages beyond a certain level. From the perspective of the lower riparian state, the total cost needs to be minimized. In Fig. 1 the total cost curve due to negative externalities is represented. It is a convex curve, with the trough being the region preferred by the lower riparian states. As the pollution level increases, it can be surmised that the total cost curve will move upwards (dashed curves).
Even in the absence of any external pollution added to the stream by the upper riparian state, the lower riparian state would incur a certain cost for extracting water, due to the fact that a decreasing flow downstream increases the pollutant density. In Fig. 1 this is represented by the solid line. \\(C^{*}\\) represents the absolute minimum cost necessary to extract water from the river by the lower riparian state. Any increase in cost beyond this level can be attributed to the negative externality imposed by the upper riparian agent on the lower riparian agent.
Applying the rights-with-responsibility principle, the lower riparian agent would expect to be compensated for this negative externality. The following sections introduce these negative externalities into the bargaining framework for the river sharing problem.
## 5 Bargaining Framework and Modeling
In this section, we have used a general model for the river sharing problem between two agents (see Fig.2) with negative externalities. Consider a pair of agents {1,2} where 1 is upstream from 2 as depicted in Fig. 2. Let \\(e_{i},i\\in\\{1,2\\}\\) be the endowment of agent \\(i\\) to the river based on the water originating within their territory. It is assumed that \\(e_{1}\\) and \\(e_{2}\\) are spatially independent of each other. Let \\(x_{1}\\) and \\(x_{2}\\) be the amount of water consumed by the two agents.
Figure 1: Negative externalities imposed on the downstream agent in the river sharing problem
Figure 2: River sharing problem for 2 agents (the arrow shows the direction of water flow)

In the general river sharing problem discussed by Ambec and Ehlers (2008), the case of satiable agents where the marginal benefit of the downstream agent is higher than that of the upstream agent is studied. Without loss of generality, both agents (1 and 2) try to maximize their individual benefit. In order to do that, they will follow either ATS or ATI. ATS is the core lower bound for the agents (Ambec and Ehlers, 2008). However, they can maximize the total benefit by transferring water from the lower marginal benefit agent to the higher marginal benefit agent (upstream to downstream). And, with an appropriate mechanism, the downstream agent can transfer utility to the upstream agent in exchange for the water traded by the upstream agent. This motivates the upstream agent not to follow ATS and to look for an alternative negotiated treaty by which she can trade water with the downstream agent in order to maximize her individual benefit. In such a case, for a negotiated treaty and water trading it is necessary to have a market for cooperation where the agents can trade to maximize their total and individual utilities, with a bargaining mechanism for the two agents to trade (Wang, 2011).
When there is water traded between the agents, let \\(t_{1}\\) be the money transferred by agent 2 to agent 1 in exchange for the transferred water quantity \\(e_{1}-x_{1}\\). The utility functions for both the agents by consuming water is as given below.
\\[u_{1}(x_{1},t_{1})=b_{1}(x_{1})+t_{1} \\tag{1}\\]
\\[u_{2}(x_{2},t_{2})=b_{2}(x_{2})+t_{2} \\tag{2}\\]
In the above equations, the functions \\(b_{1},b_{2}\\colon\\Re_{+}\\to\\Re\\) are the benefit functions of agents 1 and 2. They are assumed to be strictly concave and differentiable at every point \\(x_{1},x_{2}>0\\), and further \\(x_{1}\\leq e_{1},x_{2}\\leq e_{1}-x_{1}+e_{2}\\). If agent 1 receives a payment \\(t_{1}\\), then for agent 2, \\(t_{2}=-t_{1}\\).
A \\(2-agent\\)_river sharing problem_ with trading can be represented by the tuple \\(<2,e,b>\\), where \\(e=(e_{1},e_{2})\\) and \\(b=(b_{1},b_{2})\\). The allocation vector \\((x_{1},x_{2},t_{1})\\in\\Re^{3}\\) specifies the amount of water allocated to the two agents and the money transferred from agent 2 to agent 1, such that \\(x_{1}+x_{2}\\leq e_{1}+e_{2}\\). Also, \\(t_{2}=-t_{1}\\) implies that there is no transaction cost in this model. Then Wang (2011) provides the following equilibrium condition
\\[b^{\\prime}_{1}(x_{1})=b^{\\prime}_{2}(x_{2})\\]

for bilateral trading to happen. This model does not incorporate the impact of the negative externalities discussed earlier.
Our aim is to incorporate the notion of negative externalities in the river sharing problem in a bargaining model. The negative externality due to pollution caused by an upstream agent to the downstream agent is considered here. The downstream agent faces an increased cost for extracting water of some stated quality. As discussed earlier, the ATS principle is usually preferred by the upstream agent in the belief that it will maximize her utility. Coupling this claimed right with the responsibility of not imposing any negative externality on the downstream agent can induce an environment for trading that maximizes the individual and total utility.
By assigning initial property rights using the ATS doctrine and invoking the _Coase theorem_ it is known that a bargaining model (market mechanism) would lead to efficient outcomes, provided that there are no private pieces of information (Patrick, 2001). The information elements of the river sharing problem are the individual endowments \\(e_{1}\\) and \\(e_{2}\\) and the pollution caused by the economic activities of the upstream agent. The endowments \\(e_{1}\\), and \\(e_{2}\\) are assumed to be the initial property rights of agents \\(1\\) and \\(2\\) respectively. It can be assumed to be public information as monitoring stations can be established and jointly operated or by other neutral bodies. Similarly the pollution levels can be monitored and measured using established standards. Hence, the \\(2-agent\\)_river sharing problem with negative externalities_ has clear initial property rights and with perfect information on endowments and pollution levels.
In order to encourage the agents to participate in negotiation and trading, the individual utilities of the agents should be more than what they can achieve individually. There should be _individual rationality_ for the agents; that is what motivates an agent to participate in trading and bargaining.
It is necessary to impose a cost component on the upstream agent for her responsibility in imposing negative externalities on the downstream agent. This cost (penalty), possibly formalized through treaty agreements, could take the form of additional discharge to the downstream agent (viewed as negative water for the upstream agent). This approach introduces both an incentive and a threat for the agents: the incentive is the transferable utility from the downstream agent to the upstream agent against the traded additional discharge, and the threat is the compensatory additional discharge from the upstream agent to the downstream agent for the negative externalities.
The aforementioned framework for addressing the negative externality in the 2-agent _river sharing problem_ can be expressed by the following mathematical structure.
The agents \\(i\\in\\{1,2\\}\\) have benefit functions \\(b_{i}\\colon\\Re_{+}\\to\\Re\\) such that \\(b_{i}(0)=0\\) and \\(b_{i}\\) is differentiable for all \\(x_{i}>0\\). Also there exists a satiation point \\(x_{i}^{*}\\) such that \\(b_{i}(x_{i}^{*})>b_{i}(x_{i})\\) for all \\(x_{i}\\neq x_{i}^{*}\\), \\({b^{\\prime}}_{i}(x_{i}^{*})=0\\) and \\({b^{\\prime\\prime}}_{i}(x_{i}^{*})<0\\).
For agent 2, let \\(c_{2}(E_{2},x_{2})\\colon\\Re_{+}^{2}\\to\\Re_{+}\\) be the cost of accessing water quantity \\(x_{2}\\) when \\(E_{2}\\) is the water available for consumption. The negative externality introduced by the first agent can be measured in water equivalent terms (negative water) that can be the equivalent of the cost to be incurred by the second agent for extracting water of a certain stated quality. This cost imposed by the first agent on the second agent will be a function of the quantity allocated and consumed by the first agent and her total endowment. We make the following assumption about the nature of this cost.
For a given level of pollution added by agent 1 to the stream, the negative externality cost imposed on the second agent is \\(c_{1}^{w}(e_{1},x_{1})\\colon\\Re_{+}^{2}\\to\\Re_{+}\\) with \\(0\\leq c_{1}^{w}<e_{1}\\). The nature of this function is such that for a given level of endowment \\(e_{1}=e\\), \\(c_{1}^{w^{\\prime}}(e,x_{1})=c_{1w}\\) and \\(c_{1}^{w}(e,0)=0\\).
This assumption introduces a direct negative cost (penalty) in terms of "negative water" for the negative externality caused by every unit of freshwater consumed by agent 1. This negative water is a penalty that reduces the upper bound on the amount of water she can consume.
In the mechanisms given in the earlier literature, agent 1 could consume up to \\(e_{1}\\), but under the new mechanism she can only consume \\(e_{1}-c_{1}^{w}(e_{1})\\). However, if agent 1 consumes zero water she can sell the whole of \\(e_{1}\\) to the downstream agent; if she consumes \\(e_{1}-c_{1}^{w}(e_{1})\\) she cannot trade any water. The penalty of \\(c_{1}^{w}(e_{1})\\) is the compensation to the downstream agent for the negative externality imposed on her.
Next, in Section 6, we define the 2-person bargaining problem, which incorporates the notion of negative water into the utility functions of the two agents.
## 6 Formulation
The river sharing problem with negative externalities can be modeled as a bargaining problem with a transferable utility (TU) market based mechanism. The bargaining model incorporating the notions of transferable utility, negative cost, and the associated opportunity cost for transferring a negative externality from one agent to another is explained below.
The two agent river bargaining problem is represented by the 4-tuple \\(<2,e,c_{w},\\alpha>\\). In a 2-agent bilateral river bargaining problem, \\(e=\\{e_{1},e_{2}\\}\\) are the water endowments of the agents within their territories, \\(c_{w}=\\{c_{1w}\\}\\) is the negative water penalty on the upstream agent (1) for generating negative externalities for the downstream agent, and \\(\\alpha\\) is the transferable side payment from agent 2 to agent 1 for every unit of water traded.
The river sharing literature widely accepts the assumption that the benefit function is strictly concave (Assumption 2). Eqs. 4 and 5 show the utility functions3 of agents 1 and 2, respectively, which give the agents the rationale to bargain or not in evaluating the allocation vector \\(\\mathbf{x}\\) and the transferable utility \\(\\alpha\\).
Footnote 3: As mentioned earlier, through this river bargaining problem we would like to arrive at a negotiated treaty of a sustainable nature that captures the negative externalities of the upstream agent and motivates both agents to participate in the trade so as to be better off. It has been noted that in a two-person bargaining problem one player’s payoff sometimes increases as the disagreement payoff of the other player decreases, which gives a player an incentive to have more favorable disagreement payoffs. In a river bargaining problem with negative externalities, the upstream agent does appear to have more favorable disagreement points due to the negative water penalty against the negative cost she imposes on the downstream flow. These notions are taken into account while writing the utility functions of the agents.
If agent 1 is consuming \\(x_{1}\\) units of water, then her utility is given by
\\[z_{1}(x_{1})=\\overbrace{a_{1}x_{1}-\\frac{b_{1}}{2}x_{1}^{2}}^{benefit}+\\overbrace{\\alpha_{1}(e_{1}-x_{1}(1+c_{1w}))}^{transferable\\;utility}+\\overbrace{\\beta_{1}x_{1}c_{1w}}^{opportunity\\;cost} \\tag{4}\\]
In Eq. 4, \\(c_{1w}\\) is the negative water penalty for the negative externality caused by agent 1 on agent 2. The term \\((e_{1}-x_{1}(1+c_{1w}))\\) is _surplus water_, or conserved water, which agent 1 can trade to agent 2 to generate extra revenue. If \\(\\alpha_{1}\\) is the value agent 1 associates with a unit of water, then the revenue expected from the trade is \\(\\alpha_{1}(e_{1}-x_{1}(1+c_{1w}))\\). The other benefit enjoyed by agent 1 is the avoided cost of bringing the water quality up to acceptable levels for the downstream agent. This is an opportunity cost not incurred by agent 1 and hence an additional benefit to her, given by \\(\\beta_{1}x_{1}c_{1w}\\). If agent 1 follows ATS then she has to bear the cost of pollution; however, in bilateral trading of water she can release some amount of polluted water and enjoy the associated opportunity-cost savings. Similarly, the utility function for agent 2 is represented as

\\[z_{2}(x_{2})=a_{2}x_{2}-\\frac{b_{2}}{2}x_{2}^{2}-\\alpha_{2}(x_{2}-e_{2}-x_{1}c_{1w})-\\beta_{2}x_{1}c_{1w}. \\tag{5}\\]

In Eq. 5, the term \\(\\alpha_{2}(x_{2}-e_{2}-x_{1}c_{1w})\\) represents the transferable utility and the term \\(\\beta_{2}x_{1}c_{1w}\\) is the cost incurred by agent 2 in extracting quality water. The _transferable utility_ and _opportunity cost_ terms have negative signs because these are actual expenses incurred by agent 2. Being rational, the agents seek to maximize their utilities, giving rise to a decision problem: each agent maximizes her utility by choosing an appropriate consumption level \\(x_{1}\\) or \\(x_{2}\\) and agreeing on the transferable utility \\(\\alpha\\). The analysis of this bargaining between the agents over \\(\\alpha=\\alpha_{1}=\\alpha_{2}\\) is discussed in the following section (§ 7).
## 7 Solution
The two agent river bargaining problem is a form of game with transferable utility. This two-person bargaining problem is characterized by three parameters: the disagreement payoff of agent 1, the disagreement payoff of agent 2, and the transferable utility on which the agents agree to trade (Myerson, 1991).
Given the structure of the two-person river bargaining problem, both agents face the following decision problems (Eqs. 6 and 7, for agents 1 and 2 respectively).
\\[\\max z_{1} \\tag{6}\\]
s. t. \\(x_{1}\\leq e_{1}\\)
at the same time agent 2 solves Eq. 7
\\[\\max z_{2} \\tag{7}\\]
s. t. \\(x_{2}\\geq e_{2}\\)
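For concreteness, the ingredients of these decision problems can be restated as code. The following minimal Python sketch is only a direct transcription of Eqs. 4 and 5, with all parameter values left as arguments:

```python
def z1(x1, a1, b1, alpha, beta1, c1w, e1):
    # Eq. 4: benefit + revenue from traded surplus + avoided clean-up cost.
    benefit = a1 * x1 - 0.5 * b1 * x1 ** 2
    trade_revenue = alpha * (e1 - x1 * (1 + c1w))
    avoided_cost = beta1 * x1 * c1w
    return benefit + trade_revenue + avoided_cost

def z2(x2, x1, a2, b2, alpha, beta2, c1w, e2):
    # Eq. 5: benefit - payment for traded water - cost of cleaning.
    benefit = a2 * x2 - 0.5 * b2 * x2 ** 2
    payment = alpha * (x2 - e2 - x1 * c1w)
    cleaning_cost = beta2 * x1 * c1w
    return benefit - payment - cleaning_cost
```

Each agent then maximizes her own function subject to the constraints of (6) and (7).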
### Characterization
_Axiom 1 (Individual Rationality)._ An agent will take part in bargaining for trading any economic commodity if and only if she is better off by participating.
Proposition 1: In the bilateral river sharing problem \\(<2,e,c^{w},\\alpha>\\) with negative externalities, if the utility function of the upstream agent is expressed by Eq. 4, the upstream agent will participate in the trade only if \\(\\frac{a_{1}+\\beta_{1}c_{1w}}{b_{1}}\\geq\\frac{e_{1}}{1+c_{1w}}\\).
Proof: By Axiom 1 of individual rationality, agent 1 will try to maximize her utility by maximizing the utility function given in Eq. 4. The first order necessary condition for maximization of Eq. 4 is as follows.
\\[\\frac{\\partial z_{1}}{\\partial x_{1}}=a_{1}-b_{1}x_{1}-\\alpha_{1}(1+c_{1w})+ \\beta_{1}c_{1w}=0\\]
\\[x_{1}=\\frac{a_{1}-\\alpha(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}\\]
By Assumption 2, the above stationary point maximizes her utility function. The constraint in the decision problem (6) gives rise to the following optimal consumption level (Eq. 8) for the first agent.
\\[x_{1}^{*}=\\min\\left\\{\\frac{a_{1}-\\alpha_{1}(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}, \\frac{e_{1}}{1+c_{1w}}\\right\\} \\tag{8}\\]
Agent 1 will participate in trading, rather than follow ATS, only when (by Axiom 1)
\\[z_{1}(x_{1})\\geq z_{1}(e_{1}/(1+c_{1w})) \\tag{9}\\]
Substituting the consumption levels in the utility function (eq. 4)
\\[a_{1}x_{1}-\\frac{b_{1}}{2}x_{1}^{2}+\\alpha_{1}(e_{1}-x_{1}(1+c_{1w}))+\\beta_{1}x_{1}c_{1w}\\geq a_{1}\\left(\\frac{e_{1}}{1+c_{1w}}\\right)-\\frac{b_{1}}{2}\\left(\\frac{e_{1}}{1+c_{1w}}\\right)^{2}+\\beta_{1}\\left(\\frac{e_{1}}{1+c_{1w}}\\right)c_{1w} \\tag{10}\\]
Simplifying the expression yields
\\[a_{1}\\left[x_{1}-\\frac{e_{1}}{1+c_{1w}}\\right]-\\frac{b_{1}}{2}\\left[x_{1}^{2}-\\left(\\frac{e_{1}}{1+c_{1w}}\\right)^{2}\\right]+\\alpha_{1}(e_{1}-x_{1}(1+c_{1w}))+\\beta_{1}c_{1w}\\left[x_{1}-\\frac{e_{1}}{1+c_{1w}}\\right]\\geq 0\\]

\\[a_{1}-\\frac{b_{1}}{2}\\left[x_{1}+\\frac{e_{1}}{1+c_{1w}}\\right]-\\alpha_{1}(1+c_{1w})+\\beta_{1}c_{1w}\\geq 0\\]
\\[\\frac{a_{1}-\\alpha_{1}(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}\\geq\\frac{1}{2}\\Big{[}x _{1}+\\frac{e_{1}}{1+c_{1w}}\\Big{]}\\]
Substitution of \\(x_{1}^{*}=\\frac{a_{1}-\\alpha_{1}(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}\\) from Eq. 8 gives
\\[\\frac{a_{1}-\\alpha_{1}(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}\\leq\\frac{e_{1}}{1+c_{1w}} \\tag{11}\\]
Hence,
\\[\\alpha_{1}\\geq-\\frac{b_{1}e_{1}}{(1+c_{1w})^{2}}+\\frac{\\beta_{1}c_{1w}}{1+c_{1 w}}+\\frac{a_{1}}{1+c_{1w}} \\tag{12}\\]
This is the lower limit (\\(\\alpha_{1}^{l}\\)) of the TU that is acceptable to agent 1. Since \\(\\alpha_{1}\\geq 0\\),
\\[\\frac{a_{1}+\\beta_{1}c_{1w}}{b_{1}}\\geq\\frac{e_{1}}{1+c_{1w}} \\tag{13}\\]
_Proposition 2._ In the bilateral river sharing problem \\(<2,e,c^{w},\\alpha>\\) with negative externalities if the utility function of the downstream agent is expressed by Eq. 5, the downstream agent will participate in trade only when \\(\\frac{a_{2}+\\beta_{2}c_{1w}}{b_{2}}\\geq e_{2}+\\frac{e_{1}c_{1w}}{1+c_{1w}}\\).
_Proof._ Agent 2 will likewise maximize her utility function (Eq. 5), subject to the constraint in (7). Given the endowments, if agent 2 is allocated \\(x_{2}\\), agent 1 will be allocated \\(x_{1}=e_{1}+e_{2}-x_{2}\\). Then the first order necessary condition for optimality gives rise to,
\\[\\frac{\\partial z_{2}}{\\partial x_{2}}=a_{2}-b_{2}x_{2}-\\alpha_{2}(1+c_{1w})+ \\beta_{2}c_{1w}=0\\]
\\[x_{2}=\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}} \\tag{14}\\]
The constraint on the decision problem of the second agent gives raise to
\\[x_{2}^{*}=\\max\\Big{\\{}e_{2}+c_{1w}x_{1},\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}\\Big{\\}} \\tag{15}\\]
By _Axiom 1_ of individual rationality, agent 2 will also participate in any trading only if
Substituting the consumption levels in the utility function Eq. 5
\\[a_{2}x_{2}-\\frac{b_{2}}{2}x_{2}^{2}-\\alpha_{2}(x_{2}-e_{2}-x_{1}c_{1w})-\\beta_{2 }c_{1w}(x_{1})\\]
\\[\\geq a_{2}(e_{2}+c_{1w}x_{1})-\\frac{b_{2}}{2}(e_{2}+c_{1w}x_{1})^{2}-\\beta_{2 }c_{1w}(x_{1})\\]
\\[a_{2}(x_{2}-(e_{2}+c_{1w}x_{1}))-\\frac{b_{2}}{2}(x_{2}^{2}-(e_{2}+c_{1w}x_{1})^ {2})-\\alpha_{2}(x_{2}-(e_{2}+x_{1}c_{1w}))\\geq 0.\\]
Simplifying yields
\\[\\frac{a_{2}-\\alpha_{2}}{b_{2}}\\geq\\frac{1}{2}(x_{2}+e_{2}+c_{1w}x_{1}).\\]
Substitution of \\(x_{2}^{*}=\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}\\) from Eq. 15 gives
\\[\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}\\geq e_{2}+c_{1w}x_{1}\\]
Substituting \\(x_{1}=e_{1}+e_{2}-x_{2}\\) in the above equation and replacing \\(x_{2}\\) by \\(x_{2}^{*}=\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}\\) yields
\\[\\frac{a_{2}-\\alpha_{2}(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}\\geq\\frac{e_{2}(1+c_{1 w})+e_{1}c_{1w}}{(1+c_{1w})}\\]
\\[\\alpha_{2}\\leq\\frac{a_{2}+\\beta_{2}c_{1w}}{1+c_{1w}}-\\frac{b_{2}}{1+c_{1w}} \\Big{(}e_{2}+\\frac{e_{1}c_{1w}}{1+c_{1w}}\\Big{)} \\tag{17}\\]
This is the upper limit (\\(\\alpha_{2}^{u}\\)) of the TU that is acceptable to agent 2. Since \\(\\alpha_{2}\\geq 0\\), hence
\\[\\frac{a_{2}+\\beta_{2}c_{1w}}{b_{2}}\\geq e_{2}+\\frac{e_{1}c_{1w}}{1+c_{1w}} \\tag{18}\\]
### Feasibility of agreement
_Lemma 1_. In the bilateral river sharing problem \\(<2,e,c^{w},\\alpha>\\) with negative externalities, in which the agent utilities are expressed by Eqs. 4 and 5, the two agents will have an agreement point only if
_Proof._ For trading to occur, the payment made by agent 2 must equal the revenue expected by agent 1, i.e.,
\\[\\alpha_{2}(x_{2}-e_{2}-c_{1w}x_{1})=\\alpha_{1}(e_{1}-x_{1}-c_{1w}x_{1})\\]
For agreement between the agents \\(\\alpha_{1}=\\alpha_{2}\\), and hence
\\[x_{2}-e_{2}-\\frac{e_{1}c_{1w}}{1+c_{1w}}=\\frac{e_{1}}{1+c_{1w}}-x_{1}.\\]
Substituting \\({x_{2}}^{*}\\) and \\({x_{1}}^{*}\\) and \\(\\alpha_{1}=\\alpha_{2}\\) yields
\\[\\frac{a_{2}-\\alpha(1+c_{1w})+\\beta_{2}c_{1w}}{b_{2}}-e_{2}-\\frac{e_{1}c_{1w}}{1+c_{1w}}=\\frac{e_{1}}{1+c_{1w}}-\\frac{a_{1}-\\alpha(1+c_{1w})+\\beta_{1}c_{1w}}{b_{1}}\\]
Simplifying the above expression yields
\\[\\alpha^{*}=\\frac{a_{1}+a_{2}-(e_{1}+e_{2})/(\\frac{1}{b_{1}}+\\frac{1}{b_{2}})+c _{1w}(\\beta_{1}+\\beta_{2})}{1+c_{1w}} \\tag{20}\\]
since \\(\\alpha\\geq 0\\), hence
\\[\\frac{a_{1}+a_{2}}{1+c_{1w}}+c_{1w}(\\beta_{1}+\\beta_{2})\\geq\\left(\\frac{e_{2}+ e_{1}}{1+c_{1w}}\\right)/\\left(\\frac{1}{b_{1}}+\\frac{1}{b_{2}}\\right) \\tag{21}\\]
### Sufficiency for agreement
_Corollary 1._ In the bilateral river sharing problem \\(<2,e,c^{w},\\alpha>\\) with negative externalities the agents will arrive at an agreement iff \\(\\alpha_{1}^{l}\\leq\\alpha^{*}\\leq\\alpha_{2}^{u}\\).
_Proof._ The proof follows directly from Propositions 1 and 2 and Lemma 1.
## 8 Numerical Illustration
The bargaining model is numerically illustrated by assuming the following parameters: \\(a_{1}=4\\), \\(b_{1}=.02\\), \\(a_{2}=2\\), \\(b_{2}=.04\\), \\(\\beta_{1}=.02\\), \\(\\beta_{2}=.2\\), \\(c^{w}=4\\), and \\(e_{1}=\\delta e_{2}\\), where \\(\\delta\\in\\{0,1,2,\\ldots,30\\}\\). The lower (\\(\\alpha_{1}^{l}\\)) and upper (\\(\\alpha_{2}^{u}\\)) bounds of the TU and the agreement point (\\(\\alpha^{*}\\)) derived earlier are plotted in Fig. 3. The specific case of \\(\\delta=30\\) is represented in Fig. 4. As the figures show, there exists a specific region of the endowment in which the agents are able to agree on the TU value and engage in trade. Fig. 3 shows the feasible region for bargaining between the two agents; it can be seen that for \\(\\delta=20\\) to \\(\\delta=30\\) a solution exists, which translates to a bargaining solution for the two agents. It can also be seen that for a given \\(e\\) and \\(\\delta\\) there is a unique bargaining solution. Fig. 4 is an enlarged view of Fig. 3 for \\(\\delta=30\\).
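The computation behind Figs. 3 and 4 can be reproduced with a short script. The sketch below evaluates the bounds of Eqs. 12 and 17 and the agreement point of Eq. 20 for the parameters above; the grid of \\(e_{2}\\) values is an arbitrary choice for illustration:

```python
import numpy as np

a1, b1, a2, b2 = 4.0, 0.02, 2.0, 0.04
beta1, beta2, c1w = 0.02, 0.2, 4.0

def bounds(e1, e2):
    # Eq. 12: lower limit of the TU acceptable to agent 1.
    alpha1_l = a1 / (1 + c1w) + beta1 * c1w / (1 + c1w) - b1 * e1 / (1 + c1w) ** 2
    # Eq. 17: upper limit of the TU acceptable to agent 2.
    alpha2_u = (a2 + beta2 * c1w) / (1 + c1w) \
               - b2 / (1 + c1w) * (e2 + e1 * c1w / (1 + c1w))
    # Eq. 20: candidate agreement point.
    alpha_star = (a1 + a2 - (e1 + e2) / (1 / b1 + 1 / b2)
                  + c1w * (beta1 + beta2)) / (1 + c1w)
    return alpha1_l, alpha2_u, alpha_star

delta = 30
for e2 in np.linspace(1.0, 40.0, 5):
    lo, hi, star = bounds(delta * e2, e2)
    print(f"e2={e2:6.2f}  alpha1_l={lo:7.3f}  alpha*={star:7.3f}  "
          f"alpha2_u={hi:7.3f}  agreement={lo <= star <= hi}")
```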
In Fig. 4, \\(A\\) represents \\(\\alpha_{2}^{u}\\) and \\(B\\) depicts the value of \\(\\alpha_{1}^{l}\\) along increasing values of \\(e_{2}\\) on the \\(x\\)-axis. The agreement point \\(\\alpha^{*}\\) is represented by \\(C\\); it can be obtained as a straight line passing through \\(e_{2}=k\\), where \\(k\\) is any value for which \\(\\alpha^{*}\\) lies between \\(\\alpha_{2}^{u}\\) (\\(A^{\\prime}\\)) and \\(\\alpha_{1}^{l}\\) (\\(B^{\\prime}\\)).
Fig. 4: Representation of agreement points and disagreement points for \\(\\delta=30\\)
## References
* Alcalde-Unzu et al. (2015) Alcalde-Unzu J, Gomez-Ra M, Molis E (2015) Sharing the costs of cleaning a river: the upstream responsibility rule. _Games and Economic Behavior_ 90:134-150.
* Ambec and Ehlers (2008) Ambec S, Ehlers L (2008) Sharing a river among satiable agents. _Games and Economic Behavior_ 64:35-50.
* Ambec and Ehlers (2014) Ambec S, Ehlers, L (2014) Regulation via the polluter-pays principle. _The Economic Journal_ 126 (593):884-906
* Ambec and Sprumont (2002) Ambec S, Sprumont Y (2002) Sharing a river. _Journal of Economic Theory_ 107:453-462.
* Ansink and Weikard (2009) Ansink E, Weikard HP (2009) Contested water rights. _European Journal of Political Economy_ 25:247-260.
* Barret (1994) Barret S (1994) Conflict and cooperation in managing international water resources. _Policy Research Working Paper 1303, World Bank, Washington_.
* Brink et al. (2012) Brink RVD, Laan GVD, Moes N (2012) Fair agreements for sharing international rivers with multiple springs and externalities. _Journal of Environmental Economics and Management_ 63:388-403.
* Chander and Tulkens (1997) Chander P, Tulkens H (1997) The core of an economy with multilateral environmental externalities. _International Journal of Game Theory_ 26:379-401.
* Coase (1960) Coase R (1960) The problem of social cost. _The Journal of Law & Economics_ 1:1-44.
* Dong et al. (2012) Dong B, Ni D, Wang Y (2012). Sharing a polluted river network. _Environmental and Resource Economics_ 53:367-387.
* Giordano and Wolf (2003) Giordano MA, Wolf AT (2003) Sharing waters: Post-Rio international water management.
* Kilgour and Dinar (1995) Kilgour DM, Dinar A (1995) Are stable agreements for sharing international river water now possible? _Policy Research Working Paper 1474, World Bank, Washington_.
* Myerson (1991) Myerson RB (1991) _Game Theory: Analysis of Conflict_. Cambridge, MA: Harvard UP.
* Ni and Wang (2007) Ni D, Wang Y (2007). Sharing a polluted river. _Games and Economic Behavior_, 60:176-186.
* Patrick (2001) Patrick WS (2001) The Coase theorem, private information and the benefits of not assigning property rights. _European Journal of Law and Economics_, 11:23-28.
* Rebille and Richefort (2012) Rebille Y, Richefort L (2012) Sharing water from many rivers. _hal-00678997, HAL_, Working Paper.
* Wang (2011) Wang Y (2011) Trading water along a river. _Mathematical Social Science_, 61:124-130. | This article is addressing the problem of river sharing between two agents along a river in the presence of negative externalities. Where, each agent claims river water based on the hydrological characteristics of the territories. The claims can be characterized by some international framework (principles) of entitlement. These international principles are appears to be inequitable by the other agents in the presence of negative externalities. The negotiated treaties address sharing water along with the issue of negative externalities imposed by the upstream agent on the downstream agents. The market based bargaining mechanism is used for modeling and for characterization of agreement points.
**Keywords:** river sharing problem, bargaining, externality, ATS, ATI.
Shatrughan Modi
Dept. of Computer Science and Engineering
Thapar University
Patiala-147004, India. Dr. Seema Bawa
Dept. of Computer Science and Engineering
Thapar University
Patiala-147004, India.
## 1 Introduction
We cannot imagine our life without coins. We use coins in our daily life almost everywhere, such as in banks, supermarkets, and grocery stores. They have been an integral part of our day to day life, so there is a basic need for a highly accurate and efficient automatic coin recognition system. Besides their daily uses, coin recognition systems can also be used for research purposes by institutes or organizations that deal with ancient coins. There are three types of coin recognition systems available in the market, based on different methods:
* Mechanical method based systems
* Electromagnetic method based systems
* Image processing based systems
The mechanical method based systems use parameters like the diameter or radius, thickness, weight and magnetism of the coin to differentiate between coins. But these parameters cannot be used to differentiate between the different materials of the coins. This means that if we provide two coins, one original and one fake, having the same diameter, thickness, weight and magnetism but made of different materials, to a mechanical method based coin recognition system, it will treat both coins as original; so these systems can be fooled easily.
The electromagnetic method based systems can differentiate between different materials because in these systems the coins are passed through an oscillating coil at a certain frequency, and different materials bring different changes in the amplitude and direction of the frequency. These changes, together with the other parameters like diameter, thickness, weight and magnetism, can be used to differentiate between coins. The electromagnetic method based coin recognition systems improve the accuracy of recognition, but they can still be fooled by some game coins.
In recent years, coin recognition systems based on images have also come into the picture. In these systems, first of all the image of the coin to be recognized is taken, either by camera or by scanning. These images are then processed using various image processing techniques such as the FFT [1, 2], Gabor wavelets [3], DCT, edge detection, segmentation, image subtraction [4], and decision trees [5], and various features are extracted from the images. Different coins are then recognized based on these features.
## 2 Related Work
In 1992 [6], _Minoru Fukumi et al._ presented a rotation-invariant neural pattern recognition system for coin recognition. They performed experiments using 500 yen and 500 won coins. In this work they created a multilayered neural network and a preprocessor consisting of many slabs of neurons to provide rotation invariance. They further extended their work in 1993 [7] and tried to achieve 100% accuracy for coins, using BP (Back Propagation) and GA (Genetic Algorithm) to design the neural network for coin recognition. _Adnan Khashman et al._ [8] presented an Intelligent Coin Identification System (ICIS) in 2006. ICIS uses a neural network and pattern averaging to recognize coins rotated at various degrees. It shows 96.3% correct identification, i.e., 77 out of 80 variably rotated coin images were correctly identified. Mohamed Roushdy [9] used the Generalized Hough Transform to detect coins in images.
In our work we have combined the Hough Transform and pattern averaging to extract features from images; these features are then used to recognize the coins. Section 3 gives the implementation details, Section 4 presents the training and testing data, Section 5 provides the experimental results, and Section 6 concludes the work.
## 3 Implementation Details
The coin recognition process has been divided into seven steps. The architecture of the Automated Coin Recognition System is shown in Fig. 1.
### Acquire RGB Coin Image
This is the first step of the coin recognition process, in which the RGB coin image is acquired. Indian coins of denominations ₹1, ₹2, ₹5 and ₹10 were scanned from both sides at 300 dpi (dots per inch) using a color scanner, as shown in Fig. 2. Five coins of each denomination were scanned.
### Convert RGB Coin Image to Grayscale
The image obtained from the first step is a 24-bit RGB image. Processing colored images takes more time than processing grayscale images, so to reduce the time required in further steps the 24-bit RGB image is converted to an 8-bit grayscale image.
### Remove Shadow of Coin from Image
In this step, shadow of the coin from the Grayscale image is removed. As all the coins have circular boundary. So, for removing shadow Hough Transform for Circle Detection [9] is used. For this first of all edge of the coin is detected using Sobel Edge Detection. Following is the pseudo code for Hough Transform:
* Define a 3-dimensional Hough Matrix of (M \\(\\times\\) N \\(\\times\\) R), where M, N is the height and width of the Grayscale image and R is the no. of radii for which we want to search.
* For each edge pixel (x, y) and a particular radius r, search for circle center coordinates (u, v) that satisfy the equation (x-u)\\({}^{2}\\)+(y-v)\\({}^{2}\\)=r\\({}^{2}\\), and increase the count in the Hough Matrix at (u, v, r) by 1.
* Repeat step 2 for other radii.
* Find the maximum value in the Hough Matrix. The corresponding indices give the center coordinates and radius of the coin.
Now based on the center coordinates and radius, the coin is extracted from the background. So, in this way the shadow of the coin is removed. Fig. 3 shows a coin with shadow and Fig. 4 shows the coin without shadow after applying Hough Transform.
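The pseudo code above can be sketched in Python as follows. This is a brute-force accumulator using only NumPy; the resolution of 180 angles per radius is an arbitrary choice:

```python
import numpy as np

def hough_circles(edges, radii):
    """Brute-force circle Hough transform over a boolean M x N edge map.
    Returns the center (u, v) and radius with the most votes."""
    M, N = edges.shape
    H = np.zeros((M, N, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edges)                        # edge pixel coordinates
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for ri, r in enumerate(radii):
        for x, y in zip(xs, ys):
            # Candidate centers lie on a circle of radius r around (x, y).
            u = np.round(x - r * np.cos(thetas)).astype(int)
            v = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (u >= 0) & (u < N) & (v >= 0) & (v < M)
            np.add.at(H, (v[ok], u[ok], ri), 1)       # cast votes at (v, u, r)
    v, u, ri = np.unravel_index(np.argmax(H), H.shape)
    return u, v, radii[ri]
```

Pixels farther than the detected radius from the center can then be set to the background value, which removes the shadow.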
### Crop and Trim the Image
After shadow removal the image is cropped so that only the coin remains in the image. After cropping, the coin image is trimmed to the uniform dimension of \\(100\\times 100\\).
### Generate Pattern Averaged Image
The 100\\(\\times\\)100 trimmed coin images become the input for the trained neural network. But to reduce the computation and complexity in the neural network these images are further reduced to size 20\\(\\times\\)20 by segmenting the image using segments of size 5\\(\\times\\)5 pixels, and then taking the average of pixel values within the segment. This can be represented by mathematical equations, as shown in (1) and (2):
\\[Sum_{i}=\\sum_{j=1}^{5}\\sum_{k=1}^{5}P_{ijk} \\tag{1}\\]

\\[SegAvg_{i}=Sum_{i}/25 \\tag{2}\\]

where \\(P_{ijk}\\) is the value of the pixel at position \\((j,k)\\) within segment \\(i\\).
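Under the segment-averaging scheme of Eqs. 1 and 2, the reduction is a plain 5 x 5 block mean. A minimal NumPy sketch:

```python
import numpy as np

def pattern_average(img100):
    # Reduce a 100 x 100 grayscale image to 20 x 20 by averaging
    # non-overlapping 5 x 5 segments (Eqs. 1 and 2).
    return img100.reshape(20, 5, 20, 5).mean(axis=(1, 3))
```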
### Generate Feature Vector and pass it as Input to Trained NN
In this step, a feature vector is generated from the pattern averaged coin image. The 20\\(\\times\\)20 image generates a feature vector of dimension 400\\(\\times\\)1, i.e., all the pixel values are put into a single-column vector. Then this feature vector of 400 features is passed as input to the trained neural network. Fig. 7 gives the architecture of the Trained Neural Network.
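The original system was built in MATLAB; as an illustration only, an equivalent 400-input, 14-class classifier could be sketched in Keras as below. The hidden-layer size is an assumption, not the topology of Fig. 7:

```python
import tensorflow as tf
from tensorflow.keras import layers

def to_feature_vector(img20):
    # Flatten the 20 x 20 pattern-averaged image into 400 features.
    return img20.reshape(400) / 255.0

def build_coin_net(hidden=50, num_classes=14):
    model = tf.keras.Sequential([
        layers.Dense(hidden, activation="sigmoid", input_shape=(400,)),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```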
### Give Appropriate Result according to the Output of Neural Network
Coins are classified into 14 categories as shown in Fig. 2. The neural network classifies the given coin image into one of these classes, and based on the classification the result indicates the denomination to which the given coin belongs, _i.e._, if the coin is classified into one of the classes (i) to (iv), we say it is a ₹1 coin, and similarly for the other classes. Fig. 8 shows a snapshot of the developed tool, in which a ₹10 coin is recognized.
## 4 Training and Testing Data
Five samples of each denomination of Indian coins were scanned from both sides as shown in Fig. 2, resulting in 10 images per coin. For ₹1, ₹2 and ₹5, two types of coins are in circulation, so for each of these denominations there are 20 images, of which 10 (5 head and 5 tail) are of the 1\\({}^{\\text{st}}\\) type and the other 10 (5 head and 5 tail) are of the 2\\({}^{\\text{nd}}\\) type. After preprocessing, the resulting 100\\(\\times\\)100 images were rotated to 5\\({}^{\\circ}\\), 10\\({}^{\\circ}\\), 15\\({}^{\\circ}\\), \\(\\ldots\\), 355\\({}^{\\circ}\\), _i.e._, a total of 72 rotated images were generated for each image. This gives 20\\(\\times\\)72 = 1440 images for each of ₹1, ₹2 and ₹5 and 10\\(\\times\\)72 = 720 images for ₹10, i.e., 1440\\(\\times\\)3 + 720 = 5040 images in total. The neural network was trained by randomly selecting images from these 5040 images: 90% were used for training, 5% for testing, and the remaining 5% for validation.
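A hedged sketch of how such a data set could be assembled (the 72-image convention follows the text; the interpolation mode and random seed are our assumptions, and the paper's own pipeline was MATLAB-based):

```python
import numpy as np
from scipy import ndimage

def rotations(img: np.ndarray, step_deg: int = 5):
    """Original image plus rotations at 5, 10, ..., 355 degrees:
    72 images per source image, as described in the text."""
    out = [img]
    for angle in range(step_deg, 360, step_deg):
        out.append(ndimage.rotate(img, angle, reshape=False,
                                  mode="nearest"))
    return out

# 90 / 5 / 5 split of the 5040 images into training, test, validation
rng = np.random.default_rng(0)
indices = rng.permutation(5040)
n_train, n_test = int(0.9 * 5040), int(0.05 * 5040)
train = indices[:n_train]
test = indices[n_train:n_train + n_test]
validation = indices[n_train + n_test:]
```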
Figure 7: Architecture of Trained Neural Network
## 6 Conclusion
An ANN-based automated coin recognition system has been developed using MATLAB. In this system, the images are first preprocessed and then fed to the trained neural network. The neural network was trained, tested and validated using 5040 sample images of denominations ₹1, ₹2, ₹5 and ₹10 rotated at 5\\({}^{\\circ}\\), 10\\({}^{\\circ}\\), 15\\({}^{\\circ}\\), \\(\\ldots\\), 355\\({}^{\\circ}\\). Experiments show that the system achieves a 97.74% correct recognition rate over the 5040 sample images, _i.e._, only 2.26% of images are misrecognized, which is quite encouraging.
## References
* [1] Cai-ming Chen, Shi-qing Zhang, Yue-fen Chen, \"A Coin Recognition System with Rotation Invariance,\" 2010 International Conference on Machine Vision and Human-machine Interface, 2010, pp. 755-757.
* [2] Thumwarin, P., Malila, S., Janthawong, P. and Pibulwej, W., \"A Robust Coin Recognition Method with Rotation Invariance\", 2006 International Conference on Communications, Circuits and Systems Proceedings, 2006, pp. 520-523.
* [4] Gupta, V., Puri, R., Verma, M., \"Prompt Indian Coin Recognition with Rotation Invariance using Image Subtraction Technique\", International Conference on Devices and Communications (ICDeCom), 2011
* [5] P. Davidsson, "Coin classification using a novel technique for learning characteristic decision trees by controlling the degree of generalization", Ninth International Conference on Industrial & Engineering Applications of Artificial Intelligence & Expert Systems, 1996.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline
**Sr. No.** & **Coin Type** & **Images correctly recognized / Total no. of images** & **Recognition Rate (in \\%)** \\\\ \\hline
1 & ₹1 & 1412/1440 & 98.05 \\\\ \\hline
2 & ₹2 & 1426/1440 & 99.03 \\\\ \\hline
3 & ₹5 & 1368/1440 & 95.00 \\\\ \\hline
4 & ₹10 & 720/720 & 100.00 \\\\ \\hline
\\multicolumn{2}{|c|}{**Total**} & 4926/5040 & 97.74 \\\\ \\hline
\\end{tabular}
\\end{table}
Table 1: Recognition Results
Figure 11: Confusion Matrix of NN
* [6] Fukumi M. and Omatu S., \"Rotation-Invariant Neural Pattern Recognition System with Application to Coin Recognition\", IEEE Trans. Neural Networks, Vol.3, No. 2, pp. 272-279, March, 1992.
* [7] Fukumi M. and Omatu S., \"Designing A Neural Network For Coin Recognition By A Genetic Algorithm\", Proceedings of 1993 International Joint Conference on Neural Networks, Vol. 3, pp. 2109-2112, Oct, 1993.
* [8] Khashman A., Sekeroglu B. and Dimililer K., "Intelligent Coin Identification System", Proceedings of the IEEE International Symposium on Intelligent Control (ISIC'06), Munich, Germany, 4-6 October 2006, pp. 1226-1230.
* [9] Roushdy, M., "Detecting Coins with Different Radii based on Hough Transform in Noisy and Deformed Image", GVIP Journal, Volume 7, Issue 1, April 2007.
Evaluation of Mass Movement Hazard in the Shoreline of the Intertidal Complex of El Grove (Pontevedra, Galicia)
Joaquin Andres Valencia Ortiz, Carlos Enrique Nieto and Antonio Miguel Martinez-Grana
Department of Geology, Faculty of Sciences, University of Salamanca, Plaza de los Caidos s/n, 37008 Salamanca, Spain; [email protected] (C.E.N.); [email protected] (A.M.M.-G.)
of mass movements [9; 11; 12; 13]. This result approximates the spatial probability of generating a terrain instability process, manifested as a displacement over a surface, and it depends directly on the method employed and the scale of work.
For the construction of this type of correlation, different methods are used, grouped according to their scale, availability, quality, data accuracy, susceptibility resolution and the type of product to be obtained, including heuristic, stochastic, statistical, and deterministic methods [14; 15]. As Cascini [16] states, to obtain cartographic susceptibility models a suitable working scale must be adopted according to the proposed method; among the different scales described, cartography at an intermediate level of detail (scale 1:25,000 or 1:10,000) calls for stochastic or statistical methods. A statistical method is adopted for the present study, since its basis is to establish combinations between inherent factors and mass movements statistically [14]. To perform this correlation, the present study uses the bivariate algorithm, which estimates the "weights of evidence" densities between each of the evaluated attributes of the established inherent factors and the mass movements [9; 17; 18].
With the aspects described above, determining the occurrence of a terrain instability process in space and time means moving from the concept of susceptibility to that of hazard, which adds temporal conditions associated with an external stimulus (trigger) such as heavy rainfall, rapid thawing, changes in water level, volcanic eruptions or strong ground shaking by an earthquake [19; 20]. In the specific case of the present study, rainfall and seismic activity are the reference points for estimating hazard conditions in the study area. Rainfall has been described as one of the most influential factors in the generation of mass movements [21; 22; 23; 24; 25], especially intense precipitation episodes, of long or short duration, which increase interstitial pressure or cause loss of cohesion between soil particles [26; 27]. This effect is modulated by local factors such as slope, soil type, vegetation, lithology, morphology, kinematics, and the material involved, among others, which increase or decrease the action of rainfall on the soil [21; 28; 29]. The second trigger, seismic activity, has been recognized worldwide as a precursor of unstable terrain, causing considerable human and infrastructure losses both through the seismic activity itself and through the mass movements it activates [30; 31]. Its evaluation in the present study is limited to Peak Ground Acceleration (PGA) values; however, seismic activity as a trigger of mass movements remains a topic under study, because locally it can be identified as the phenomenon that triggers the collapse of a slope, whereas at a more regional level this perspective changes due to inherent terrain factors and the physical properties present at the time of the seismic activity [20; 32].
With the present information, the evaluation of mass movement hazard conditions is an important point in the knowledge of risk from a non-structural perspective. The creation of a susceptibility model, and subsequently a hazard model, can provide useful information for local or regional decision-makers, who can use this study as a reference in land use planning and the preservation of natural resources. For this reason, the development of this type of study is in line with the objectives set out in the Sendai Framework for Disaster Risk Reduction, 2015-2030 [33].
## 2 The Test Area
The study region is located in Northwestern Spain, in the Pontevedra region, and includes the municipalities of A Illa de Arousa, Cambados, Meano, O Grove, Ribadumia, Sanxenxo, Vilagarcia de Arousa and Vilanova de Arousa (Figure 1). The region has an area of 95.3 km\\({}^{2}\\), an altitudinal difference of 196 m (0-196 AMSL), with a slope between 0\\({}^{\\circ}\\) and 84\\({}^{\\circ}\\), an average of 6\\({}^{\\circ}\\) and a dominant range of 0\\({}^{\\circ}\\) to 5\\({}^{\\circ}\\), with surfaces that are mostly oriented towards the west (240\\({}^{\\circ}\\) to 300\\({}^{\\circ}\\)). Geologically (Figure 2A), the region presents rocks originating between the Precambrian and Silurian periods, corresponding to alkali feldspar granite (PcSGf), biotitic granodiorite with amphibole (PcSGd), granodiorite with feldspar megacrystals (PcSGdf), coarse-grained biotite-amphibole late granodiorite (PcSGdt), paragneisses with plagioclase and biotite, micaschists (PcSPG), micaschists, quartz schists and paragneisses (PcSM), quartzites (PcSc), feldspar granite (PcSG), calcsilicate rocks and para-amphibolites (PcSC) and quartz dykes, aplites and pegmatites (Sd) [34; 35; 36; 37].
Figure 1: Geographical location of the study area in the region of Pontevedra, Spain.
Figure 2: (**A**) Map of geological units for the study region at a scale of 1:50,000, modified from sheets 151, 152, 184, and 185 of the Geological Institute of Spain [34; 35; 36; 37]. (**B**) Map of geomorphological units for the study region, modified from Martínez-Graña [38].
In turn, there are deposits from the Quaternary period such as the debris cone (Qcd), old beaches and coastal rasa (Qpa), Pleistocene terrace deposits (Qtp), colluvial deposits (Qc), beach sands (Qap), dune sands (Qad), alluvial-colluvial deposits (Qac) and alluvial deposits (Qa) [34; 35; 36; 37]. From the geomorphological aspect (Figure 2B), the region is characterized by marine environment units such as aeolian dunes with vegetation (Ad), abrasion platform (Ap), beach (B), sand bars (Sb) and sandstone (S); the denudational environment presents residual relief (Rr), colluvial (C), glacis (G) and piedmont (F) units; and in the fluvial and aeolian environments we find the alluvial (A), alluvial fan (Af) and current aeolian dunes (Cad), respectively [38].
## 3 Methodology and Materials
The evaluation of mass movement hazard as a non-structural measure is a fundamental element in disaster risk management, especially in land use planning and natural resource protection. Mass movements are defined as the downslope displacement of a mass of rock, debris, or soil, generating serious impacts on anthropic and environmental elements [39]. To evaluate the hazard condition due to mass movements in the study area, two procedures were carried out following the guidelines established by the United Nations Office for Disaster Risk Reduction [40]. First, the susceptibility to mass movements (spatial condition) was calculated, correlating the inherent or conditioning variables with the dependent variable by means of a statistical algorithm. Second, the result of the susceptibility calculation was crossed, by means of a combination matrix, with the triggering factors (rainfall and earthquake--temporal condition), to express the result in terms of mass movement hazard (spatiotemporal relationship). For the evaluation of each of these conditions, the present study followed the flow diagram described in Figure 3.
### Calculation of Susceptibility to Mass Movement
Based on the flow diagram designed (Figure 3), we begin by calculating the susceptibility to mass movements. This calculation was made based on the bivariate statistical method. This method is part of the statistical methods that make it possible to establish combinations between the inherent elements (conditioning factors) of the surfaces and the mass movements (dependent) that have historically been generated in a region, resulting
Figure 3: Flow diagram for the calculation of mass movement susceptibility and hazard. DEM: digital elevation model, ROC: receiver operating characteristic.
in a quantitative predictive model [14]. This model discriminates regions by their spatial probability (susceptibility) of generating an event, under conditions similar to those of the inherent elements entered into the bivariate statistical calculation [14]. To establish these relationships between the inherent elements and the mass movements, the surfaces where mass movements occur were evaluated, considering all the variables associated with the process, especially geology, geomorphology, land cover, and land use, which are relevant factors in the generation of mass movements [6; 7; 9; 12].
The bivariate statistical method is based on Bayes' theorem, which gives the probability that an event A occurs given that an event B has occurred [41; 42]. From this relationship, an association is sought, by means of occurrence densities, between the inherent elements and the mass movements through a spatial correlation; this correlation generates numerical values that vary with the degree of association, also called "weights of evidence" [9; 17; 18]. In other words, the aim is the characterization and subsequent correlation of the geoenvironmental variables directly associated with ground stability problems, in order to determine the spatial probability of the regions susceptible to a displacement of a mass of rock, debris, or soil. For this calculation of the weights of evidence, a database with the historical record of mass movements was elaborated. These movements were obtained by photointerpretation of multitemporal images available on the Google Earth® platform, with a resolution sufficient to determine the type, geometry, distribution, and material of the mass movements [43; 44]. For the characterization of the movements, the descriptions made by Varnes [10] were considered, together with the newer aspects described by Hungr et al. [45] and Skempton and Hutchinson [46] for landslide- and flow-type movements, which can be classified as superficial or deep. In turn, for the construction of the geometric elements of the mass movements, as well as the inherent variables, cartographic inputs were acquired from the National Geographic Institute of Spain (IGN) at a scale of 1:25,000, together with the Digital Elevation Model (DEM) with a resolution of 1 m [47]. These inputs provide good resolution for the 1:25,000 scale on which the present work is based. The construction of the layers and the calculations were developed on the ArcGIS V10.8 software platform.
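For a factor class \\(B\\) and the set of movement pixels \\(D\\), a common formulation of these weights is \\(W^{+}=\\ln[P(B|D)/P(B|\\bar{D})]\\) and \\(W^{-}=\\ln[P(\\bar{B}|D)/P(\\bar{B}|\\bar{D})]\\), with contrast \\(C=W^{+}-W^{-}\\). The following Python sketch is ours, not the ArcGIS workflow used in the study, and assumes the factor and movement rasters are aligned NumPy arrays:

```python
import numpy as np

def weights_of_evidence(factor: np.ndarray, movements: np.ndarray):
    """Positive weight W+, negative weight W- and contrast C for every
    class of a categorical factor raster, against a binary raster of
    mass movements (1 = movement pixel). Sketch only: a production GIS
    version would also handle no-data masks."""
    eps = 1e-9                                  # guards against log(0)
    d = movements.astype(bool)
    n_d, n_nd = d.sum(), (~d).sum()
    table = {}
    for cls in np.unique(factor):
        b = factor == cls
        p_b_d = (b & d).sum() / max(n_d, 1)     # P(B | D)
        p_b_nd = (b & ~d).sum() / max(n_nd, 1)  # P(B | not D)
        w_plus = np.log((p_b_d + eps) / (p_b_nd + eps))
        w_minus = np.log((1 - p_b_d + eps) / (1 - p_b_nd + eps))
        table[cls] = (w_plus, w_minus, w_plus - w_minus)
    return table

def lsi(factors, tables):
    """Landslide Susceptibility Index: pixel-wise sum over all factors
    of the W+ of the class each pixel belongs to."""
    total = np.zeros(factors[0].shape)
    for f, table in zip(factors, tables):
        w = np.zeros(f.shape)
        for cls, (w_plus, _, _) in table.items():
            w[f == cls] = w_plus
        total += w
    return total
```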
Construction of Geoenvironmental Variables
In the construction of the inherent variables (nine variables), lithology, distance to faults, morphogenesis, slope, orientation, rugosity, relative relief, land cover, and land use were correlated with the mass movement inventory to generate the weights of evidence. The first two variables (lithology and distance to faults) correspond to the geological attribute of the study region, acquired from the Geological Institute of Spain at a scale of 1:50,000 [34; 35; 36; 37]. From the geomorphological attribute, the morphogenetic variable was acquired [38], which describes the endogenous and exogenous factors characteristic of terrestrial dynamics, together with the degradation and aggradation of the terrain resulting from weathering, erosion, and transport processes over time [48]. In turn, for the geomorphological attribute, the morphometric variables of slope, orientation, rugosity, and relative relief were constructed through DEM processing. For the land cover attribute, the variable describing the current state of land cover, developed with the CORINE Land Cover methodology [49], was used. This variable is an important factor since, depending on the type of cover, it offers better soil protection and contributes to the dissipation of rainfall energy [50]. This input was acquired from the IGN and corresponds to the European CORINE Land Cover (CLC) project, which is composed of 44 classes and has a 2018 reference version [51]. Based on the reference levels created by the CORINE Land Cover methodology, the present study took level 3 and below as a variable in the susceptibility calculation, in accordance with the working scale of this research. For the land use attribute, the cartography generated by the Spanish Land Use Information System (SIOSE) was used, a high-resolution input based on the integration of highly detailed geospatial sources (SIOSE AR). This input corresponds to the characteristics of land occupation for the year 2017 (most current version) [52] and was taken as a variable in the susceptibility calculation.
With the inputs acquired and each variable constructed, the calculation of susceptibility to mass movements was carried out. The final susceptibility model adopted five categories (very low, low, medium, high, and very high) for the classification of areas with a natural predisposition to generate an instability process according to the inherent variables evaluated. For the validation of the susceptibility calculation, two different tests were performed. The first test used the receiver operating characteristic (ROC) curve, from which the area under the curve (AUC) was estimated. This test consists of calculating and graphically representing sensitivity versus 1 \\(-\\) specificity for a binary classifier system as the discrimination threshold is varied; the AUC is then estimated as a statistical measure of the success and prediction rates of each model [53; 54]. The second test consisted of calculating accuracy, precision, recall, and harmonic mean (F1-score) values based on the degree of classification obtained from a confusion matrix, which compares, in tabular form, the data generated by the model prediction against the actual data in order to understand the model performance [55; 56].
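Both tests can be reproduced with a few lines of code. The sketch below is our illustration (not the software used in the study) and assumes the LSI scores and the movement labels have been flattened to 1-D arrays:

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1-score for binary classes
    (1 = unstable, 0 = stable), computed from the confusion matrix.
    Assumes both classes are present, so no division by zero occurs."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def roc_auc(scores, labels):
    """Area under the ROC curve: sort pixels by decreasing LSI and
    integrate the true-positive rate against the false-positive rate."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    order = np.argsort(-scores)
    lab = labels[order]
    tpr = np.concatenate(([0.0], np.cumsum(lab) / max(lab.sum(), 1)))
    fpr = np.concatenate(([0.0], np.cumsum(~lab) / max((~lab).sum(), 1)))
    return float(np.trapz(tpr, fpr))
```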
### Mass Movement Hazard Calculation
Based on the susceptibility model, the hazard map was constructed, incorporating the triggering factors of mass movements. This step is necessary because the susceptibility model determines the spatial aspects of the movements, while the triggering action is a phenomenon associated with rainfall, earthquakes, and anthropic activity, which takes the expression into spatiotemporal terms [20; 57; 58]. For the present study, only rainfall and earthquakes were taken as mass movement triggers. The rainfall input was acquired from the Centro de Estudios y Experimentacion de Obras Publicas (CEDEX), whose hydrographic studies center, attached to the Spanish Ministry of Transport and Sustainable Mobility, generated a model of the impact of climate change on maximum precipitation in Spain (2021, 2022). The resulting layers of this study present the rates of change in maximum annual daily precipitation generated with an SQRT-R model for return periods (Tr) of 10, 100, and 500 years [59]. For the seismic trigger, the Seismic Hazard Map of Spain 2015 was acquired, with Peak Ground Acceleration (PGA) values for a return period of 475 years, found on the portal of the National Geographic Institute of Spain [60]. For the final part of this study, the mass movement hazard map was constructed by means of a combination matrix between the susceptibility model and the rainfall and seismic triggers. The combination of these three elements describes the spatiotemporal instability conditions for the study region, classified into four hazard categories (low, medium, high, and very high).
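The matrix logic can be sketched as a simple lookup. Note that the 5\\(\\times\\)5 hazard matrix below is an illustrative choice of ours, not the exact matrix adopted in the study, and the averaging of the two triggers is likewise an assumption:

```python
import numpy as np

# Rows = susceptibility class (1-5), columns = combined trigger level
# (1-5); values 1 = low ... 4 = very high hazard. Illustrative only.
HAZARD = np.array([
    [1, 1, 1, 2, 2],
    [1, 1, 2, 2, 3],
    [1, 2, 2, 3, 3],
    [2, 2, 3, 3, 4],
    [2, 3, 3, 4, 4],
])

def hazard_map(susc, rain_cls, pga_cls):
    """Combine a 1-5 susceptibility raster with 1-5 rainfall and PGA
    class rasters (as in Table 2) into a 1-4 hazard raster."""
    trigger = np.clip(np.rint((rain_cls + pga_cls) / 2), 1, 5).astype(int)
    return HAZARD[susc - 1, trigger - 1]
```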
## 4 Results
For the evaluation of the mass movement hazard, the starting point was the construction of a database of historical mass movements. For its construction, we first consulted the mass movements registered in the database of the Geological and Mining Institute of Spain (BDMOVES), in which no events are reported for the study region. The second step was therefore a cartographic construction of the mass movements by photointerpretation of satellite images contained in the Google Earth® platform. For the construction of each of the geometries where mass movement was evidenced, a total of 22 satellite images from 1985 to 2023 were analyzed. This sequence of images presented good spatial resolution for the identification of the respective movements, but in some sectors, due to atmospheric conditions, especially cloudiness, it was not possible to clearly appreciate the surfaces in the analyzed image. Another limitation found in the photointerpretation process is the size of the image, which for some sequences did not cover the study region, overlapping with another image with a different acquisition date and introducing a bias in the region without a coherent date. Taking these limitations into account, this process resulted in a record of 60 mass movements with a total area of 0.055 km\\({}^{2}\\), less than 1% of the total study area (Figure 4). From this inventory, a total of 35 surface landslide-type movements were described, with a total area of 0.024 km\\({}^{2}\\) (44% of the total area of movements), and a total of 25 surface flow-type movements with a total area of 0.031 km\\({}^{2}\\) (56% of the total area of movements). These movements are concentrated towards the region of Monte de Siradella (east zone--El Grove), which presents a total of 18 movements (8 landslides and 10 flows), and the region of Monte Faro (south zone--Sanxenxo), which presents a total of 35 movements (24 landslides and 11 flows).
Based on the mass movement inventory, the calculation of the weights of evidence for each of the constructed variables was carried out. From the geological attribute and its lithological variable, it is observed that the rocks that contribute to the generation of mass movements are especially the granodiorite with feldspar megacrystals (PcSGdf), the coarse-grained biotite-amphibole late granodiorite (PcSGdt) and the micaschists, quartz schists and paragneisses (PcSM). This is largely due to the physical-mechanical weathering of these rocks associated with the trace of the surrounding faults, as well as to the alteration produced by weathering under changing seasonal conditions.
Figure 4: Inventory of mass movements obtained from the photo interpretation process on satellite images contained in the Google Earth® (Pontevedra, Spain) platform. The red line corresponds to the study area.
The lithological units that do not contribute to the generation of mass movements are the alluvial deposits (Qa), alluvial-colluvial deposits (Qac) and the deposits of old beaches and littoral rasa (Qpa). This is largely due to the fact that these deposits (90% of the area of these units) are located on areas with slopes \\(<\\) 10\\({}^{\\circ}\\), which generates a certain degree of stability to the deposit in the face of mass movement processes, but it is susceptible to surface or laminar erosion processes (Figure 5A). For the variable of distance to faults, processing was first performed on the fault trace where this distance was calculated by means of an algorithm (Euclidean distance) in ArcMap V10.8, with a separation radius of 500 m, obtaining 7 ranges of separation. These ranges were correlated with the inventory of mass movements to obtain the respective weights of evidence. From this result it is observed that the range of 0-500 m contributes to the generation of movements and the range between 1500 and 2000 does not contribute to the generation of movements (Figure 5B).
For the geomorphological attribute and its morphogenetic variable, it is observed that the units of the denudational environment associated with plutonic and metamorphic rocks contribute to the generation of mass movements. The combined effect of seasonal change, physical-mechanical weathering produced by fault traces, slope, and abrupt changes in the surface planes (rugosity) drives the denudational processes of the surfaces, creating conditions favorable to degradational processes on this type of rock. On the contrary, the units of the aeolian, anthropic, and marine environments, such as aeolian dunes with vegetation, roads, and sanded areas, respectively, do not contribute to the generation of mass movements. This is largely due to the location of these units, since they lie in areas with slopes \\(<\\) 10\\({}^{\\circ}\\) (92% of the area of these units), and the processes that prevail in these regions, especially in the marine environment units, correspond to accentuated erosion caused by wave action (Figure 6).
Figure 5: (**A**) Distribution of the weights of evidence calculated for the lithological variable. (**B**) Distribution of the weights of evidence calculated for the distance to faults variable.
For the morphometric variables, the correlation was first made with the slope, which was obtained from the DEM and reclassified into 10\\({}^{\\circ}\\) ranges (nine ranges); this reclassification was then correlated with the mass movement inventory to determine the weights of evidence. From this correlation, it was found that slopes between 60\\({}^{\\circ}\\)-70\\({}^{\\circ}\\) and 50\\({}^{\\circ}\\)-60\\({}^{\\circ}\\) contribute to the generation of mass movements (Figure 7A).
This characteristic would appear to contradict descriptions made at the international level, where the most susceptible regions for the generation of mass movements are reported to lie between 25\\({}^{\\circ}\\) and 35\\({}^{\\circ}\\) [61; 62; 63]. This condition is more associated with the aspects described by Valencia Ortiz et al. [9], who state that the lack of relationship is due to the way the bivariate method works, which is a function of the relationship between areas. In a parallel exercise correlating only the slope with the mass movements, disregarding the area function (histogram), it is observed that 41% of the events occur on slopes between 20\\({}^{\\circ}\\) and 40\\({}^{\\circ}\\), which is consistent with what is described at the international level. On the other hand, slopes between 20\\({}^{\\circ}\\)-30\\({}^{\\circ}\\) and 0\\({}^{\\circ}\\)-10\\({}^{\\circ}\\) do not contribute to the generation of mass movements.
For the morphometric variable of orientation, it is observed that surfaces oriented between 45\\({}^{\\circ}\\)-90\\({}^{\\circ}\\) and 315\\({}^{\\circ}\\)-360\\({}^{\\circ}\\) contribute to the generation of mass movements, and surfaces oriented between 225\\({}^{\\circ}\\)-270\\({}^{\\circ}\\) and 270\\({}^{\\circ}\\)-315\\({}^{\\circ}\\) do not contribute to the generation of mass movements (Figure 7B). The relative relief variable was based on the descriptions made by van Zuidam [64] and was constructed from the processing of the DEM that was divided into 500 m squares, where a neighborhood operation was performed by ranges that were later reclassified into 30 m ranges, obtaining five ranges. The correlation of this
Figure 6: Distribution of the weights of evidence calculated for the morphogenetic variable.
variable with the inventory of movements to obtain the weights of evidence shows that the ranges between 90-120 m and >120 m contribute to the generation of mass movements. On the contrary, the ranges between 0-30 m and 30-60 m do not contribute to the generation of mass movements (Figure 7C). The last morphometric variable calculated from the DEM was rugosity, which was constructed based on the aspects described by Felicisimo [65] and classified into zones from very high to very low rugosity. With this classification, correlated with the inventory of movements for the weights of evidence, it is observed that the zones with very high rugosity contribute to the generation of mass movements, and the zones with very low rugosity do not (Figure 7D).
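As a minimal sketch of the relative relief computation (ours; the study used ArcGIS neighborhood operations on 500 m squares of the 1 m DEM, and the block-based simplification below ignores edge remainders):

```python
import numpy as np

def relative_relief(dem: np.ndarray, cell_size: float = 1.0,
                    window_m: float = 500.0) -> np.ndarray:
    """Relative relief following van Zuidam: elevation range
    (max - min) inside square windows, computed here on
    non-overlapping blocks of the DEM."""
    n = int(window_m / cell_size)
    rows = dem.shape[0] // n * n
    cols = dem.shape[1] // n * n
    d = dem[:rows, :cols].reshape(rows // n, n, cols // n, n)
    rng = d.max(axis=(1, 3)) - d.min(axis=(1, 3))  # range per block
    # broadcast each block value back to pixel resolution
    return np.repeat(np.repeat(rng, n, axis=0), n, axis=1)

# Reclassify into the 30 m ranges used for the weights of evidence
# (classes: 0-30, 30-60, 60-90, 90-120, >120 m):
# rr_cls = np.digitize(relative_relief(dem), [30, 60, 90, 120])
```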
For the land cover attribute and its variable of current land cover status, a layer was obtained that presents a total of 37 level 3 cover classes. From this layer, correlating it with
Figure 7: (**A**) Distribution of the weights of evidence calculated for the slope variable. (**B**) Distribution of the weights of evidence calculated for the orientation variable. (**C**) Distribution of the weights of evidence calculated for the relative relief variable. (**D**) Distribution of the weights of evidence calculated for the rugosities variable.
the inventory of mass movements to obtain the weights of evidence, it is observed that coverages such as mixed woodland, citrus fruit trees, evergreen hardwoods, and conifers are regions that contribute to the generation of mass movements. On the other hand, coverages such as paved or sealed areas, buildings, artificial green areas, urban trees, and vineyards are regions that do not contribute to the generation of mass movements (Figure 8A). Finally, the layer of current land uses was obtained, which has a total of 48 use classes. This layer was correlated with the mass movements, with the result that uses such as natural land areas, abandoned areas, electric power, gas and thermal energy distribution services, and textile manufacturing regions contribute to the generation of mass movements. The regions that do not contribute to the generation of mass movements are especially associated with sports facilities, secondary production, transitory areas, and commercial agricultural production (Figure 8B).
With each of the variables constructed and the weights of evidence of each class calculated, the weights calculated for the class to which each pixel belongs in each of the selected factors are summed, thus obtaining the final susceptibility function or LSI (Landslide Susceptibility Index) [9; 17; 41]. Once the LSI is defined, the data are reclassified into five categories (very low, low, medium, high, and very high) using the geometric interval algorithm. This algorithm creates class breaks that follow a geometric series, which ensures that each class range has approximately the same number of values and that the change between intervals is fairly consistent, generating a visually appealing and cartographically understandable result [66]. From this reclassification into five categories, it is observed that the very high susceptibility category covers an area of 17.9 km\\({}^{2}\\) (18.8%) of the study area, and a total of 43 mass movements fall in this category, with an affected area of 53,943 m\\({}^{2}\\) (98.21%) of the total area of movements. The high susceptibility category presents an area of 14.4 km\\({}^{2}\\) (15.2%), with a total of 16 mass movements, with an area of 966 m\\({}^{2}\\) (1.76%); the medium susceptibility
Figure 8: (**A**) Distribution of the weights of evidence calculated for the current land cover status variable. (**B**) Distribution of the weights of evidence calculated for the land use variable.
category presents an area of 13.9 km\\({}^{2}\\) (14.6%), with a single mass movement with a total area of 17 m\\({}^{2}\\) (0.03%). Finally, low and very low susceptibility together cover an area of 48.9 km\\({}^{2}\\) (51.4%); these categories do not include any mass movement (Table 1). This mass movement susceptibility map shows the largest area in the very high category, especially sectored in the Siradella and Faro mountains, and to a lesser extent in the northeastern region of the municipality of A Illa de Arousa and the southeastern region of the municipality of El Grove (Figure 9).
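A simplified stand-in for the geometric interval reclassification (the exact ArcGIS algorithm also optimizes the class ranges; the fixed common ratio below is an assumption of ours):

```python
import numpy as np

def geometric_breaks(values: np.ndarray, n_classes: int = 5):
    """Approximate 'geometric interval' class breaks: interval widths
    grow as a geometric series between the minimum and maximum LSI.
    Simplified stand-in for the ArcGIS algorithm."""
    lo, hi = float(values.min()), float(values.max())
    r = 2.0                                   # common ratio (assumed)
    widths = r ** np.arange(n_classes)
    widths = widths / widths.sum() * (hi - lo)
    return lo + np.cumsum(widths)[:-1]        # 4 interior breaks

def classify(values, breaks):
    """Map LSI values to classes 1 (very low) ... 5 (very high)."""
    return np.digitize(values, breaks) + 1
```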
Based on the final susceptibility model, the different validation tests were carried out using the methods proposed. For the ROC curve method, a value equal to 0.945 was obtained, and for the confusion matrix, the values of accuracy = 0.9821, precision = 0.9824, recall = 0.9997, and F1-score = 0.991 were obtained. With the construction of the AUC curve and the analysis based on the confusion matrix and its results, it is observed that the calculation of susceptibility by means of bivariate statistical analysis presents a good
\\begin{table}
\\begin{tabular}{c c c c c} \\hline \\hline
**Category** & **Area [m\\({}^{2}\\)]** & **\\%** & **Movements Area [m\\({}^{2}\\)]** & **\\%** \\\\ \\hline Very low & 20,548,890 & 21.6 & 0 & 0 \\\\ Low & 28,314,904 & 29.8 & 0 & 0 \\\\ Medium & 13,891,375 & 14.6 & 17 & 0.03 \\\\ High & 14,413,417 & 15.2 & 966 & 1.76 \\\\ Very high & 17,917,573 & 18.8 & 53,943 & 98.21 \\\\ Total & 95,086,159 & 100 & 54,926 & 100 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Area distribution of mass movement susceptibility categories and their relationship to the mass movement inventory.
Figure 9: Mass movement susceptibility map reclassified into 5 categories and obtained by means of the bivariate method.
performance in the prediction of regions that have historically been unstable, giving a very high level of certainty about the behavior of susceptibility in the entire study area. All these results describe a good behavior of the susceptibility model (Figure 10).
With the validated mass movement susceptibility model, the mass movement hazard map was constructed, integrating the triggering factors (rainfall and earthquake) of the study region. For the rainfall trigger obtained from CEDEX, two scenarios are described, RCP 4.5 and RCP 8.5, defined by the Intergovernmental Panel on Climate Change (IPCC) [67]. The models generated by CEDEX describe that, for RCP 4.5 with a mid-century projection, rainfall can reach values between 11 and 17 mm (maximum annual daily precipitation) for a Tr of 10 years, 15-22 mm for a Tr of 100 years, and 17-25 mm for a Tr of 500 years [59]. For the RCP 8.5 scenario, values between 12 and 18 mm for a Tr of 10 years, between 10 and 24 mm for a Tr of 100 years, and between 13 and 28 mm for a Tr of 500 years are described [59]. From the models generated, the mid-century projection is taken, with data from the RCP 4.5 model and a Tr of 100 years, which represents the more moderate scenario as a mass movement triggering factor for the study area (Figure 11A). For the earthquake trigger, the model generated in 2015 from the Peak Ground Acceleration (PGA) map available on the IGN portal [60] was acquired. This map describes acceleration values for the study region between 0.04 g \\(\\leq\\) PGA < 0.08 g (39.2 cm/s\\({}^{2}\\) \\(\\leq\\) PGA < 78.4 cm/s\\({}^{2}\\)) (Figure 11B).
The rainfall and acceleration maps, each classified into five ranges, were assigned numerical values (1 to 5) in order to correlate the values present in the study area with the data obtained from the susceptibility model by means of a combination matrix (Table 2). This assignment of values is established to perform a consistent correlation between the values of the triggers and the mass movement susceptibility [57; 58]. For the correlation by means of the combination matrix, the susceptibility classification is taken as the abscissa axis and the sum of each trigger with the susceptibility values as the ordinate axis; the crossing between axes describes the mass movement hazard [58]. This hazard map represents the spatiotemporal conditions of the study area in four categories (low, medium, high, and very high) (Figure 12).
Figure 10: ROC curve distribution, its AUC evaluation, and confusion matrix results.
The present mass movement hazard map shows a total area of 17.76 km\\({}^{2}\\) (18.89%) of the study area in the very high category, 14.30 km\\({}^{2}\\) (15.21%) in the high category, 41.50 km\\({}^{2}\\) (44.14%) in the medium category, and 20.45 km\\({}^{2}\\) (21.75%) in the low category. These hazard conditions are associated with maximum daily rainfall between 15 and 22 mm, a condition that may change due to climate change (this aspect is discussed in the following section), and acceleration values with a PGA between 0.064 and 0.108 for the study region. Indirectly, the seasonality of the region must also be considered, together with snow conditions that can create superficial weathering and erosion, contributing to a certain extent to the creation of instability planes. The municipalities with the largest area in the very high category of mass movement hazard are El Grove, A Illa de Arousa, and Sanxenxo (Table 3).
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline
\\multicolumn{2}{c}{**Rainfall Classification**} & \\multicolumn{2}{c}{**Acceleration Classification**} & \\multicolumn{2}{c}{**Susceptibility**} \\\\ \\hline
**Range [mm]** & **Classification** & **Range [PGA]** & **Classification** & **Category** & **Classification** \\\\ \\hline
\\(-\\)6.8–6 & 1 & 0.02–0.064 & 1 & Very low & 1 \\\\
6.1–12 & 2 & 0.064–0.108 & 2 & Low & 2 \\\\
13–18 & 3 & 0.108–0.152 & 3 & Medium & 3 \\\\
19–24 & 4 & 0.152–0.196 & 4 & High & 4 \\\\
25–30 & 5 & 0.196–0.24 & 5 & Very high & 5 \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 2: Ranking of rainfall and acceleration for the combination matrix with susceptibility values.
Figure 11: (**A**) Maximum annual daily precipitation map, RCP 4.5 model, and a Tr of 100 years for Spain. Modified map from CEDEX [59]. (**B**) Seismic hazard map, PGA values in gravity acceleration for Spain. Modified map from IGN [60].
## 5 Discussion
The development of this type of cartography, built by integrating in a mathematical function the inherent elements of the terrain correlated with historical processes to establish a probability that is subsequently crossed with the triggering factors to describe the spatiotemporal relationships, is a key point to consider in the evaluation of risk management plans and especially in the planning of urban development and the protection of natural resources. This aspect is discussed by Bathrellos et al. (2018), who describe how the correct application of a methodology appropriate for a specific environment, at a coherent working scale, will determine the potentially susceptible sites so
Figure 12: Mass movement hazard map based on the susceptibility model and the triggers of rainfall and earthquake; this map is reclassified into 4 categories.
\\begin{table}
\\begin{tabular}{c c c c c c c c c c} \\hline \\hline
**Municipality** & **Area [m\\({}^{2}\\)]** & **Low** & **\\%** & **Medium** & **\\%** & **High** & **\\%** & **Very High** & **\\%** \\\\ \\hline
A Illa de Arousa & 5,085,203 & 299,543 & 5.89 & 1,993,794 & 39.21 & 1,572,726 & 30.93 & 1,219,140 & 23.97 \\\\ \\hline \\hline
\\end{tabular}
\\end{table}
Table 3: Area distribution of mass movement hazard categories by municipality.
that people such as planners, engineers, and policy-makers can improve the quality of life of the inhabitants. In turn, this type of information is useful for governmental and non-governmental bodies to define an appropriate level of management and to prepare the necessary measures based on the information obtained in combination with risk analysis, thus estimating the possible average costs associated with the damage generated by mass movements [69].
This type of evaluation is relevant in the aspects described above, but, in the construction of a susceptibility model, the intrinsic value associated with the appraisals and descriptions made by expert criteria must also be taken into account, according to the scale of work, since the mathematical methods adopted work on the basis of the data entered and the corresponding correlation of the information [9]. This is largely due to the cause-effect relationship, where the real conditions of an environment can be overestimated or underestimated, which, in this specific case, would be given by the spatial relationship (susceptibility) and its triggering factors. These conditions correspond to a low-complexity system of unidirectional nonlinear type, where triggers such as rainfall and earthquakes (for this study) have an increasing effect on mass movements, but the movements have no effect on the triggers [20; 70]. It is also important to have inputs designed for the environmental conditions present in the evaluated region (site effect), since regional models can bias or homogenize their behavior, as is the case of the spatiotemporal relationship, where susceptibility is integrated with the triggers of a region [9; 20; 57]. In turn, within the cartographic conditions for the creation of a suitable susceptibility model in accordance with the level of interpretation and detail required within an urban planning and regional zoning scheme, the cartographic inputs must match that level, since inconsistencies can lead to problems for anthropic elements [16].
A relevant aspect of these cartographic constructions, especially the creation of the mass movement inventory, is the limitation of the record obtained, both from national or international databases and from the photointerpretation process. Considering only the photointerpretation process: from aerial photographs we only have the very specific dates on which the photographs were acquired, and from satellite images the acquisition is limited to the last decades. This means that we do not have a complete picture of the region under evaluation or of the regions that could potentially be unstable. From these aspects arises the importance of acquiring first-hand information sources for a correct interpretation and evaluation of a region [9; 11].
A parallel example to this study, involving somewhat similar conditions in terms of the scale of work, expert criteria, and level of detail, is found in the descriptions made by De Moel et al. [71], who mention notable contrasts in the different levels of information described for the knowledge of risk associated with floods. From this perspective, and taking only a local scale as reference, shortcomings and urgent needs are described in the study and analysis of the effects generated by floods on critical infrastructures, given their importance for society, the economy, emergency management, and reconstruction [71]. These factors, which imply further progress in the knowledge of the elements associated with risk, can be extrapolated to mass movement processes, since the conditions for the generation of an event are largely governed by hydrometeorological effects. On the other hand, mass movement processes, like floods, are factors that should be given priority in studies at larger scales, due to the implications they have for anthropic environments. These aspects are becoming more and more relevant due to the changing climatic conditions we are currently experiencing.
An overview of these changes can be seen in the reports generated by the Centre for Research on the Epidemiology of Disasters (CRED), which in its 2023 report analyzes the number of disasters generated in the year 2022 compared with the annual average between 2002 and 2021. This report describes that for the year 2022 a total of 387 events were recorded, against an annual average of 370 for the period 2002-2021, comprising 16 droughts, 27 earthquakes, 168 floods, 18 landslides, 104 storms, 6 volcanic activities, and 11 forest fires; for the year 2022 alone there were 22 droughts, 31 earthquakes, 176 floods, 17 landslides, 108 storms, 5 volcanic activities, and 15 forest fires [72]. The above describes a considerable increase, in just one year, in the number of events recorded worldwide. These increases in natural disasters due to climate change in recent years are factors to be considered indirectly when establishing relationships with future climate projections in the evaluated environment. If we look at the projections for the study region and take only the climatic factors of temperature (annual maximum) and rainfall (24 h maximum and number of days with rainfall) described in the climate scenario viewer of the Ministry for Ecological Transition and the Demographic Challenge of Spain [73], we observe three scenarios for an RCP of 4.5. A near-future scenario would have temperature values between 19\\({}^{\\circ}\\) and 21\\({}^{\\circ}\\) C, maximum 24 h rainfall between 51 and 67 mm, and a number of rainy days between 122 and 135 [73]. For a medium-future scenario, temperature values would be between 20\\({}^{\\circ}\\) and 21\\({}^{\\circ}\\) C, maximum 24 h rainfall between 54 and 70 mm, and a number of rainy days between 115 and 135 [73]. For a distant-future scenario, temperature values would be between 20\\({}^{\\circ}\\) and 22\\({}^{\\circ}\\) C, maximum 24 h rainfall between 55 and 71 mm, and a number of rainy days between 116 and 135 [73].
In order to better understand whether these scenarios represent an increase or decrease over the base values for the study region, a comparison should be made with the historical data reported. In this sense, the information reported by the Spanish State Meteorological Agency (Aemet) is used, with average maximum temperature values between 17.5\\({}^{\\circ}\\) and 20\\({}^{\\circ}\\) C, average maximum daily precipitation between 70 and 80 mm, and an average number of days with rainfall greater than 1 mm between 100 and 125 days [74]. With this information, a variation of the data is observed, especially an increase in the number of days with rain. This type of change, with an increase in the number of days with rain or in the maximum 24 h rainfall, can intensify the action of rainfall as a trigger for mass movements. It is also important to consider that, to define a complete scenario, detailed studies of rainfall as a trigger of mass movements must be carried out, estimating the threshold at which an event can be triggered, the probability of exceeding this threshold, and its recurrence [20]. These aspects greatly improve the understanding of the phenomenon and its causality. This contrast indicates increases in the observed parameters, pointing to a greater possibility of generating surface processes, especially mass movements. Hence the importance of expanding the knowledge of the risk associated with these types of hazards to improve, from structural and non-structural conditions at a local level of detail, the actions and measures to mitigate the impacts that may be generated. Thus, the characterization of an environment disturbed by climatic or internal land conditions is necessary to estimate the degree of impact that may occur on natural and anthropic environments [75; 76].
## 6 Conclusions
The evaluation of the mass movement hazard, constructed from the susceptibility model generated by the bivariate method and combined with the rainfall and earthquake triggers, describes in four categories (low, medium, high, and very high) the regions in hazard condition. The susceptibility model is presented in five categories, of which high and very high susceptibility represent 34% (32.33 km\\({}^{2}\\)) of the total area, with the greatest extension over the areas of Mount Siradella and Mount Faro. This susceptibility model was created based on nine variables (lithology, distance to faults, morphogenesis, slope, orientation, rugosity, relative relief, land cover, and land use) that were correlated with the mass movement inventory of 60 records, thus obtaining the respective weights of evidence. The final susceptibility model, according to the methods proposed for its validation, obtained a value of 0.945 for the ROC curve, and the confusion matrix gave accuracy = 0.9821, precision = 0.9824, recall = 0.9997, and F1-score = 0.991. This reclassified susceptibility model was combined in a matrix with the rainfall and earthquake triggers to obtain the hazard. The rainfall trigger for the study region presents maximum annual daily precipitation values between 15 and 22 mm for a 100-year Tr according to RCP 4.5 with a mid-century projection. For the earthquake trigger, the Peak Ground Acceleration (PGA) values for a 475-year return period lie between 0.04 g \\(\\leq\\) PGA < 0.08 g (39.2 cm/s\\({}^{2}\\) \\(\\leq\\) PGA < 78.4 cm/s\\({}^{2}\\)). The mass movement hazard map in its high and very high categories covers an area of 32.06 km\\({}^{2}\\) (34.1%) of the study area, with the municipalities of El Grove, Sanxenxo, and A Illa de Arousa having the greatest extent of hazard in these two categories. The study carried out here is an advance in the non-structural measures for the knowledge of the risk associated with these processes. In addition, part of the inputs used come from public entities such as the Geological and Mining Institute of Spain (IGME), the National Geographic Institute (IGN), and the Center for Studies and Experimentation of Public Works (CEDEX), which make the thematic information freely available in different formats so that it can be processed according to the study to be developed. Also, this type of processing to obtain the susceptibility and hazard maps can be completed with spreadsheets and free geographic information systems, which means this type of study can be generated at low cost, but with products that are relevant for concrete actions in urban planning and the preservation of natural resources. At the same time, it is a tool that can easily be incorporated into disaster risk management actions, due to the sectorization and characterization of regions in hazardous conditions. However, this type of study should be carried out continuously, since periodic updates with new records of mass movements, inherent factors, and trigger analyses will improve the prediction and quality of the products obtained, in a process of positive feedback of the system.
Author Contributions: Conceptualization, data collection, data analysis, writing of the draft and final manuscript, calculation of susceptibility and hazard due to mass movements, J.A.V.O.; concept and writing of the draft manuscript, C.E.N. and A.M.M.-G. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Acknowledgments: This research was assisted by Grant 131874B-I00 funded by MCIN/AEI/10.13039/501100011033 and the GEAPAGE research group (Environmental Geomorphology and Geological Heritage) of the University of Salamanca.
Conflicts of Interest: The authors declare no conflicts of interest.
## References
* Intergovernmental Panel on Climate Change (IPCC) (2007) IPCC. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change; Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M., Miller, H.L., Eds.; Cambridge University Press: Cambridge, UK, 2007.
* Gariano et al. (2016) Gariano, S.L.; Guzzetti, F. Landslides in a changing climate. _Earth Sci. Rev._**2016**, _162_, 227-252. [CrossRef]
* Alvioli et al. (2018) Alvioli, M.; Melillo, M.; Guzzetti, F.; Rossi, M.; Palazzi, E.; von Hardenberg, J.; Brunetti, M.T.; Peruccacci, S. Implications of climate change on landslide hazard in Central Italy. _Sci. Total. Environ._**2018**, _630_, 1528-1543. [CrossRef]
* Patton et al. (2019) Patton, A.I.; Rathburn, S.L.; Capps, D.M. Landslide response to climate change in permafrost regions. _Geomorphology_**2019**, _340_, 116-128. [CrossRef]
* Wood et al. (2020) Wood, J.; Harrison, S.; Reinhardt, L.; Taylor, F. Landslide databases for climate change detection and attribution. _Geomorphology_**2020**, _355_, 107061. [CrossRef]
* Popescu (2002) Popescu, M. Landslide Causal Factors and Landslide Remedial Options. In Proceedings of the 3rd International Conference on Landslides, Slope Stability and Safety of Infra-Structures, Singapore, 11-12 July 2002; pp. 61-81.
* Van Westen et al. (2008) Van Westen, C.J.; Castellanos, E.; Kuriakose, S.L. Spatial data for landslide susceptibility, hazard, and vulnerability assessment: An overview. _Eng. Geol._**2008**, _102_, 112-131. [CrossRef]
* Lv et al. (2022) Lv, L.; Chen, T.; Dou, J.; Plaza, A. A hybrid ensemble-based deep-learning framework for landslide susceptibility mapping. _Int. J. Appl. Earth Obs. Geoinf._**2022**, _108_, 102713. [CrossRef]
* Ortiz et al. (2023) Ortiz, J.A.V.; Martinez-Grana, A.M.; Mendez, L.M. Evaluation of Susceptibility by Mass Movements through Stochastic and Statistical Methods for a Region of Bucaramanga, Colombia. _Remote Sens._**2023**, _15_, 4567. [CrossRef]
* Varnes (1978) Varnes, D.J. Slope movement types and processes. In _Landslides: Analysis and Control_; Schuster, R.L., Krizek, R.J., Eds.; Transportation and Road Research Board, National Academy of Science: Washington, DC, USA, 1978; pp. 11-13.
* Fell et al. (2008) Fell, R.; Corominas, J.; Bonnard, C.; Cascini, L.; Leroi, E.; Savage, W.Z. Guidelines for landslide susceptibility, hazard and risk zoning for land use planning. _Eng. Geol._**2008**, _102_, 85-98. [CrossRef]
* Ortiz and Martinez-Grana (2018) Ortiz, J.A.V.; Martinez-Grana, A.M. A neural network model applied to landslide susceptibility analysis (Capitanejo, Colombia). _Geomatics Nat. Hazards Risk_**2018**, \\(9\\), 1106-1128. [CrossRef]
* Liu et al. (2023) Liu, R.; Ding, Y.; Sun, D.; Wen, H.; Gu, Q.; Shi, S.; Liao, M. Insights into spatial differential characteristics of landslide susceptibility from sub-region to whole-region used by northeast Chongqing, China. _Geomat. Nat. Hazards Risk_**2023**, _14_, 2190858. [CrossRef]
* Soeters and van Westen (1996) Soeters, R.; van Westen, C.J. Slope instability recognition, analysis and zonation. In _Landslide Types and Processes_; Turner, A.K., Schuster, R.L., Eds.; Special Report National Research Council 247; National Academy of Sciences: Washington, DC, USA, 1996; Volume 247, pp. 129-177.
* Huang et al. (2020) Huang, F.; Cao, Z.; Guo, J.; Jiang, S.-H.; Li, S.; Guo, Z. Comparisons of heuristic, general statistical and machine learning models for landslide susceptibility prediction and mapping. _CATENA_**2020**, _191_, 104580. [CrossRef]
* Cascini (2008) Cascini, L. Applicability of landslide susceptibility and hazard zoning at different scales. _Eng. Geol._**2008**, _102_, 164-177. [CrossRef]
* van Westen (2013) van Westen, C. _Guidelines for the Generation of 1:50.000 Scale Landslide Inventory, Susceptibility Maps, and Qualitative Risk Maps, Illustrated with Case Studies of the Provinces Thanh Hoa and Nghe An_; University of Twente: Enschede, The Netherlands, 2013.
* Chen et al. (2016) Chen, W.; Chai, H.; Sun, X.; Wang, Q.; Ding, X.; Hong, H. A GIS-based comparative study of frequency ratio, statistical index and weights-of-evidence models in landslide susceptibility mapping. _Arab. J. Geosci._**2016**, \\(9\\), 204. [CrossRef]
* Wieczorek (1996) Wieczorek, G.F. Landslides: Investigation and mitigation. In _Chapter 4-Landslide Triggering Mechanisms_; Transportation Research Board Special Report: Washington, DC, USA, 1996; pp. 76-90.
* Ortiz and Martinez-Grana (2023) Ortiz, J.A.V.; Martinez-Grana, A.M. Calculation of precipitation and seismicity thresholds as triggers for mass movements in the region of Bucaramanga, Colombia. _Ecol. Indic._**2023**, _152_, 110355. [CrossRef]
* Corominas and Moya (1999) Corominas, J.; Moya, J. Reconstructing recent landslide activity in relation to rainfall in the Llobregat River basin, Eastern Pyrenees, Spain. _Geomorphology_**1999**, _30_, 79-93. [CrossRef]
* Iverson (2000) Iverson, R.M. Landslide triggering by rain infiltration. _Water Resour. Res._**2000**, _36_, 1897-1910. [CrossRef]
* Dai and Lee (2001) Dai, F.; Lee, C. Frequency-volume relation and prediction of rainfall-induced landslides. _Eng. Geol._**2001**, _59_, 253-266. [CrossRef]
* Rosi et al. (2016) Rosi, A.; Peternel, T.; Jemec-Aufikic, M.; Komac, M.; Segoni, S.; Casagli, N. Rainfall thresholds for rainfall-induced landslides in Slovenia. _Landslides_**2016**, _13_, 1571-1577. [CrossRef]
* Dikshit et al. (2020) Dikshit, A.; Sarkar, R.; Pradhan, B.; Segoni, S.; Alamri, A.M. Rainfall Induced Landslide Studies in Indian Himalayan Region: A Critical Review. _Appl. Sci._**2020**, _10_, 2466. [CrossRef]
* Sidle and Swaston (1982) Sidle, R.C.; Swaston, D.N. Analysis of a small debris slide in coastal Alaska. _Can. Geotech. J._**1982**, _19_, 167-174. [CrossRef]
* Fredlund (1987) Fredlund, D.C. Slope stability analysis incorporating the effect of soil suction. In _Slope Stability_; Anderson, M.G., Ed.; John Wiley & Sons Ltd.: Saskatchewan, SK, Canada, 1987; Chapter 4.
* Crosta (1998) Crosta, G. Regionalization of rainfall thresholds: An aid to landslide hazard evaluation. _Environ. Geol._**1998**, _35_, 131-145. [CrossRef]
* Aleotti (2004) Aleotti, P. A warning system for rainfall-induced shallow failures. _Eng. Geol._**2004**, _73_, 247-265. [CrossRef]
* Rodriguez et al. (1999) Rodriguez, C.; Bommer, J.; Chandler, R. Earthquake-induced landslides: 1980-1997. _Soil Dyn. Earth. Eng._**1999**, _18_, 325-346. [CrossRef]
* Bird et al. (2004) Bird, J.F.; Bommer, J.J. Earthquake losses due to ground failure. _Eng. Geol._**2004**, _75_, 147-179. [CrossRef]
* Corominas et al. (2014) Corominas, J.; Van Westen, C.; Frattini, P.; Cascini, L.; Malet, J.-P.; Fotopoulou, S.; Catani, F.; Van Den Eeckhaut, M.; Mavrouli, O.; Agliardi, F.; et al. Recommendations for the quantitative analysis of landslide risk. _Bull. Eng. Geol. Environ._**2014**, _73_, 209-263. [CrossRef]
* UNDRR (2015) UNDRR. _Sendai Framework for Disaster Risk Reduction_; Geneva, Switzerland, 2015.
* IGME (1981) IGME. _Mapa Geologico de Espana E. 1:50.000, Puebla de Caramital, Hoja 151 (3-9), Segunda Serie--Primera Eclicion_; Instituto Geologico y Minero de Espana, Servicio de Publicaciones Ministerio de Industria y Energia: Madrid, Spain, 1981.
* IGME (1981) IGME. _Mapa Geologico de Espana E. 1:50.000, Grove, Hoja 184 (3-10), Segunda Serie--Primera Eclicion_; Instituto Geologico y Minero de Espana, Servicio de Publicaciones Ministerio de Industria y Energia: Madrid, Spain, 1981.
* IGME (1981) IGME. _Mapa Geologico de Espana E. 1:50.000, Pontereydra, Hoja 185 (4-10), Segunda Serie--Primera Eclicion_; Instituto Geologico y Minero de Espana, Servicio de Publicaciones Ministerio de Industria y Energia: Madrid, Spain, 1981.
* IGME (1981) IGME. _Mapa Geologico de Espana E. 1:50.000, Pullagarcia De Arosa, Hoja 152 (4-9), Segunda Serie--Primera Eclicion_; Instituto Geologico y Minero de Espana, Servicio de Publicaciones Ministerio de Industria y Energia: Madrid, Spain, 1982.
* Martinez-Grana et al. (2017) Martinez-Grana, A.M.; Arias, L.; Goy, J.L.; Zazo, C.; Silva, P. Geomorphology of the mouth of the Arosa estuary (Coruna-Pontevedra, Spain). _J. Maps_**2017**, _13_, 554-562. [CrossRef]
* Cruden (1991) Cruden, D.M. A simple definition of a landslide. _Bull. Eng. Geol. Environ._**1991**, _43_, 27-29. [CrossRef]* (40) UNDRR. Natural Disasters and Vulnerability Analysis: Report of Expert Group Meeting, 9-12 July 1979. _Office of the United Nations Disaster Relief Coordinator_. Available online: [https://digitallibrary.un.org/record/95986?ln=en&v=pdf](https://digitallibrary.un.org/record/95986?ln=en&v=pdf) (accessed on 11 April 2024).
* (41) Bonham-Carter, G.F.; Agterberg, F.P.; Wright, D.F. Integration of geological datasets for gold exploration in Nova Scotia. _Photogramm. Eng. Remote Sens._**1988**, _54_, 1585-1592. [CrossRef]
* (42) Bonham, G. _Geographic Information Systems for Geoscientists: Modelling with GIS (No. 13)_; Pergamon, Ed.; Elsevier: Amsterdam, The Netherlands, 1994; Volume 13.
* (43) Sato, H.P.; Harp, E.L. Interpretation of earthquake-induced landslides triggered by the 12 May 2008, M7.9 Wenchuan earthquake in the Beichuan area, Sichuan Province, China using satellite imagery and Google Earth. _Landslides_**2009**, \\(6\\), 153-159. [CrossRef]
* (44) Guzzetti, F.; Mondini, A.C.; Cardinali, M.; Fiorucci, F.; Santangelo, M.; Chang, K.-T. Landslide inventory maps: New tools for an old problem. _Earth-Sci. Rev._**2012**, _112_, 42-66. [CrossRef]
* (45) Hunger, O.; Leroueil, S.; Picarelli, L. The Varnes classification of landslide types, an update. _Landslides_**2014**, _11_, 167-194. [CrossRef]
* (46) Skemton, A.W.; Hutchinson, J.N. Stability of natural slopes and embankment foundations. In Proceedings of the Seventh International Conference on Soil Mechanics and Foundation Engineering 4, Mexico City, Mexico, 25-29 August 1969; Sociedad Mexicana de Mecanica de de Suelos: Mexico City, Mexico, 1969; pp. 291-340. Available online: [https://trid.trb.org/view/125702](https://trid.trb.org/view/125702) (accessed on 11 April 2024).
* (47) IGN; Instituto Geografico Nacional. Centro de Descargas--CaroBase ANE. Available online: [https://centrodedescargas.cnig.es/CentroDescargas/index.jsp#](https://centrodedescargas.cnig.es/CentroDescargas/index.jsp#) (accessed on 11 April 2024).
* (48) Zinck, J.A. Geopedologia: Elementos de geomorfologia para estudios de suelos y de risegos naturales: Enschede. In _International Institute for Geo-Information Science and Earth Observation (ITC)_; ITC Special Lecture Notes Series; University of Twente: Enschede, The Netherlands, 2012.
* (49) Bossard, M.; Feranec, J.; Otahel, J. _CORINE Land Cover Technical Guide: Addendum 2000_; European Environment Agency: Copenhagen, Denmark, 2000; Volume 40.
* (50) Charman, P.V.; Murphy, B.W. _Solls: Their Properties and Management_, 2nd ed.; Oxford University Press: Melbourne, Australia; Oxford, UK, 2000.
* (51) IGN; Instituto Geografico Nacional. Centro de Descargas--CORINE Land Cover. Available online: [https://qrcd.org/5bHm](https://qrcd.org/5bHm) (accessed on 12 April 2024).
* (52) IGN; Instituto Geografico Nacional. Centro de Descargas--SIOSE AR. Available online: [https://qrcd.org/5rAl#](https://qrcd.org/5rAl#) (accessed on 12 April 2024).
* (53) Dahal, R.K.; Hasegawa, S.; Nonomura, A.; Yamanaka, M.; Dhakal, S.; Paudyal, P. Predictive modelling of rainfall-induced landslide hazard in the Lesser Himalaya of Nepal based on weights-of-evidence. _Geomorphology_**2008**, _102_, 496-510. [CrossRef]
* (54) Vakhshoori, V.; Zare, M. Landslide susceptibility mapping by comparing weight of evidence, fuzzy logic, and frequency ratio methods. _Geomat. Nat. Hazards Risk_**2015**, \\(7\\), 1731-1752. [CrossRef]
* (55) Lin, L.; Lin, Q.; Wang, Y. Landslide susceptibility mapping on a global scale using the method of logistic regression. _Nat. Hazards Earth Syst. Sci._**2017**, _17_, 1411-1424. [CrossRef]
* (56) Zhao, Z.; He, Y.; Yao, S.; Yang, W.; Wang, W.; Zhang, L.; Sun, Q. A comparative study of different neural network models for landslide susceptibility mapping. _Adv. Space Res._**2022**, _70_, 383-401. [CrossRef]
* (57) Nadim, F.; Kjekstad, O.; Peduzzi, P.; Herold, C.; Jaedicke, C. Global landslide and avalanche hotspots. _Landslides_**2006**, \\(3\\), 159-173. [CrossRef]
* (58) SGC. _Documento Metodologico de la Zonificacion de Susceptibilidad y Amenza por Movimientos en Masa Escala 1:100.000_; Servicio Geologico Colombiano (SGC): Bogota, Colombia, 2013. [CrossRef]
* (59) CEDEX; Centro de Estudios y Experimentacion de Obras Publicas. Ministerio de Transportes y Movilidad Sostenible. Available online: [https://ceh.cedex.es/web/Imp_CClimatico_Pmax.htm](https://ceh.cedex.es/web/Imp_CClimatico_Pmax.htm) (accessed on 12 April 2024).
* (60) IGN. Instituto Geografico Nacional. _Mapas de Sismicidad y Peligrosidad._ Available online: [https://www.ign.es/web/ign/portal](https://www.ign.es/web/ign/portal) (accessed on 12 April 2024).
* (61) Pachauri, A.; Pant, M. Landslide hazard mapping based on geological attributes. _Eng. Geol._**1992**, _32_, 81-100. [CrossRef]
* (62) Dai, F.C.; Lee, C.F. Landslide characteristics and slope instability modeling using GIS, Lantau Island, Hong Kong. _Geomorphology_**2002**, _42_, 213-228. [CrossRef]
* (63) Santacana, N. Analisis de la susceptibilidad del terreno a la formacion de deslizamientos superficiales y grandes deslizamientos mediante el uso de sistemas de informacion geografica. Aplicacion a la cuenca alta del rio Llobregat. Ph.D. Thesis, UPC, Departament d'Enginyeria del Terreny, Cartografica i Geofisica, Barcelona, Spain, 2001.
* (64) van Zuidam, R.A. _Aerial Photo-Interpretation in Terrain Analysis and Geomorphologic Mapping_; Smits Publishers: The Hague, The Netherlands, 1986.
* (65) Felicisimo, A.M. Modelos Digitales del Terreno. Oviedo: Pentalfa. 1994. Available online: [http://www.etsimo.unioives/~feli](http://www.etsimo.unioives/~feli) (accessed on 11 November 2016).
* (66) ESRI. GIS Dictionary. Available online: [https://support.esri.com/en-us/gis-dictionary/geometric-interval-classification](https://support.esri.com/en-us/gis-dictionary/geometric-interval-classification) (accessed on 15 May 2024).
* IPCC (2013) IPCC. _Climate Change 2013: The Physical Science Basis Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change_; Stocker, T.D.-K., Ed.; Cambridge University Press: New York, NY, USA; Cambridge, UK, 2013; pp. 91, 106, 184.
* Bathrellos et al. (2012) Bathrellos, G.D.; Gaki-Papanastassiou, K.; Skidodimou, H.D.; Papanastassiou, D.; Chousianitis, K.G. Potential suitability for urban planning and industry development using natural hazard maps and geological-geomorphological parameters. _Environ. Earth Sci._**2012**, _66_, 537-548. [CrossRef]
* Glade et al. (2000) Glade, T.; Crozier, M.; Smith, P. Applying Probability Determination to Refine Landslide-triggering Rainfall Thresholds Using an Empirical \"Antecedent Daily Rainfall Model\". _Pure Appl. Geophys._**2000**, _157_, 1059-1079. [CrossRef]
* Li and Convertino (2021) Li, J.; Convertino, M. Inferring ecosystem networks as information flows. _Sci. Rep._**2021**, _11_, 7094. [CrossRef] [PubMed]
* de Moel et al. (2015) de Moel, H.; Jongman, B.; Kreibich, H.; Merz, B.; Penning-Rowsell, E.; Ward, P.J. Flood risk assessments at different spatial scales. _Mitig. Adapt. Strat. Glob. Chang._**2015**, _20_, 865-890. [CrossRef] [PubMed]
* CRED (2022) CRED; Centre for Research on the Epidemiology of Disasters. Disasters in Numbers 2022. Available online: [https://www.cred.be/publications](https://www.cred.be/publications) (accessed on 19 April 2024).
* Miteco (2024) Miteco. AdapteCCa.es- Visor de Escenarios de Cambio Climatico. Ministerio para la Transicion Ecologica y el Reto Demogrifico--Miteco. Available online: [https://escenarios.adaptecca.es/](https://escenarios.adaptecca.es/) (accessed on 23 June 2024).
* Aemet (2024) Aemet. Valores climatologicos normales. Agencia Estatal de Meteorologia--Aemet. Available online: [https://qrcd.org/5gqV](https://qrcd.org/5gqV) (accessed on 23 June 2024).
* David (2008) David, L. Quarrying: An anthropogenic geomorphological approach. _Acta Montana. Slovaca_**2008**, _13_, 66-74.
* Szabo et al. (2010) Szabo, J.; David, L.; Loczy, D. _Anthropogenic Geomorphology: A Guide to Man-Made Landforms_; Springer Science & Business Media: Dordrecht, The Netherlands, 2010. [CrossRef]
**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. | Knowledge of hazard conditions due to mass movements is one of the non-structural measures for risk management, urban planning, and protection of natural resources. To obtain this type of mapping, a spatial construction was started by correlating the historical movements with the inherent variables of the terrain by means of the bivariate statistical method, which assigns densities or weights of evidence to estimate the degree of susceptibility. This model was combined with the triggering factors (rainfall and earthquake) to determine the spatiotemporal conditions (hazard). From this procedure, it was obtained that the susceptibility model presents 34% (32.33 km\\({}^{2}\\)) of the total area in the high and very high categories, especially in the regions of Mount Siradella and Mount Faro. The validation of the present model obtained a value of 0.945 with the ROC curve. For the hazard condition, 34.1% (32.06 km\\({}^{2}\\)) of the study area was found to be in the high and very high category, especially in the municipalities of El Grove, Sanxenxo, and A Illa de Arousa, which have the greatest extension. The present evaluation is an advance in the knowledge of the risk and the actions that can be derived, as in turn, this type of study is an easy tool to obtain due to its low cost and information processing. | Condense the content of the following passage. | 286 |
arxiv-format/2010_08092v1.md | # Human Segmentation with Dynamic LiDAR Data
Tao Zhong\\({}^{*}\\), Wonjik Kim\\({}^{\\dagger}\\), Masayuki Tanaka\\({}^{\\ddagger}\\) and Masatoshi Okutomi\\({}^{\\lx@sectionsign}\\)
Department of Systems and Control Engineering, School of Engineering,
Tokyo Institute of Technology, Meguro-ku, Tokyo 152-8550, Japan
Email: *[email protected], \\({}^{\\dagger}\\)[email protected], \\({}^{\\ddagger}\\)[email protected], \\({}^{\\lx@sectionsign}\\)[email protected]
## I Introduction
Accurate and fast 3D environment perception is of great importance in robotics field, as real-time geometric information of pedestrians is necessary for the following motion planning. Light Detection and Ranging (LiDAR) is especially attracting research and industry interest because of its large detection range, robustness under various conditions, descending price, etc. While early researches [1] used handcraft feature in point cloud processing, learning-based methods are also able to extract the features these days.
In most of the current datasets, the point cloud generated within one scan is formatted as one frame. Therefore, when the detection or segmentation is done frame by frame, each frame is treated as independent static 3D data. Many researchers have successfully applied deep learning methods to static point cloud perception. They [2, 3, 4, 5, 6] extract spatially local and global features based on various representations of point cloud, such as point-wise method, voxel-based method and spherical representation method, etc. Subsequently, the features are used for classification, object detection, and semantic segmentation.
In the real world, LiDAR rotates constantly, generating consecutive sequences of point cloud frames. These 3D videos are dynamic and thus contain abundant spatial and temporal cues. However, compared to the static point cloud, there is so far few researches regarding the dynamic point cloud. One of the reasons is the lack of annotated training data, since labeling millions of points in sequences costs undoubtedly a tremendous amount of time and resources. Recently, the release of several synthetic and real datasets [7, 8, 9] for dynamic point cloud is accelerating some related researches. The point-wise method [10] and voxel-based method [11], which were already proved effective in static point cloud domain, have been extended to dynamic point cloud domain. They have demonstrated better performance than before, and showed the benefits of spatio-temporal features.
The proposed architecture in this work is inspired by the two-stream networks for unsupervised video object segmentation [12], which constructed communication of spatial and temporal features. With the help of a dataset that provides both per-point classification annotation and velocity annotation for synthetic LiDAR data, we train a network that can jointly predict human segmentation and velocity map in the dynamic point cloud. Consequently, different from the previous works that make use of the temporal information implicitly, we are able to extract it explicitly and leverage it in the segmentation task. Moreover, the proposed model is based on the spherical representation method, taking a sequence of 2D range images of point cloud sequences as input. To the best of our knowledge, our work is the first one that study the dynamic point cloud segmentation with this representation.
The experiments show that the proposed model achieves higher accuracy than the existing state-of-the-art methods on the human segmentation dataset. The utilization of temporal cues effectively improves the performance of the human detection. We also investigate the effect of the length of frames which we use for human detection.
Fig. 1: An example of segmentation results on generated data. Blue points are classified as background, and red points are classified as human.
All data and sample code are available online1.
Footnote 1: [http://www.ok.sc.e.titech.ac.jp/res/LHD/](http://www.ok.sc.e.titech.ac.jp/res/LHD/)
## II Related Works
### _Static point cloud perception_
According to the way of handling the unordered point cloud, recent learning-based methods can be divided into two types: point-based networks [3, 13, 14] and projection-based networks [2, 4, 15, 6].
Point-based methods aim to directly process raw points. As the groundbreaking work, PointNet [3] takes the coordinates of points as input. The global feature is obtained from the point-wise feature with a symmetric function. PointNet++ [13] further takes the local structure into consideration by grouping spatially close points. The point-based method is good at extracting permutation-invariant features, but it also comes with relatively high computation and memory requirements.
Projection-based methods first convert point cloud into intermediate representations. Projecting point cloud onto multi-view planes [4] makes implementing 2D convolution on each view possible. View-wise features are then aggregated into the global feature. Voxelization of point cloud [16] generates 3D tensors, on which 3D convolution can be applied. Another widely used method is spherical representation [15, 17, 18, 6], which projects point cloud onto a spherical plane. The obtained depth image can then be fed into common convolutional networks. SqueezeSeg [15] is a typical and efficient model of this kind. Regrettably, the temporal relationship between frames is ignored when they are considered separately. Based on spherical representation, our work captures the spatiotemporal cues in sequences of depth images.
### _Dynamic point cloud perception_
Due to the difficulty of annotation, most of the dynamic point cloud datasets are generated with simulation. Kim et al. developed a pipeline for single frame [19][20] and sequence of point cloud data generation [7]. SYNTHIA [8] dataset provided many road scene videos generated by a game engine. Behley et al. [9] annotated all the frames of KITTI's [21] odometry dataset and released the SemanticKITTI dataset.
The methods in static point cloud perception also apply to the dynamic condition with some adjustment. Meteornet [10] is a point-wise method based on PointNet. They extended the spatial neighborhood to a temporal and spatial neighborhood when extracting the local features of points. The features were then used for downstream tasks: action recognition, segmentation, and scene flow estimation. PointRNN [22] applies recurrent models to aggregate local features from different time steps. As for the voxel-based method, Luo et al. [5] constructed 4D tensors by concatenating the occupancy grid of point cloud along the time axis. They then performed 3D convolution on the temporal dimension to capture motion features, which benefited the detection, tracking, and motion forecasting tasks. Choy et al. [11] restrained the computation cost of high-dimensional convolution by substituting the 2D convolution layers of UNet [23] with sparse 4D convolution layers. However, when it comes to the accuracy of pedestrian segmentation, the improvement relative to the static method was not so significant. This indicates that extracting temporal features in such an implicit manner might fail. Compared with the above approaches, we leverage temporal information more explicitly.
### _Video Object Segmentation_
In unsupervised video object segmentation task, there is no reference mask as guidance. Therefore, motion cues are necessary when searching for foreground objects. As a representative kind of motion field representation, optical flow [24] is frequently used. Some researchers [25, 26] strengthened the feature by concatenating the warped feature map of previous frames to that of the current frame. Segflow [12] constructed a two-stream network that predicts segmentation and optical flow simultaneously. They proved that the bidirectional communication of feature maps between two streams can boost the performance of both tasks. Our method is inspired by this work, forcing the network to learn motion cues from the velocity estimation task.
## III Proposed Method
To segment humans in the dynamic point cloud with the help of motion information, we construct a two-branch network, one is the segmentation branch, and the other is the velocity estimation branch. It takes a sequence of depth images as input and predicts the class and velocity of each point in the last frame at one stage. In the following, we first explain the structure of each branch and the communication of temporal and spatial features between them. Then we briefly introduce how we generate sequential LiDAR data with segmentation and velocity annotation. The overall architecture of the proposed model is shown in Fig. 2.
### _Segmentation Branch_
The input of the segmentation branch is the last frame of the sequence. Therefore, it only extracts the spatial features and the size of the input tensor is \\(h\\)\\(\\times\\)\\(w\\)\\(\\times\\)\\(1\\), where \\(h\\) and \\(w\\) are the height and width of the range image. The output is the predicted probability distribution over classes for every pixel in this frame.
The segmentation branch has an hour-glass-shaped structure, consisting of a contraction section, an expansion section and skip connections. Considering that the resolution in the vertical direction is much lower than that in the horizontal direction, in the contraction section, the input frame is only downsampled in the horizontal direction. In our case, every two convolutional layers are followed by a max pooling layer with pool size [12], and the kernel size of convolution is \\(3\\)\\(\\times\\)\\(3\\). After three downsampling operations, the spatial feature maps with a size of \\(h\\)\\(\\times\\)\\(\\frac{1}{8}w\\)\\(\\times\\)\\(C\\) are upsampled back to the original image resolution in the expansion section. \\(C\\) denotes the number of feature channels. In order to refine the segmentation result, the feature maps with size of \\(h\\)\\(\\times\\)\\(w\\)\\(\\frac{1}{8}C\\), \\(h\\times\\frac{1}{2}w\\times\\frac{1}{4}C\\), \\(h\\times\\frac{1}{4}w\\times C\\) before pooling layers in contraction section is added to those of the same size in the expansion section via skip connection. The channel number \\(C\\) equals to 512. The activation function of the last layer is softmax.
### _Velocity Estimation Branch_
The velocity estimation branch takes multiple consecutive frames as input, extracting temporal features. The size of the input tensor is \\(n\\times h\\times w\\times 1\\), where \\(n\\) represents the number of input frames. The output is the predicted velocity for every pixel in the last frame.
Each input frame first goes through a weight shared feature extraction network. Its backbone is fully convolutional network [27]. The outputs are \\(n\\) feature maps of corresponding input frames, the size of which is \\(h\\times\\frac{1}{16}w\\times\\frac{1}{8}C\\).
Since velocity is one kind of motion information, the feature maps in this branch are supposed to encode motion cues. Therefore, we concatenate them along the channel axis and regard the generated tensor as temporal feature maps. In order to make use of these features in the process of segmentation, we propagate the temporal feature maps to the segmentation branch. Specifically, they are up-sampled to match the size of spatial features in the expansion section and concatenated, illustrated by the green arrows in Fig. 2. In reverse, the predicted label map from the segmentation branch is also concatenated to the temporal feature maps as guidance. Finally, the last layer with linear activation decodes them into the \\(h\\times w\\times m\\) velocity map after several convolutional layers. \\(m\\) denotes the number of dimensions of the velocity vector, which in our case is two to represent the horizontal plane.
### _Loss Function_
For the loss function of the segmentation branch, we use the pixel-wise categorical cross-entropy. For the loss function of the velocity estimation branch, we use Mean Square Error (MSE). Hence, the total loss used in the optimization process is the combination of segmentation loss and velocity estimation loss for the background and human. The losses for background and human velocity estimation are computed separately. The defected pixels are not considered in the calculation.
Given \\(n\\) input depth maps \\(\\mathbf{d}_{t}\\), \\(\\mathbf{d}_{t-1}\\), \\(\\cdots\\), estimated label map and velocity map for \\(t\\)-th frame can be expressed as
\\[[\\hat{\\mathbf{l}}_{t}|\\hat{\\mathbf{v}}_{t}] = \\mathbf{f}(\\mathbf{d}_{t},\\cdots,\\mathbf{d}_{t-n+1};\\mathbf{\\theta})\\,, \\tag{1}\\]
where \\(\\hat{\\mathbf{l}}_{t}\\) is estimated label map for the \\(t\\)-frame, \\(\\hat{\\mathbf{v}}_{t}\\) is estimated velocity map for the \\(t\\)-frame, and \\(\\mathbf{f}(\\cdots;\\mathbf{\\theta})\\) represents the network with weights \\(\\mathbf{\\theta}\\). The total loss is defined as below:
\\[L_{t} = \\frac{\\lambda_{c}}{N_{c}}\\sum_{i}\\mathrm{CE}(\\hat{l}_{t,i},l_{t,i} )+\\frac{\\lambda_{h}}{N_{h}}\\sum_{j=\\{i|l_{t,i}=h\\}}||\\hat{\\mathbf{v}}_{t,j}-\\mathbf{v} _{t,j}||_{2}^{2} \\tag{2}\\] \\[+\\frac{\\lambda_{b}}{N_{b}}\\sum_{j=\\{i|l_{t,i}=b\\}}||\\hat{\\mathbf{v}}_ {t,j}-\\mathbf{v}_{t,j}||_{2}^{2}\\,,\\]
where \\(\\mathrm{CE}\\) represents a cross-entropy, \\(l_{t,i}\\) is a true label for \\(i\\)-th pixel of \\(t\\)-th frame, \\(||\\cdot||_{2}^{2}\\) represents L2 norm, \\(\\mathbf{v}_{t,j}\\) is a true velocity for \\(i\\)-th pixel of \\(t\\)-th frame, \\(h\\) and \\(b\\) represent human and background labels, \\(N_{c}\\), \\(N_{h}\\), and \\(N_{b}\\) are the number of
Fig. 2: The proposed network architecture which consists of the segmentation branch and the velocity estimation branch. They communicate spatial and temporal features in the expansion section. Frame \\(t\\) denotes the current scan, while frame \\(t-1\\) to \\(t-n+1\\) denote the last \\(n-1\\) scans.
all pixels, human label pixels, and background pixels, and \\(\\lambda_{c}\\), \\(\\lambda_{h}\\), and \\(\\lambda_{b}\\) are hyper-parameters.
### _Data Generation_
As an extension of 2D optical flow, scene flow [28] denotes the translation vectors of points in 3D space. Unfortunately, as far as we know, there exists so far no point cloud dataset that provides both segmentation and scene flow annotation, due to the difficulty of labeling. Alternatively, we generate sequential LiDAR data with pixel-wise segmentation information and pixel-wise velocity information for training and evaluation, using the pipeline proposed in [7].
They are generated by combining the depth map of real background data and the depth map of synthetic human walking models [29]. At the same time, the speed of LiDAR and human models is recorded. Consequently, each point is automatically labeled with its class and velocity. The points are classified into background and human. The velocity of a human point denotes the relative walking speed of the human model it belongs to, with respect to the origin of LiDAR at \\(x,y\\) axis; the velocity of a background point is the relative speed of the background, i.e. \\(-v_{LiDAR}\\). The forward direction of the LiDAR is designated as \\(x\\) axis.
## IV Experiments
We evaluate the proposed network on a large-scale dataset for human segmentation. The performance is compared with multi-frame methods and a single-frame method. We also perform ablation studies on the influence of the velocity estimation branch and the length of the input sequence.
### _Dataset and Training details_
The Automatic Labeled LiDAR Sequence dataset is composed of 1108 generated data sequences and 100 real data sequences. The background data of generated data is collected in the 3rd floor of Miriakan [30] using an HDL-32E LiDAR, and the real data is collected at the same place with pedestrians walking around. Each sequence is composed of 32 frames, the size of each frame is 32 \\(\\times\\) 1024. The data is accessible at [http://www.ok.sc.e.titech.ac.jp/res/LHD/](http://www.ok.sc.e.titech.ac.jp/res/LHD/)
There are total 1108 generated sequences and 100 real sequences in our data set. We use 900 generated sequences for training, 100 generated sequences for validation. Then the networks are tested with the remaining 108 generated sequences and 100 real sequences. We train the network using Adam optimizer with an initial learning rate of 3e-5 and a decay rate of 3\\(\\times\\)\\(10^{-5}\\). The batch size is set to 1 and 250 steps compose an epoch. The network is trained for 1000 epochs. The hyper-parameters \\(\\lambda_{c}\\), \\(\\lambda_{h}\\), \\(\\lambda_{b}\\) in Eq. 2 are set as \\(1\\times\\)\\(10^{5}\\), 1, and 1001, in order to give more weights on the human points than background points.
### _Evaluation Metrics_
In consideration of the overwhelming number of background points, the performance is evaluated with intersection-over-union (IoU) score of human class, where points predicted as humans are regarded as positive. Hence, IoU is calculated with the total number of True Positives (TP), False Positives (FP), and False Negatives (FN) as below:
\\[IoU=\\frac{TP}{TP+FP+FN} \\tag{3}\\]
Note that defected pixels are not considered for evaluation.
### _Ablation Study_
We conduct ablation studies to verify the effect of components in the velocity estimation branch. Furthermore, the IoU score is calculated in several distance ranges in order to verify the effect at different distances. For instance, '0 to 4 [m]' means that only points 0 [m] and 4 [m] far from LiDAR are considered in the calculation.
When studying the effect of velocity estimation, we remove the last several layers for decoding the velocity map, i.e., the model only predicts segmentation result. When studying the effect of temporal feature propagation, we cut off the skip connections from the velocity estimation branch to the segmentation branch. As shown in Table I, in the range of '0 to \\(\\infty\\) [m]', temporal feature propagation manages to bring about 9% improvement on generated data and 2.4% improvement on real data, respectively. Especially, in the '8 to \\(\\infty\\) [m]' range of generated data, velocity estimation and temporal feature propagation increase the IoU score by 7.9% and 15.7%, which indicates that motion cues help detection in the distance, where the point cloud is very sparse.
As shown in Fig. 5, the network can estimate the human segmentation and pixelwise velocity map. Then, we can derive the velocity map of the segmented area. The examples of human velocity estimation are illustrated in Fig. 8. According to the Fig. 8, the estimated velocities show similar tendency with ground truth.Asaconsequence,weconcludethatvelocityestimation with LiDAR only is a feasible task.
\\begin{table}
\\begin{tabular}{c|c|c|c|c} \\hline \\multicolumn{2}{c|}{Method} & \\multicolumn{2}{c}{Proposed} \\\\ \\hline \\multicolumn{2}{c|}{Number of frames} & \\multicolumn{2}{c}{4} \\\\ \\hline \\multicolumn{2}{c|}{Velocity estimation} & & \\(\\surd\\) & \\(\\surd\\) \\\\ \\hline \\multicolumn{2}{c|}{Temporal feature propagation} & \\(\\surd\\) & & \\(\\surd\\) \\\\ \\hline \\multirow{4}{*}{\\begin{tabular}{c} Generated \\\\ data \\\\ \\end{tabular} } & 0 to \\(\\infty\\) & 81.16 & 77.11 & **86.08** \\\\ \\cline{2-5} & \\begin{tabular}{c} Criteria \\\\ range(m) \\\\ \\end{tabular} & 0 to 4 & 92.21 & 90.98 & **94.06** \\\\ \\cline{2-5} & \\begin{tabular}{c} 4 to 8 \\\\ 8 to \\(\\infty\\) \\\\ \\end{tabular} & 63.25 & 59.07 & **72.79** \\\\ \\cline{2-5} & & 8 to \\(\\infty\\) & 33.65 & 25.75 & **41.44** \\\\ \\hline \\multirow{4}{*}{\\begin{tabular}{c} Real \\\\ data \\\\ \\end{tabular} } & 0 to \\(\\infty\\) & 64.23 & 64.87 & **67.29** \\\\ \\cline{2-5} & \\begin{tabular}{c} Criteria \\\\ range(m) \\\\ \\end{tabular} & 0 to 4 & 74.59 & 75.73 & **76.93** \\\\ \\cline{1-1} \\cline{2-5} & \\begin{tabular}{c} 4 to 8 \\\\ 8 to \\(\\infty\\) \\\\ \\end{tabular} & 49.84 & 51.83 & **53.79** \\\\ \\cline{1-1} \\cline{2-5} &
\\begin{tabular}{c} 8 to \\(\\infty\\) \\\\ \\end{tabular} & 23.43 & **25.06** & 24.76 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: Ablation study on components of velocity estimation branch. The performance is evaluated with IoU(%) score of human class in different ranges.
## References
Fig. 3: Comparison of segmentation result on generated data and real data. In 2D plots, white area denotes True Positive and black area denotes True Negative, while red area denotes False Positive and yellow area denotes False Negative.
### _Effect of the number of the input frames_
The relationship between the number of input frames and the performance is shown in Table II. We evaluate the proposed model with 1-, 2-, 4-, 8-, 16- frame sequences as input. Note that there is no velocity estimation branch when the input is a single frame.
When the length is not larger than eight frames, overall, accuracy increases along with the increase in the number of frames. On generated data, performance in all ranges is improved, and the IoU score in the range of '0 to \\(\\infty\\) [m]' with an 8-frame sequence input is 6.8% higher than with a single frame input. This is attributed to the significantly better prediction accuracy in the range further than 4[m], which ulteriorly proves the advantage of temporal information in dynamic point cloud perception. This tendency is not so clear when it comes to real data. It might be caused by the domain gap between generated data and real data or inaccurate annotations in real data. However, an increase in number of frames also results in an increase in inference time and the accuracy drops significantly when the length reaches 16. Therefore, we consider 4 frame as an appropriate length.
### _Comparison_
The performance and runtime of the proposed network is compared to the model proposed by Kim and Meteornet whose input is a 4-frame sequence, as well as SqueezeSeg whose input is a single frame. We implement the SqueezeSeg without the CRF layer because it has better performance in human segmentation according to their paper. The results are shown in Table III. Compared to the model from Kim, the proposed network improves the IoU score of the human class by nearly 17% on generated data and by nearly 6% on real data. It also accomplishes better accuracy than Meteornet and SqueezeSeg. This result shows the effectiveness of our model. Note that compared to the KITTI dataset used by SqueezeSeg and the SYNTHIA dataset used by Meteornet, our data contain neither intensity information nor color information. We used a single GTX 2080 Ti GPU and Intel Core i9 CPU. The run time of projection-based methods is shorter than the point-based method, Meteornet. For 4-frame input sequence, the proposed model runs at 51 ms/seq.
Figure 3 is the 2D and 3D visualization of the segmentation results. Please refer to Figure 1 for the 3D plot of prediction on generated data. It can be seen that the proposed model is able to recognize nearly all the human points of generated data with a few False Positives. On real data that are more difficult, the proposed model demonstrates its ability to restrain False Negative more obviously, which is important in the real robotics application.
\\begin{table}
\\begin{tabular}{c|c|c|c|c|c|c} \\hline \\multicolumn{2}{c|}{Method} & \\multicolumn{5}{c}{Proposed} \\\\ \\hline \\multicolumn{2}{c|}{Number of frames} & 1 & 2 & 4 & 8 & 16 \\\\ \\hline \\multirow{4}{*}{Generated data} & \\multirow{4}{*}{Criteria range (m)} & 0 to \\(\\infty\\) & 82.01 & 83.00 & 86.08 & **88.89** & 80.73 \\\\ \\cline{3-7} & & 0 to 4 & 91.75 & 93.17 & 94.06 & **95.10** & 90.61 \\\\ \\cline{3-7} & & 4 to 8 & 64.02 & 67.51 & 72.79 & **78.96** & 78.96 \\\\ \\cline{3-7} & & 8 to \\(\\infty\\) & 38.05 & 34.52 & 41.44 & **46.52** & 31.96 \\\\ \\hline \\multirow{4}{*}{Real data} & \\multirow{4}{*}{Criteria range (m)} & 0 to \\(\\infty\\) & 65.74 & 66.22 & 67.29 & **67.31** & 63.60 \\\\ \\cline{3-7} & & 0 to 4 & 75.85 & 77.20 & 76.93 & **77.43** & 74.28 \\\\ \\cline{1-1} \\cline{3-7} & & 4 to 8 & 52.50 & 52.77 & **53.79** & 51.53 & 46.01 \\\\ \\cline{1-1} \\cline{3-7} & & 8 to \\(\\infty\\) & 23.39 & 23.57 & **24.76** & 22.35 & 23.61 \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: Experiments on the effect of the number of input frames.
Fig. 4: Velocity map of human area.
\\begin{table}
\\begin{tabular}{c|c|c|c|c} \\hline \\multirow{2}{*}{Method} & \\multicolumn{2}{c|}{SqueezeSeg (w/o CRF) [15]} & Kim [7] & Meteornet [10] & Proposed \\\\ \\hline Input & depth & depth & xyz & depth \\\\ \\cline{2-5} \\# of frames & 1 & 4 & 4 & 4 \\\\ \\hline Generated & 58.87 & 68.88 & 73.87 & **86.08** \\\\ Real & 11.35 & 58.69 & 52.20 & **67.29** \\\\ \\hline Run time & \\multirow{2}{*}{**6**} & \\multirow{2}{*}{46} & \\multirow{2}{*}{840} & \\multirow{2}{*}{51} \\\\ (ms) & & & & \\\\ \\hline \\end{tabular}
\\end{table} TABLE III: Comparison of networks on the automatic labeled LiDAR sequence dataset for human segmentation. The performance is evaluated with IoU [%] score of human class in the range of ’0 to \\(\\infty\\) [m]’.
### _Velocity estimation_
Our main goal is human segmentation, while the velocity map is also obtained as a by-product. The ground truth and prediction of velocity map of human area is shown in Figure 4, where the tendency of estimated velocities is similar to that of ground truth. This indicates that the velocity estimation branch captures motion information, which is used as cues for segmentation.
## V Conclusion
In this work, we have proposed a two-branch network for dynamic point cloud segmentation, which achieves high accuracy on a dataset for human segmentation. It is able to extract and utilize temporal information via feature propagations between the segmentation and the velocity estimation branches. According to the experimental results, we first conclude that temporal information contained in sequential data is beneficial to segmentation because motion cues can compensate for the sparseness of the input point cloud. Therefore, it is especially helpful for small and far object detection. Secondly, an appropriate increase in the length of sequence can improve the performance by producing more motion cues, while the trade-off between accuracy and computation cost exists.
For future improvement, we would consider how to improve the generalization ability of the model, so that it can be applied to more datasets. Also, we will try to reduce the computation load caused by the increase in the number of frames.
## References
* [1]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [2]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [3]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [4]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [5]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [6]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [7]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [8]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [9]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [10]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [11]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun (2013) Vision meets robotics: the kitti dataset. The International Journal of Robotics Research32 (11), pp. 1231-1237. Cited by: SSII-A.
* [12]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [13]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [14]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [15]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [16]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [17]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [18]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [19]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [20]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [21]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [22]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Semantic video segmentation by gated recurrent flow propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6819-6828. Cited by: SSII-A.
* [23]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [24]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [25]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [26]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [27]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [28]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [29]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [30]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [31]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [32]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [33]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [34]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [35]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [36]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [37]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [38]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [39]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [40]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [41]A. Milioto, I. Vizzo, J. Behley, and C. Stachniss (2019) Rangenet++: fast and accurate lidar semantic segmentation. In Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), Cited by: SSII-A.
* [42]A. M | Consecutive LiDAR scans compose dynamic 3D sequences, which contain more abundant information than a single frame. Similar to the development history of image and video perception, dynamic 3D sequence perception starts to come into sight after inspiring research on static 3D data perception. This work proposes a spatio-temporal neural network for human segmentation with the dynamic LiDAR point clouds. It takes a sequence of depth images as input. It has a two-branch structure, i.e., the spatial segmentation branch and the temporal velocity estimation branch. The velocity estimation branch is designed to capture motion cues from the input sequence and then propagates them to the other branch. So that the segmentation branch segments humans according to both spatial and temporal features. These two branches are jointly learned on a generated dynamic point cloud dataset for human recognition. Our works fill in the blank of dynamic point cloud perception with the spherical representation of point cloud and achieves high accuracy. The experiments indicate that the introduction of temporal feature benefits the segmentation of dynamic point cloud. | Condense the content of the following passage. | 207 |
Max Hugel
Hausdorff Center for Mathematics and Institute for Numerical Simulation, University of Bonn, Bonn, Germany, [email protected]
Holger Rauhut
Hausdorff Center for Mathematics and Institute for Numerical Simulation, University of Bonn, Bonn, Germany, [email protected]
Thomas Strohmer
Department of Mathematics, University of California at Davis, Davis CA, [email protected]
## 1 Introduction
Our aim is to detect the locations and reflectivities of remote targets (point scatterers) by sending probing signals from an antenna array and recording the reflected signals. This type of inverse scattering -- which has applications in radar, sonar, medical imaging, and microscopy -- is a rather challenging numerical problem. Typically the solution is not unique and instabilities in the presence of noise are a common issue. Standard techniques, such as matched field processing [30] or time reversal methods [1, 18, 19], work well for the detection of very few, well separated targets. However, when the number of targets increases and/or some targets are adjacent to each other, these methods run into severe problems. Moreover, these methods have major difficulties when the dynamic range between the reflectivities of the targets is large.
In [14] a compressive sensing based approach to the inverse scattering problem was proposed to overcome the ill-posedness of the problem by utilizing the sparsity of the target scene. Here, sparsity is meant in the sense that the targets typically occupy only a small fraction of the overall region of interest. As common in compressive sensing [13, 4, 15, 26], randomness is used and in this setup it is realized by placing the antennas at random locations on a square. It was proved in [14] that under certain conditions it is possible to exactly recover the locations and reflectivities of the targets from noise-free measurements by solving an \(\ell_{1}\)-regularized optimization problem, also known as _basis pursuit_ in the compressive sensing literature.
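For concreteness, recovery via \(\ell_{1}\)-minimization refers to the standard basis pursuit program
\[\min_{z\in\mathbb{C}^{N}}\|z\|_{1}\quad\text{subject to}\quad Az=y,\]
and, for noisy measurements with noise level \(\eta\), to its robust variant
\[\min_{z\in\mathbb{C}^{N}}\|z\|_{1}\quad\text{subject to}\quad\|Az-y\|_{2}\leq\eta,\]
where \(A\) denotes the sensing matrix and \(y\) the vector of measurements introduced in Section 2 below.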
While the framework in [14] can lead to significant improvements over traditional methods, it also has several limitations. For instance, the main theoretical result in that article requires the targets to be randomly spaced, a condition that is quite restrictive and does not match well with practical scenarios. Also the conditions on the number of targets that can be recovered are far from optimal. In this paper we will overcome most of these limitations, thus leading to a theoretical framework that is better adapted to practical applications. In particular, we also show that recovery is stable with respect to measurement noise and under passing from sparse to approximately sparse scenes. Figure 1 depicts the reconstruction of a sparse scene of 100 targets in 6400 resolution cells with reflectivities in the dynamic range from 1 to 8 from 900 noisy measurements, that is, with 30 antennas. Both the detection performance and the approximation of the true values of the reflectivities are very good.

Figure 1: (a) Scene with 100 targets in 6400 resolution cells. (b) Reconstruction from 900 noisy measurements with SNR of 20 dB.
What makes the inverse scattering problem with antenna arrays challenging from a compressive sensing viewpoint is that the associated sensing matrix is not a random matrix with independent rows or columns, but the matrix entries are random variables which are coupled across rows and columns. This in turn means that standard proof techniques from the compressive sensing literature cannot be applied readily and results developed for structured sensing matrices [26] are of limited use in our case. In fact, it is an open problem whether the by now classical and often used restricted isometry property holds for the random scattering matrix arising in our context. Instead we provide high probability recovery bounds for a fixed vector and a random choice of the scattering matrix (also referred to as nonuniform recovery guarantees). We believe that some of the tools that we develop in this paper will potentially be useful in other compressive sensing scenarios, where the sensing matrix has coupled rows and columns.
Our paper is organized as follows. In Section 2 we describe the setup of the imaging problem and state our main results. As preparation for proving our main theorems, we derive a general sparse recovery result in Section 3 and condition number estimates for certain random matrices in Section 4. In Section 5 we prove the recovery of sparse vectors for sensing matrices with dependent rows and columns which are associated with a class of bounded orthonormal systems. This class of matrices includes the sensing matrix arising in the inverse scattering problem as a special case. On the other hand, this result assumes that the non-zero coefficients of the signal to be recovered have random phases. In Section 6 we remove the assumption of random phases and show sparse recovery for the inverse scattering setup for signals with fixed deterministic phases. In Section 7 we illustrate our theoretical results by numerical simulations.
### Acknowledgements
M.H. and H.R. acknowledge support by the Hausdorff Center for Mathematics and by the ERC Starting Grant SPALORA StG 258926. T.S. was supported by the National Science Foundation and DTRA under grant DTRA-DMS 1042939, and by DARPA under grant N66001-11-1-4090. Parts of this manuscript have been written during a stay of H.R. at the Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis. T.S. thanks Haichao Wang for useful comments on an early version of this manuscript. The authors also wish to thank Axel Obermeier and the anonymous reviewers for helpful comments and corrections.
## 2 Problem formulation and main results
### Array imaging setup and problem formulation
Suppose an array of \\(n\\) transducers is located in the square \\([0,B]^{2}\\), where \\(B>0\\) is the array aperture. The spatial part of a wave of wavelength \\(\\lambda>0\\) emitted from some point source \\(b\\in[0,B]^{2}\\) and recorded at another point \\(r\\in\\mathbb{R}^{3}\\) is given by the Green's function \\(G\\) of the Helmholtz equation,
\\[G(r,b):=\\frac{\\exp\\left(\\frac{2\\pi i}{\\lambda}\\left\\|r-b\\right\\|_{2}\\right)}{4 \\pi\\left\\|r-b\\right\\|_{2}}. \\tag{2.1}\\]
Here and in the following \\(\\|\\cdot\\|_{p}\\) refers to the usual \\(\\ell_{p}\\)-norm.
The approximation \(\widehat{G}\) of the Green's function \(G\) introduced in (2.3) satisfies the convenient orthonormality relation
\\[\\frac{1}{B^{2}}\\int_{[0,B]^{2}}\\widehat{G}(b,r_{m})\\overline{ \\widehat{G}(b,r_{\\ell})}db=\\frac{\\exp\\left(\\frac{\\pi i}{\\lambda z_{0}}\\left(\\|(x_ {m},y_{m})\\|_{2}^{2}-\\|(x_{\\ell},y_{\\ell})\\|_{2}^{2}\\right)\\right)}{B^{2}}\\] \\[\\times\\int_{[0,B]}\\int_{[0,B]}\\exp\\left(-\\frac{2\\pi i}{\\lambda z_ {0}}(x_{m}-x_{\\ell})\\xi\\right)\\exp\\left(-\\frac{2\\pi i}{\\lambda z_{0}}(y_{m}-y_ {\\ell})\\eta\\right)d\\xi d\\eta\\] \\[=\\delta_{\\ell m}. \\tag{2.5}\\]
It is for this relation to hold that we make the approximation (2.3).
Let us now describe the scattering matrix. Assume we have a vector \\((x_{j})_{j\\in[N]}\\in\\mathbb{C}^{N}\\) of reflectivities on the resolution grid. We sample \\(n\\) antenna positions \\(b_{1},\\dots,b_{n}\\in[0,B]^{2}\\) independently at random according to the uniform distribution on \\([0,B]^{2}\\). If antenna element \\(b_{j}\\in[0,B]^{2}\\) transmits and \\(b_{k}\\in[0,B]^{2}\\) receives, then we model the echo \\(y_{jk}\\) as
\\[y_{jk}=\\sum_{\\ell=1}^{N}\\widehat{G}(b_{j},r_{\\ell})\\widehat{G}(r_{\\ell},b_{k}) x_{\\ell},\\quad(j,k)\\in[n]^{2}. \\tag{2.6}\\]
This is called the _Born approximation_ [2]. It amounts to discarding multipath scattering effects. If the transmit-receive mode is such that one antenna element transmits at a time and the whole aperture receives the echo, then the appropriately scaled sensing matrix \(A\in\mathbb{C}^{n^{2}\times N}\) is given entrywise by
\\[A_{(j,k),\\ell}:=\\widehat{G}(b_{j},r_{\\ell})\\widehat{G}(r_{\\ell},b_{k}),\\quad (j,k)\\in[n]^{2},\\ell\\in[N]. \\tag{2.7}\\]
Then \\(y=Ax\\) by (2.6). Due to the randomness in the \\(b_{k}\\), \\(k\\in[n]\\), the matrix \\(A\\) is a (structured) random matrix with coupled rows and columns.
In many scenarios the number of targets is small compared to the grid size. This naturally leads to sparsity in the vector \(x\in\mathbb{C}^{N}\) of reflectivities, \(\|x\|_{0}:=\#\{\ell:x_{\ell}\neq 0\}\leq s\), where \(s\ll N\). Compressive sensing suggests that in such a scenario, we can recover \(x\) from undersampled measurements \(y=Ax\in\mathbb{C}^{n^{2}}\) when \(n^{2}\ll N\). We note that \(A\) contains only \(n(n+1)/2\) different rows due to the symmetries in the sensing setup. Our goal is to determine a good bound on the required minimal number of antennas \(n\) in order to ensure recovery of an \(s\)-sparse scene. A small number of antennas has clear advantages such as low costs of imaging hardware.
### Compressive sensing
We briefly describe the basics of compressive sensing in order to place our results outlined below into context. Given measurements \\(y=Ax\\in\\mathbb{C}^{m}\\) of a sparse vector \\(x\\in\\mathbb{C}^{N}\\), where \\(A\\in\\mathbb{C}^{m\\times N}\\) is the so-called measurement matrix, we would like to reconstruct \\(x\\) in the underdetermined case that \\(m\\ll N\\) by taking into consideration the sparsity.
The naive approach of \\(\\ell_{0}\\)-minimization
\\[\\min_{z\\in\\mathbb{C}^{N}}\\|z\\|_{0}\\quad\\text{ subject to }Az=y \\tag{2.8}\\]
is NP-hard [21]. Hence, several tractable alternatives have been proposed, including \(\ell_{1}\)-minimization, also called basis pursuit [10, 13, 4],
\\[\\min_{z\\in\\mathbb{C}^{N}}\\|z\\|_{1}\\quad\\text{ subject to }Az=y. \\tag{2.9}\\]This can be seen as a convex relaxation of (2.8) and can be solved via efficient convex optimization methods [3, 9]. It is by now well-understood that \\(\\ell_{1}\\)-minimization can recover sparse vectors under appropriate conditions. Remarkably, random matrices provide (near-)optimal measurement matrices in this context and good deterministic constructions are lacking to date, see [26, 15] for a discussion. For instance, an \\(m\\times N\\) Gaussian random matrix \\(A\\) ensures exact (and stable) recovery of all \\(s\\)-sparse vectors \\(x\\) from \\(y=Ax\\) using \\(\\ell_{1}\\)-minimization (and other types of algorithms) with high probability provided
\\[m\\geq Cs\\log(N/s), \\tag{2.10}\\]
where \\(C>0\\) is a universal constants. This bound is optimal [13, 16]. It is crucial that \\(m\\) is allowed to scale linearly in \\(s\\). The log-factor cannot be removed. Recovery is stable under passing to approximately sparse vectors and under adding noise to the measurements. In the latter case, one may rather work with the noise-constrained \\(\\ell_{1}\\)-minimization problem
\\[\\min_{z\\in\\mathbb{C}^{N}}\\|z\\|_{1}\\quad\\text{ subject to }\\|Az-y\\|_{2}\\leq\\eta. \\tag{2.11}\\]
Random partial Fourier matrices [4, 7, 29, 25, 26] (that is, random row-submatrices of the discrete Fourier matrix) and other types of structured random matrices [26, 27] also provide \\(s\\)-sparse recovery under similar conditions as in (2.10) (with additional log-factors).
Some of the mentioned recovery results are derived using the restricted isometry property (RIP) [7, 6]. This leads to uniform guarantees in the sense that once the matrix is selected, then with high probability _every_ \(s\)-sparse vector can be recovered from \(y=Ax\). The RIP, however, is a rather strong condition which is sometimes hard to verify. In particular, it remains open whether it holds for our random matrix in (2.7). Instead, we may work with weaker conditions, which ensure nonuniform recovery in the sense that a fixed \(s\)-sparse vector is recovered with high probability using a random draw of the matrix. Our result below for the structured random matrix in (2.7) is based on the extension of certain general recovery conditions for \(\ell_{1}\)-minimization [17, 32, 5] to stable recovery using a so-called dual certificate, see Section 3.
### Main results
We define the error of best \\(s\\)-term approximation in the \\(\\ell_{1}\\)-norm by
\\[\\sigma_{s}(x)_{1}:=\\inf_{\\|z\\|_{0}\\leq s}\\|x-z\\|_{1}.\\]
Furthermore, we will assume throughout that the _aperture condition_
\\[\\rho:=\\frac{d_{0}B}{\\lambda z_{0}}\\in\\mathbb{N} \\tag{2.12}\\]
holds, which can be accomplished by an appropriate choice of the meshsize \(d_{0}\). The remaining notation is as in Section 2.1. We will refer to the matrix \(A\in\mathbb{C}^{n^{2}\times N}\) in (2.7) with the antenna positions \(b_{1},\ldots,b_{n}\) selected independently and uniformly at random from \([0,B]^{2}\) as the _random scattering matrix_. Note that the aperture condition (2.12) implies that \(\mathbb{E}A^{*}A=n^{2}\operatorname{Id}\) by a similar computation as in (2.5), that is, in expectation the matrix \(A^{*}A\) behaves nicely, which will be crucial in the proof. Let us now state our nonuniform recovery result.
**Theorem 2.1**.: _Let \\(x\\in\\mathbb{C}^{N}\\) and \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) be a draw of the random scattering matrix. Let \\(s\\in\\mathbb{N}\\) be some sparsity level. Suppose we are given noisy measurements \\(y=Ax+e\\in\\mathbb{C}^{n^{2}}\\) with \\(\\left\\|e\\right\\|_{2}\\leq\\eta n\\). If, for \\(\\varepsilon>0\\),_
\\[n^{2}\\geq Cs\\log^{2}\\left(\\frac{cN}{\\varepsilon}\\right) \\tag{2.13}\\]
_with universal constants \\(C,c>0\\), then with probability at least \\(1-\\varepsilon\\), the solution \\(\\widehat{x}\\in\\mathbb{C}^{N}\\) to the noise-constrained \\(\\ell_{1}\\)-minimization problem_
\\[\\min_{z\\in\\mathbb{C}^{N}}\\left\\|z\\right\\|_{1}\\quad\\text{ subject to }\\|Az-y\\|_{2}\\leq\\eta n. \\tag{2.14}\\]
_satisfies_
\\[\\left\\|x-\\widehat{x}\\right\\|_{2}\\leq C_{1}\\sqrt{s}\\eta+C_{2}\\sigma_{s}(x)_{1}. \\tag{2.15}\\]
_The constants satisfy \\(C\\leq\\left(800e^{3/4}\\right)^{2}\\approx 2.87\\cdot 10^{6}\\), \\(c\\leq 6\\), \\(C_{1}\\leq 4(1+\\sqrt{2})+8\\sqrt{3}\\approx 23.513\\), \\(C_{2}\\leq 4(1+\\sqrt{6})\\approx 13.798\\)._
**Remark 2.2**.:
* _The constants appearing in Theorem_ 2.1 _are quite large and reflect a worst case analysis. No attempt has been made to optimize the above bounds. In practice, much better bounds can be expected, see also the numerical results below._
* _The scaling of the noise level,_ \\(\\left\\|e\\right\\|_{2}\\leq\\eta n\\) _is natural because_ \\(e\\in\\mathbb{C}^{n^{2}}\\)_. Indeed, if we have a componentwise bound_ \\(|e_{j}|=|(Ax)_{j}-y_{j}|\\leq\\eta\\) _for all_ \\(j\\in[n]^{2}\\) _then it is satisfied._
* _The error bound (_2.15_) is slightly worse than the one we would get under the RIP. In fact, if_ \\(A\\) _has the RIP then the associated error bound improves the right hand side of (_2.15_) by a factor of_ \\(s^{-1/2}\\)___[_6_]__. Unfortunately, it is so far unknown whether the random scattering matrix_ \\(A\\) _obeys the RIP under a similar condition as (_2.13_), so that the error bound (_2.15_) is the best one can presently achieve._
* _If_ \\(x\\) _is_ \\(s\\)_-sparse,_ \\(\\sigma_{s}(x)_{1}=0\\)_, and if there is no noise,_ \\(\\eta=0\\)_, then (_2.15_) implies exact reconstruction,_ \\(\\widehat{x}=x\\)_, by equality-constrained_ \\(\\ell_{1}\\)_-minimization (_2.9_)._
* _We can specialize the error bound in the previous theorem for the case of Gaussian noise. To this end, assume that the components of_ \\(e\\in\\mathbb{C}^{n^{2}}\\) _are i.i.d. complex Gaussians with variance_ \\(\\eta^{2}\\)_, where the real and imaginary part of a complex Gaussian are independent real Gaussians with variance_ \\(\\eta^{2}/2\\)_. A standard calculation shows that the noise satisfies_ \\(\\left\\|e\\right\\|_{2}\\leq\\eta n\\log(1/\\varepsilon)\\) _with probability at least_ \\(1-\\varepsilon\\)_. Assuming that_ \\(e\\) _is independent of the matrix_ \\(A\\)_, it follows that the solution_ \\(\\hat{x}\\) _of noise-constrained_ \\(\\ell_{1}\\)_-minimization with bound_ \\(\\left\\|Az-y\\right\\|_{2}\\leq\\eta n\\log(1/\\varepsilon)\\) _satisfies_ \\[\\left\\|\\hat{x}-x\\right\\|_{2}\\leq C_{1}\\eta\\sqrt{s}\\log(1/\\varepsilon)+C_{2} \\sigma_{s}(x)_{1}\\] (2.16) _with probability at least_ \\((1-\\varepsilon)^{2}\\)_. The constants_ \\(C_{1}\\)_,_ \\(C_{2}\\) _satisfy the bounds of Theorem_ 2.1_._
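The noise scaling discussed in the remark (the bound \(\left\|e\right\|_{2}\leq\eta n\) and its Gaussian refinement) is easy to confirm empirically; a small sketch, with parameters of our choosing:

```python
import numpy as np

# For e in C^{n^2} with i.i.d. complex Gaussian entries of variance eta^2
# (real and imaginary parts each of variance eta^2 / 2), ||e||_2
# concentrates around eta * n, matching the natural scaling of the remark.
rng = np.random.default_rng(1)
n, eta, trials = 30, 0.5, 2000
e = eta / np.sqrt(2) * (rng.standard_normal((trials, n * n))
                        + 1j * rng.standard_normal((trials, n * n)))
norms = np.linalg.norm(e, axis=1)
print(norms.mean() / (eta * n))                      # approximately 1
print((norms <= eta * n * np.log(1 / 0.05)).mean())  # ~1 for epsilon = 0.05
```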
Theorem 2.1 holds for a fixed, deterministic \\(x\\in\\mathbb{C}^{N}\\). We define the _sign_ of a number \\(a\\in\\mathbb{C}\\) as
\[\operatorname{sgn}(a)=\begin{cases}\frac{a}{|a|}&\text{if }a\neq 0,\\ 0&\text{if }a=0.\end{cases}\]

For a vector \(x\in\mathbb{C}^{N}\) we denote by \(\operatorname{sgn}(x):=(\operatorname{sgn}(x_{j}))_{j\in[N]}\) the sign pattern of \(x\). On the way to the proof of Theorem 2.1, we will provide the easier result stated next for the case when the sign pattern of \(x\) restricted to its support set \(T\subset[N]\), \(\operatorname{sgn}(x)_{T}=\left(\operatorname{sgn}(x_{j})\right)_{j\in T}\), forms a Rademacher or a Steinhaus sequence. The latter amounts to assuming that the phases of the reflectivities are iid uniformly distributed on \([0,2\pi]\), which is a common assumption in array imaging and radar signal processing. Theorem 2.3 below actually establishes sparse recovery in a more general setting than the inverse scattering problem. It is not only applicable to the radar-type sensing matrices analyzed above, but to more general sensing matrices whose rows and columns are not independent, and whose entries are associated with a certain class of orthonormal systems. Its statement requires the notion of bounded orthonormal systems [26].
**Definition 2.1**.: _Let \(D\subset\mathbb{R}^{d}\) be a measurable set and \(\nu\) a probability measure on \(D\). A system of functions \(\left\{\Phi_{k}:D\to\mathbb{C}\right\}_{k\in[N]}\) is called a bounded orthonormal system (BOS) with respect to \((D,\nu)\) if_

\[\int_{D}\Phi_{k}(t)\overline{\Phi_{\ell}(t)}\,\nu(dt)=\delta_{k\ell}\]
_and if the functions are uniformly bounded by a constant \\(K\\geq 1\\),_
\\[\\max_{k\\in[N]}\\left\\|\\Phi_{k}\\right\\|_{\\infty}\\leq K.\\]
Let now \\(\\left\\{\\Phi_{\\ell}\\right\\}_{\\ell\\in[N]}\\) be a BOS on \\((D,\
u)\\) with bounding constant \\(K=1\\) and with the property that \\(\\left\\{\\Phi_{\\ell}^{2}\\right\\}_{\\ell\\in[N]}\\) is also a BOS on \\((D,\
u)\\). Note that due to the orthogonality relation, we then necessarily have \\(|\\Phi_{\\ell}(t)|=1\\) for all \\(t\\in D\\). The functions \\(\\Phi_{\\ell}(t)=\\widehat{G}(r_{\\ell},t)\\), \\(t\\in[0,B]^{2}\\) fall into this setup when the aperture condition (2.12) is satisfied, see also (2.5). Another example is provided by the Fourier system \\(\\left\\{\\Phi_{\\ell}\\right\\}_{\\ell\\in\\mathbb{Z}}\\), where \\(\\Phi_{\\ell}(t)=e^{2\\pi it}\\), \\(\\ell\\in\\mathbb{Z}\\), \\(t\\in[0,1]\\). For \\(b_{1},b_{2}\\in D\\), set
\\[v(b_{1},b_{2}):=\\left(\\overline{\\Phi_{\\ell}(b_{1})}\\,\\overline{\\Phi_{\\ell}(b_ {2})}\\right)_{\\ell\\in[N]}\\in\\mathbb{C}^{N}.\\]
Sample now \\(n\\) elements \\(b_{1},\\ldots,b_{n}\\) independently at random according to \\(\
u\\) from \\(D\\). Define the sampling matrix \\(A\\) via
\\[A:=\\left(v(b_{j},b_{k})^{*}\\right)_{j,k\\in[n]}\\in\\mathbb{C}^{n^{2}\\times N}, \\tag{2.17}\\]
so that \\(A\\) is the matrix with rows \\(v(b_{j},b_{k})^{*}\\), \\((j,k)\\in[n]^{2}\\). Note that with the system \\(\\Phi_{\\ell}(b)=\\widehat{G}(r_{\\ell},b)\\) we recover the random scattering matrix (2.7) in this way.
Now we can state our main result for random sign patterns. We recall that the entries of a (random) Rademacher vector \\(\\epsilon\\) are independent random variables that take the values \\(\\pm 1\\) with equal probability. Similarly, a Steinhaus vector is a random vector where all entries are independent and uniformly distributed on the complex torus \\(\\left\\{z\\in\\mathbb{C}:|z|=1\\right\\}\\).
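In simulations, the two sign models are generated as follows (a trivial sketch; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def rademacher(s):
    """Independent entries taking the values +1 and -1 with equal probability."""
    return rng.choice([-1.0, 1.0], size=s)

def steinhaus(s):
    """Independent entries uniformly distributed on the complex torus |z| = 1."""
    return np.exp(2j * np.pi * rng.uniform(size=s))
```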
**Theorem 2.3**.: _Let \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) be a draw of the random sampling matrix from (2.17). Let \\(x\\in\\mathbb{C}^{N}\\) and \\(T\\subset[N]\\) be the index set corresponding to its \\(s\\) largest absolute entries. Assume that the sign vector \\(\\operatorname{sgn}(x)_{T}\\) of \\(x\\) restricted to \\(T\\) forms a Rademacher or a Steinhaus sequence. Suppose we take noisy measurements \\(y=Ax+e\\in\\mathbb{C}^{n^{2}}\\) with \\(\\left\\|e\\right\\|_{2}\\leq\\eta n\\). If_
\\[n^{2}\\geq Cs\\log\\left(\\frac{c_{1}(N-s)}{\\varepsilon}\\right)\\log^{2}\\left(c_{2 }(N-s)^{2}s/\\varepsilon\\right), \\tag{2.18}\\]_then with probability at least \\(1-\\varepsilon\\), the solution \\(\\widehat{x}\\in\\mathbb{C}^{N}\\) to noise-constrained \\(\\ell_{1}\\)-minimization (2.14) satisfies_
\\[\\left\\|\\widehat{x}-x\\right\\|_{2}\\leq C_{1}\\sqrt{s}\\eta+C_{2}\\sigma_{s}(x)_{1}. \\tag{2.19}\\]
_The constants satisfy \\(C\\leq 1024\\), \\(c_{1}\\leq 8\\), \\(c_{2}\\leq 576\\), \\(C_{1}\\leq 4(1+\\sqrt{2})+8\\sqrt{3}\\approx 23.513\\), \\(C_{2}\\leq 4(1+\\sqrt{6})\\approx 13.798\\)._
**Remark 2.4**.: _Whereas the bounds on the constants in Theorem 2.1 are quite large (and certainly improvable), in the case of random sign patterns the number of antennas need only satisfy_
\\[n\\geq 32\\sqrt{s}\\log^{3/2}\\left(cN/\\varepsilon\\right),\\]
_which is a reasonable bound, see also the improvement in Remark 4.6 (b)._
## 3 Stable sparse recovery via \\(\\ell_{1}\\)-minimization
In this section we establish a general result for the recovery of an individual vector \\(x\\in\\mathbb{C}^{N}\\) from noisy measurements \\(y=Ax+e\\in\\mathbb{C}^{m}\\) with \\(A\\in\\mathbb{C}^{m\\times N}\\). It uses a dual vector in the spirit of [17, 32] and extends these results to the noisy and non-sparse case. The proof is inspired by [5] for recovery based on the weak RIP. However, since we actually do not assume the weak RIP, the error bound in (3.5) below is slightly worse by a factor of \\(\\sqrt{s}\\) than the one in [5, Section 4]. In the noiseless and exact sparse case the theorem below implies exact recovery similar to [17, 32].
For a set \\(T\\subset[N]\\) and a matrix \\(A\\in\\mathbb{C}^{m\\times N}\\) with columns \\(a_{j}\\in\\mathbb{C}^{m}\\), \\(j\\in[N]\\), we denote by \\(A_{T}=(a_{j})_{j\\in T}\\in\\mathbb{C}^{m\\times|T|}\\) the column-submatrix of \\(A\\) with columns indexed by \\(T\\) and by \\(T^{c}:=[N]\\setminus T\\) the complement of \\(T\\) in \\([N]\\). Similarly, we denote by \\(x_{T}\\in\\mathbb{C}^{|T|}\\) the vector \\(x\\in\\mathbb{C}^{N}\\) restricted to its entries in \\(T\\). The operator norm of a matrix \\(B\\) on \\(\\ell_{2}\\) is denoted by \\(\\left\\|B\\right\\|_{2\\to 2}\\).
**Theorem 3.1**.: _Let \\(x\\in\\mathbb{C}^{N}\\) and \\(A\\in\\mathbb{C}^{m\\times N}\\) with \\(\\ell_{2}\\)-normalized columns, \\(\\left\\|a_{j}\\right\\|_{2}=1\\), \\(j\\in[N]\\). For \\(s\\geq 1\\), let \\(T\\subset[N]\\) be the set of indices of the \\(s\\) largest absolute entries of \\(x\\). Assume that \\(A_{T}\\) is well-conditioned in the sense that_
\\[\\left\\|A_{T}^{*}A_{T}-\\operatorname{Id}\\right\\|_{2\\to 2}\\leq\\frac{1}{2} \\tag{3.1}\\]
_and that there exists a dual certificate \\(u=A^{*}v\\in\\mathbb{C}^{N}\\) with \\(v\\in\\mathbb{C}^{m}\\) such that_
\\[u_{T} =\\operatorname{sgn}(x)_{T}, \\tag{3.2}\\] \\[\\left\\|u_{T^{c}}\\right\\|_{\\infty} \\leq\\frac{1}{2},\\] (3.3) \\[\\left\\|v\\right\\|_{2} \\leq\\sqrt{2s}. \\tag{3.4}\\]
_Suppose we are given noisy measurements \\(y=Ax+e\\in\\mathbb{C}^{m}\\) with \\(\\left\\|e\\right\\|_{2}\\leq\\eta\\). Then the solution \\(\\widehat{x}\\in\\mathbb{C}^{N}\\) to noise-constrained \\(\\ell_{1}\\)-minimization (2.11) satisfies_
\\[\\left\\|x-\\widehat{x}\\right\\|_{2}\\leq C_{1}\\sqrt{s}\\eta+C_{2}\\sigma_{s}(x)_{1}. \\tag{3.5}\\]
_The constants satisfy \\(C_{1}\\leq 4(1+\\sqrt{2})+8\\sqrt{3}\\approx 23.513\\), \\(C_{2}\\leq 4(1+\\sqrt{6})\\approx 13.798\\)._
**Remark 3.2**.: _The constants appearing in the conditions above are rather arbitrary and chosen for convenience._
Proof.: Write \\(\\widehat{x}=x+h\\). Due to (2.11) and the assumption on the noise level, \\(\\|e\\|_{2}\\leq\\eta\\), we have
\\[\\left\\|Ah\\right\\|_{2}=\\left\\|Ax-y-(A\\widehat{x}-y)\\right\\|\\leq\\|Ax-y\\|_{2}+\\|A \\widehat{x}-y\\|_{2}\\leq 2\\eta. \\tag{3.6}\\]
Since \\(x\\) is feasible for the optimization program (2.11) we obtain
\\[\\left\\|x\\right\\|_{1}\\geq\\left\\|\\widehat{x}\\right\\|_{1} =\\left\\|(x+h)_{T}\\right\\|_{1}+\\left\\|(x+h)_{T^{c}}\\right\\|_{1}\\] \\[\\geq\\operatorname{Re}\\left(\\left\\langle(x+h)_{T},\\operatorname{ sgn}(x)_{T}\\right\\rangle\\right)+\\left\\|h_{T^{c}}\\right\\|_{1}-\\left\\|x_{T^{c}} \\right\\|_{1}\\] \\[=\\left\\|x\\right\\|_{1}+\\operatorname{Re}\\left(\\left\\langle h_{T},\\operatorname{sgn}(x)_{T}\\right\\rangle\\right)+\\left\\|h_{T^{c}}\\right\\|_{1}-2 \\left\\|x_{T^{c}}\\right\\|_{1},\\]
where we applied Hölder's and the triangle inequality in the second line. Rearranging the above yields
\\[\\left\\|h_{T^{c}}\\right\\|_{1}\\leq\\left|\\operatorname{Re}\\left(\\left\\langle h_{ T},\\operatorname{sgn}(x)_{T}\\right\\rangle\\right)\\right|+2\\left\\|x_{T^{c}} \\right\\|_{1}. \\tag{3.7}\\]
Let \\(u=A^{*}v\\) be the dual certificate. Then, using the Cauchy-Schwarz and Holder's inequality
\\[\\left|\\operatorname{Re}\\left(\\left\\langle h_{T},\\operatorname{ sgn}(x)_{T}\\right\\rangle\\right)\\right| =\\left|\\operatorname{Re}\\left(\\left\\langle h_{T},\\left(A^{*}v \\right)_{T}\\right\\rangle\\right)\\right|\\leq\\left|\\left\\langle h,A^{*}v\\right\\rangle \\right|+\\left|\\left\\langle h_{T^{c}},u_{T^{c}}\\right\\rangle\\right|\\] \\[\\leq\\left\\|Ah\\right\\|_{2}\\left\\|v\\right\\|_{2}+\\left\\|h_{T^{c}} \\right\\|_{1}\\left\\|u_{T^{c}}\\right\\|_{\\infty}\\leq 2\\sqrt{2s}\\eta+\\frac{1}{2}\\left\\|h_ {T^{c}}\\right\\|_{1},\\]
where we used (3.3) and (3.4) in the last line. Plugging into (3.7) yields
\\[\\left\\|h_{T^{c}}\\right\\|_{1}\\leq 4\\sqrt{2s}\\,\\eta+4\\left\\|x_{T^{c}}\\right\\|_{1}. \\tag{3.8}\\]
Due to (3.1), we have
\\[\\frac{1}{2}\\left\\|h_{T}\\right\\|_{2}^{2}\\leq\\left\\|A_{T}h_{T}\\right\\|_{2}^{2}= \\left\\langle A_{T}h_{T},Ah\\right\\rangle-\\left\\langle A_{T}h_{T},A_{T^{c}}h_{T^ {c}}\\right\\rangle. \\tag{3.9}\\]
Using Hölder's inequality, the normalization of the columns of \(A\) and (3.6), we obtain
\\[\\left|\\left\\langle A_{T}h_{T},Ah\\right\\rangle\\right|\\leq\\left\\|h_{T}\\right\\|_{ 1}\\left\\|A_{T}^{*}Ah\\right\\|_{\\infty}\\leq 2\\sqrt{s}\\eta\\left\\|h_{T}\\right\\|_{2}.\\]
The triangle inequality and the Cauchy-Schwarz inequality give, by noting that (3.1) implies \(\left\|A_{T}\right\|_{2\to 2}\leq\sqrt{\frac{3}{2}}\),
\\[\\left|\\left\\langle A_{T}h_{T},A_{T^{c}}h_{T^{c}}\\right\\rangle\\right|\\leq\\sum_ {j\\in T^{c}}|h_{j}||\\left\\langle A_{T}h_{T},a_{j}\\right\\rangle|\\leq\\sum_{j\\in T ^{c}}|h_{j}|\\|A_{T}h_{T}\\|_{2}\\|a_{j}\\|_{2}\\leq\\sqrt{\\frac{3}{2}}\\|h_{T}\\|_{2 }\\|h_{T^{c}}\\|_{1}.\\]
Inserting into (3.9) we obtain
\\[\\left\\|h_{T}\\right\\|_{2}\\leq 4\\sqrt{s}\\eta+\\sqrt{6}\\left\\|h_{T^{c}}\\right\\|_{1}. \\tag{3.10}\\]
Combining (3.8) and (3.10) we arrive at
\\[\\left\\|h\\right\\|_{2} \\leq\\left\\|h_{T}\\right\\|_{2}+\\left\\|h_{T^{c}}\\right\\|_{1}\\] \\[\\leq(4(1+\\sqrt{2})+8\\sqrt{3})\\sqrt{s}\\eta+4(1+\\sqrt{6})\\left\\|x_{ T^{c}}\\right\\|_{1}.\\]
Due to the choice of \\(T\\) we have \\(\\left\\|x_{T^{c}}\\right\\|_{1}=\\sigma_{s}(x)_{1}\\). This completes the proof.
## 4 Conditioning of submatrices
Theorem 3.1 requires us to find a dual certificate \(u=A^{*}v\) with \(u_{T}=\operatorname{sgn}(x)_{T}\), where \(A\) is the random scattering matrix introduced in Section 2.1 and \(T\subset[N]\) is some support set. Condition (3.1) in Theorem 3.1 suggests investigating the conditioning of \(A_{T}\). Recall that
\\[v(b_{j},b_{k})=\\left(\\overline{\\Phi_{\\ell}(b_{j})}\\,\\overline{\\Phi_{\\ell}(b_{k })}\\right)_{\\ell\\in[N]}\\in\\mathbb{C}^{N},\\]
where \\(\\{\\Phi_{\\ell}\\}\\) is a bounded orthonormal system with constant \\(K=1\\) such that \\(\\{\\Phi_{\\ell}^{2}\\}\\) is also a bounded orthonormal system. The rows of the random scattering matrix \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) are the vectors \\(v(b_{j},b_{k})^{*}\\in\\mathbb{C}^{1\\times N}\\), \\((j,k)\\in[n]^{2}\\), where the \\(b_{1},\\dots,b_{n}\\) are selected independently at random according to the orthonormalization measure \\(\
u\\), see (2.17) and Definition 2.1. The scattering matrix \\(A\\) in (2.7) is a special case of this setup.
We aim at a probabilistic estimate of the largest and smallest singular value of \\(\\frac{1}{n}A_{T}\\in\\mathbb{C}^{n^{2}\\times s}\\), i.e., the operator norm
\\[\\left\\|\\frac{1}{n^{2}}A_{T}^{*}A_{T}-\\mathrm{Id}\\right\\|_{2\\to 2}=\\left\\| \\frac{1}{n^{2}}\\sum_{j,k=1}^{n}v(b_{j},b_{k})_{T}v(b_{j},b_{k})_{T}^{*}- \\mathrm{Id}\\right\\|_{2\\to 2}. \\tag{4.1}\\]
The central result of this section stated next provides an estimate of the tail of this quantity.
**Theorem 4.1**.: _Let \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) be the random matrix described above and let \\(T\\subset[N]\\) be a (fixed) subset of cardinality \\(|T|=s\\). If, for \\(\\delta,\\varepsilon>0\\),_
\\[n^{2}\\geq 1024\\delta^{-2}s\\log^{2}\\left(\\frac{576s^{3}}{\\varepsilon}\\right) \\tag{4.2}\\]
_then_
\\[\\mathbb{P}\\left(\\left\\|\\frac{1}{n^{2}}A_{T}^{*}A_{T}-\\mathrm{Id}\\right\\|_{2 \\to 2}\\geq\\delta\\right)\\leq\\varepsilon. \\tag{4.3}\\]
The proof will be given after some auxiliary results are presented.
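Before turning to the proof, the deviation in (4.3) can be probed by Monte Carlo. The sketch below uses the Fourier system \(\Phi_{\ell}(t)=e^{2\pi i\ell t}\) on \([0,1]\) from Section 2.3, whose squares again form a BOS as required; the parameters are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, s, trials = 256, 24, 12, 200
T = rng.choice(N, size=s, replace=False)
ells = np.arange(1, N + 1)

devs = []
for _ in range(trials):
    b = rng.uniform(size=n)
    Phi = np.exp(2j * np.pi * np.outer(b, ells))   # Phi[k, l] = Phi_l(b_k)
    PT = Phi[:, T]                                  # restrict to the support T
    # rows of A_T are Phi_l(b_j) * Phi_l(b_k) over all pairs (j, k), l in T
    AT = (PT[:, None, :] * PT[None, :, :]).reshape(n * n, s)
    M = AT.conj().T @ AT / n**2 - np.eye(s)
    devs.append(np.linalg.norm(M, 2))               # spectral norm deviation
print(np.mean(devs), np.quantile(devs, 0.95))
```

Increasing \(n\) at fixed \(s\) drives the observed deviation toward zero, roughly like \(\sqrt{s}/n\) up to logarithmic factors, in line with (4.2).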
### Auxiliary results
The fact that the rows of \\(A\\) are not independent makes the analysis difficult at first sight. In order to increase the amount of independence, we will use a version of the tail decoupling inequality in Theorem 3.4.1 of [12]. For convenience, we provide a short proof, which essentially repeats the one in [11] in our slightly more general setup. In this way, we also obtain better constants than by tracing the ones in the proof of [12, Theorem 3.4.1].
**Theorem 4.2**.: _Let \\(\\left(X_{i}\\right)_{i\\in[n]}\\), \\(n\\geq 2\\), be independent random variables with values in a measurable space \\(\\Omega\\). Let \\(h:\\Omega\\times\\Omega\\to B\\) be a measurable map with values in a separable Banach space \\(B\\) with norm \\(\\left\\|\\cdot\\right\\|\\). Then there exists a subset \\(S\\subset[n]\\) such that_
\\[\\mathbb{P}\\left(\\left\\|\\sum_{i\
eq j}h(X_{i},X_{j})\\right\\|>t \\right)\\leq 36\\,\\mathbb{P}\\left(4\\left\\|\\sum_{i\\in S,j\\in S^{c}}h(X_{i},X_{j}) \\right\\|>t\\right)\\vee\\] \\[36\\,\\mathbb{P}\\left(4\\left\\|\\sum_{i\\in S^{c},j\\in S}h(X_{i},X_{ j})\\right\\|>t\\right), \\tag{4.4}\\]
_where for \\(a,b\\in\\mathbb{R}\\) we denote \\(a\\lor b:=\\max\\left\\{a,b\\right\\}\\)._The proof of Theorem 4.2 employs Corollary 3.3.8 from [12].
**Lemma 4.3**.: _Let \\((B,\\left\\|\\cdot\\right\\|)\\) be a separable Banach space and let \\(Y\\) be a \\(B\\)-valued random vector such that for each \\(\\xi\\in B^{*}\\), the dual space of \\(B\\), the map \\(\\xi(Y)\\) is measurable, centered and square integrable. Then, for every \\(x\\in B\\),_
\\[\\mathbb{P}\\left(\\left\\|x+Y\\right\\|\\geq\\left\\|x\\right\\|\\right)>\\frac{1}{4}\\inf_ {\\xi\\in B^{*}}\\left(\\frac{\\mathbb{E}\\left[\\left|\\xi(Y)\\right|\\right]}{\\left( \\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{2}\\right]\\right)^{1/2}}\\right)^{2}. \\tag{4.5}\\]
_Proof of Theorem 4.2._ Set \\(\\mathcal{D}:=(X_{i})_{i\\in[n]}\\) and let \\(\\epsilon=(\\epsilon_{1},\\ldots,\\epsilon_{n})\\) be a Rademacher sequence independent of \\(\\mathcal{D}\\). We introduce
\\[Z:=\\sum_{i\
eq j}h(X_{i},X_{j})-\\sum_{i\
eq j}\\epsilon_{i}\\epsilon_{j}h(X_{i},X_{j}) \\tag{4.6}\\]
and \\(Y:=-\\sum_{i\
eq j}\\epsilon_{i}\\epsilon_{j}h(X_{i},X_{j})\\). Observe that
\\[\\mathbb{E}\\left[Z\\left|\\mathcal{D}\\right.\\right]=\\sum_{i\
eq j}h(X_{i},X_{j}).\\]
Let \\(\\xi\\) be an element of the dual space \\(B^{*}\\). Conditional on \\(\\mathcal{D}\\), \\(\\xi(Y)\\) is a homogeneous scalar-valued Rademacher chaos of order \\(2\\). By Holder's inequality, we have for an arbitrary random variable \\(V\\) with finite fourth moment that
\\[\\mathbb{E}\\left[\\left|V\\right|^{2}\\right] \\leq\\left(\\mathbb{E}\\left[\\left|V\\right|\\right]\\right)^{1/2}\\left( \\mathbb{E}\\left[\\left|V\\right|^{3}\\right]\\right)^{1/2}\\] \\[\\leq\\left(\\mathbb{E}\\left[\\left|V\\right|\\right]\\right)^{1/2}\\left( \\mathbb{E}\\left[\\left|V\\right|^{2}\\right]\\right)^{1/4}\\left(\\mathbb{E}\\left[ \\left|V\\right|^{4}\\right]\\right)^{1/4}\\]
and therefore
\\[\\frac{\\mathbb{E}\\left[\\left|V\\right|^{2}\\right]}{\\left(\\mathbb{E}\\left[ \\left|V\\right|^{4}\\right]\\right)^{1/2}}\\leq\\frac{\\mathbb{E}\\left[\\left|V \\right|\\right]}{\\left(\\mathbb{E}\\left[\\left|V\\right|^{2}\\right]\\right)^{1/2}}. \\tag{4.7}\\]
Lemma 2.1 from [11] states that
\\[\\left(\\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{4}\\right|\\mathcal{D}\\right]\\right) ^{1/2}\\leq 3\\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{2}\\right|\\mathcal{D}\\right].\\]
Plugging this result into (4.7) gives
\\[\\frac{\\mathbb{E}\\left[\\left|\\xi(Y)\\right|\\left|\\mathcal{D}\\right]}{\\left( \\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{2}\\left|\\mathcal{D}\\right]\\right)^{1/2}} \\geq\\frac{\\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{2}\\left|\\mathcal{D}\\right]}{ \\left(\\mathbb{E}\\left[\\left|\\xi(Y)\\right|^{4}\\left|\\mathcal{D}\\right]\\right)^ {1/2}}\\geq\\frac{1}{3}.\\]
Taking into account (4.6), an application of Lemma 4.3 yields
\\[\\mathbb{P}\\left(\\left\\|Z\\right\\|\\geq\\left\\|\\sum_{i\
eq j}h(X_{i},X_{j}) \\right\\|\\left|\\mathcal{D}\\right)\\geq\\frac{1}{4}\\left(\\frac{1}{3}\\right)^{2}= \\frac{1}{36}. \\tag{4.8}\\]Multiplying both sides of (4.8) by the characteristic function \\(\\chi\\) of the event
\\[\\left\\{\\left\\|\\sum_{i\
eq j}h(X_{i},X_{j})\\right\\|>t\\right\\}\\leq 36\\mathbb{P} \\left(\\|Z\\|>t\\right)=36\\mathbb{E}_{\\epsilon}\\left[\\mathbb{E}\\left[\\chi_{\\{\\|Z\\|>t \\}}|\\epsilon\\right]\\right]. \\tag{4.9}\\]
We conclude by noting that there is a vector \\(\\epsilon^{*}\\in\\{\\pm 1\\}^{n}\\) such that
\\[\\mathbb{E}\\left[\\chi_{\\{\\|Z\\|>t\\}}|\\epsilon^{*}\\right]\\geq\\mathbb{E}_{ \\epsilon}\\left[\\mathbb{E}\\left[\\chi_{\\{\\|Z\\|>t\\}}|(\\epsilon_{1},\\ldots,\\epsilon _{n})\\right]\\right].\\]
The claim now follows by setting \\(S:=\\{i\\in\\{1,\\ldots,n\\}|\\epsilon_{i}^{*}=1\\}\\).
We will moreover need the following complex version of Hoeffding's inequality from [22], equation (9).
**Theorem 4.4**.: _Let \\(\\xi_{1},\\ldots,\\xi_{n}\\) be complex, independent and centered random variables satisfying \\(|\\xi_{k}|\\leq\\alpha_{k}\\) for constants \\(\\alpha_{1},\\ldots,\\alpha_{n}>0\\). Set \\(\\sigma^{2}:=\\sum_{k=1}^{n}\\alpha_{k}^{2}\\). Then_
\\[\\mathbb{P}\\left(\\left|\\sum_{k=1}^{n}\\xi_{k}\\right|>t\\right)\\leq 4\\exp\\left(- \\frac{t^{2}}{4\\sigma^{2}}\\right). \\tag{4.10}\\]
The final tool to prove that submatrices of \\(A\\) are well-conditioned is the noncommutative Bernstein inequality from [31].
**Theorem 4.5**.: _Let \\(X_{1},\\ldots,X_{n}\\in\\mathbb{C}^{s\\times s}\\) be a sequence of independent, mean zero and self-adjoint random matrices. Assume that, for some \\(K>0\\),_
\\[\\left\\|X_{\\ell}\\right\\|_{2\\to 2}\\leq K\\quad\\text{ a.s. for all }\\ \\ell\\in[n] \\tag{4.11}\\]
_and set_
\\[\\sigma^{2}:=\\left\\|\\sum_{\\ell=1}^{n}\\mathbb{E}X_{\\ell}^{2}\\right\\|_{2\\to 2}. \\tag{4.12}\\]
_Then, for \\(t>0\\), it holds that_
\\[\\mathbb{P}\\left(\\left\\|\\sum_{\\ell=1}^{n}X_{\\ell}\\right\\|_{2\\to 2}\\geq t \\right)\\leq 2s\\exp\\left(-\\frac{t^{2}/2}{\\sigma^{2}+Kt/3}\\right). \\tag{4.13}\\]
### Proof of Theorem 4.1
Denote by
\\[D_{j}:=\\operatorname{diag}\\left(\\overline{\\Phi_{\\ell}(b_{j})}:\\ell\\in T \\right)\\in\\mathbb{C}^{s\\times s}\\]
the diagonal matrix with diagonal consisting of the vector \\(\\left(\\overline{\\Phi_{\\ell}(b_{j})}\\right)_{\\ell\\in T}\\in\\mathbb{C}^{s}\\) and introduce \\(g(b_{k}):=\\left(\\overline{\\Phi_{\\ell}(b_{k})}\\right)_{\\ell\\in T}\\in\\mathbb{C} ^{s}\\). Since \\(D_{j}D_{j}^{*}=\\operatorname{Id}\\) we observe that
\\[\\frac{1}{n^{2}}A_{T}^{*}A_{T}-\\operatorname{Id}=\\frac{1}{n^{2}}\\sum_{j,k=1}^{ n}\\left[v(b_{j},b_{k})_{T}v(b_{j},b_{k})_{T}^{*}-\\operatorname{Id}\\right]= \\frac{1}{n^{2}}\\sum_{j=1}^{n}D_{j}\\left(\\sum_{k=1}^{n}\\left[g(b_{k})g(b_{k})^{ *}-\\operatorname{Id}\\right]\\right)D_{j}^{*}. \\tag{4.14}\\]Let \\(\\mathbf{b}^{\\prime}:=(b_{1}^{\\prime},\\ldots,b_{n}^{\\prime})\\) denote an independent copy of \\(\\mathbf{b}:=(b_{1},\\ldots,b_{n})\\). By the triangle inequality, we have
\\[\\mathbb{P}\\left(\\left\\|\\frac{1}{n^{2}}A_{T}^{*}A_{T}-\\mathrm{Id} \\right\\|_{2\\to 2}\\geq\\delta\\right)\\leq\\mathbb{P} \\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\
eq k}\\left[v(b_{j},b_{k})_{T }v(b_{j},b_{k})_{T}^{*}-\\mathrm{Id}\\right]\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{2}\\right)\\] \\[+\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j=1}^{n}\\left[v(b_{j },b_{j})_{T}v(b_{j},b_{j})_{T}^{*}-\\mathrm{Id}\\right]\\right\\|_{2\\to 2}\\geq \\frac{\\delta}{2}\\right)\\]
Using the decoupling inequality of Theorem 4.2, with \\(S\\subset[n]\\) denoting the corresponding set, and the symmetry relation \\(v(b_{j},b_{k})=v(b_{k},b_{j})\\), we obtain for the first term above
\\[\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\
eq k}\\left[v(b_{j},b_{k})_{T}v(b_{j},b_{k})_{T}^{*}-\\mathrm{Id}\\right]\\right\\|_{2\\to 2}\\geq \\frac{\\delta}{2}\\right)\\] \\[\\leq 36\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\\in S,k\\in S^{c} }\\left[v(b_{j},b_{k}^{\\prime})_{T}v(b_{j},b_{k}^{\\prime})_{T}^{*}-\\mathrm{Id} \\right]\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{8}\\right). \\tag{4.15}\\]
We will now estimate the right hand side of (4.15). Introducing
\\[X^{\\prime}:=\\sum_{k\\in S^{c}}\\left[g(b_{k}^{\\prime})g(b_{k}^{\\prime})^{*}- \\mathrm{Id}\\right]\\in\\mathbb{C}^{s\\times s},\\]
we observe that (4.14) together with Fubini's theorem yields
\\[36\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\\in S,k\\in S^{c} }\\left[v(b_{j},b_{k}^{\\prime})_{T}v(b_{j},b_{k}^{\\prime})_{T}^{*}-\\mathrm{Id} \\right]\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{8}\\right)\\] \\[= 36\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\\in S}D_{j}X^{ \\prime}D_{j}^{*}\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{8}\\right)=\\mathbb{E}_{ \\mathbf{b}^{\\prime}}\\left[36\\mathbb{P}_{\\mathbf{b}}\\left(\\frac{1}{n^{2}} \\left\\|\\sum_{j\\in S}D_{j}X^{\\prime}D_{j}^{*}\\right\\|_{2\\to 2}\\geq\\frac{ \\delta}{8}\\right)\\right]. \\tag{4.16}\\]
As the next step we apply the noncommutative Bernstein inequality, Theorem 4.5, to the inner probability in (4.16). Since \\(D_{j}\\) is a unitary matrix and the functions \\(\\Phi_{\\ell}\\) are orthonormal we have
\\[\\left\\|D_{j}X^{\\prime}D_{j}^{*}\\right\\|_{2\\to 2} =\\left\\|X^{\\prime}\\right\\|_{2\\to 2}, \\tag{4.17}\\] \\[\\mathbb{E}\\left[(D_{j}X^{\\prime}D_{j}^{*})^{2}\\right] =\\operatorname{diag}\\left(X^{\\prime 2}\\right),\\]
where \\(\\operatorname{diag}\\left(X^{\\prime 2}\\right)\\) denotes the matrix that coincides with \\(X^{\\prime 2}\\) on the diagonal and is zero otherwise. Set \\(\\mu\\) to be the coherence parameter
\\[\\mu:=\\max_{\\ell,\\bar{\\ell}\\in T:\\ell<\\bar{\\ell}}\\left|\\sum_{k\\in S^{c}}\\Phi_{ \\ell}(b_{k}^{\\prime})\\overline{\\Phi_{\\bar{\\ell}}(b_{k}^{\\prime})}\\right|.\\]
A crucial observation is that \\(\\operatorname{diag}\\left(X^{\\prime 2}\\right)\\preceq(s-1)\\operatorname{diag} \\left(\\mu^{2},\\mu^{2},\\ldots,\\mu^{2}\\right)\\), where \\(\\preceq\\) denotes the semidefinite ordering. Therefore, it holds that
\\[\\left\\|\\sum_{j\\in S}\\mathbb{E}\\left[(D_{j}X^{\\prime}D_{j}^{*})^{2}\\right] \\right\\|_{2\\to 2}\\leq|S|\\,(s-1)\\mu^{2}\\leq n(s-1)\\mu^{2}. \\tag{4.18}\\]Plugging the bounds (4.17) and (4.18) into (4.13) yields
\\[\\mathbb{P}_{\\mathbf{b}}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\\in S}D_{j}X^{\\prime} D_{j}^{*}\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{8}\\right)\\leq 2s\\exp\\left(-\\frac{\\delta^{2}}{ \\frac{128(s-1)}{n^{3}}\\mu^{2}+\\frac{16\\delta}{3n^{2}}\\left\\|X^{\\prime}\\right\\| _{2\\to 2}}\\right). \\tag{4.19}\\]
Set \\(\\tilde{\\varepsilon}=\\varepsilon/36\\). Multiplying the inner probability in (4.16) by the characteristic function of the event \\(E:=E_{1}\\cap E_{2}\\), where
\\[E_{1} :=\\left\\{\\frac{128(s-1)}{n^{3}}\\mu^{2}\\leq\\frac{\\delta^{2}}{2\\log \\left(8s/\\tilde{\\varepsilon}\\right)}\\right\\},\\] \\[E_{2} :=\\left\\{\\frac{16}{3n^{2}}\\left\\|X^{\\prime}\\right\\|_{2\\to 2}\\leq \\frac{\\delta}{2\\log\\left(8s/\\tilde{\\varepsilon}\\right)}\\right\\},\\]
we obtain, with \\(E_{1}^{c}\\) and \\(E_{2}^{c}\\) denoting the complements of \\(E_{1}\\) and \\(E_{2}\\),
\\[36\\,\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j\\in S}D_{j}X^{\\prime}D_{j}^{* }\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{8}\\right)\\leq\\frac{\\varepsilon}{4}+36 \\left(2s\\mathbb{P}\\left(E_{1}^{c}\\right)+2s\\mathbb{P}\\left(E_{2}^{c}\\right) \\right). \\tag{4.20}\\]
Therefore, it remains to estimate the probabilities of the events \\(E_{1}^{c}\\) and \\(E_{2}^{c}\\). For the event \\(E_{1}^{c}\\), the union bound over all \\(s(s-1)/2\\leq s^{2}/2\\) two element subsets of \\(T\\) implies in the case of a general BOS that
\\[36\\left(2s\\mathbb{P}\\left(E_{1}^{c}\\right)\\right) \\leq 72s\\mathbb{P}\\left(\\bigcup_{\\ell,\\tilde{\\ell}\\in T,\\ell< \\tilde{\\ell}}\\left\\{\\frac{128(s-1)}{n^{3}}\\left|\\sum_{k\\in S^{c}}\\Phi_{\\ell}( b_{k}^{\\prime})\\overline{\\Phi_{\\tilde{\\ell}}(b_{k}^{\\prime})}\\right|^{2}\\geq \\frac{\\delta^{2}}{2\\log\\left(8s/\\tilde{\\varepsilon}\\right)}\\right\\}\\right)\\] \\[\\leq 72s\\sum_{\\ell,\\tilde{\\ell}\\in T,\\ell<\\tilde{\\ell}}\\mathbb{P} \\left(\\left|\\sum_{k\\in S^{c}}\\Phi_{\\ell}(b_{k}^{\\prime})\\overline{\\Phi_{\\tilde {\\ell}}(b_{k}^{\\prime})}\\right|\\geq\\frac{\\delta n^{3/2}}{\\sqrt{256s\\log\\left(8 s/\\tilde{\\varepsilon}\\right)}}\\right) \\tag{4.21}\\] \\[\\leq 144s^{3}\\exp\\left(-\\frac{n^{2}\\delta^{2}}{1024s\\log\\left(8s/ \\tilde{\\varepsilon}\\right)}\\right), \\tag{4.22}\\]
where we have applied Hoeffding's inequality in the form of Theorem 4.4 in the last line. The right hand side of (4.22) is less than \\(\\varepsilon/4\\) provided
\\[n^{2}\\geq 1024\\delta^{-2}s\\log^{2}\\left(576s^{3}/\\varepsilon\\right). \\tag{4.23}\\]
As for \\(E_{2}^{c}\\), we are going to apply the noncommutative Bernstein inequality again. Noting that
\\[\\left\\|g(b_{k}^{\\prime})g(b_{k}^{\\prime})^{*}-\\mathrm{Id}\\right\\| _{2\\to 2} =s-1,\\] \\[\\left\\|\\sum_{k\\in S^{c}}\\mathbb{E}\\left[\\left(g(b_{k}^{\\prime})g( b_{k}^{\\prime})^{*}-\\mathrm{Id}\\right)^{2}\\right]\\right\\|_{2\\to 2} =|S^{c}|\\left(s-1\\right)\\leq n(s-1),\\]
we obtain
\\[36\\left(2s\\mathbb{P}\\left(E_{2}^{c}\\right)\\right)\\leq 144s^{2}\\exp\\left(-\\frac{ \\delta^{2}}{\\left(\\frac{32}{3}\\right)^{2}\\frac{s}{n^{3}}\\log^{2}\\left(8s/ \\tilde{\\varepsilon}\\right)+\\frac{32}{9}\\delta\\frac{s}{n^{2}}\\log\\left(8s/ \\tilde{\\varepsilon}\\right)}\\right). \\tag{4.24}\\]Assuming (4.23), the right hand side of (4.24) is less than \\(\\varepsilon/4\\). Since \\(\\left\\{\\Phi_{k}^{2}\\right\\}_{k\\in[N]}\\) is also a BOS with respect to \\((D,\
u)\\), Condition (4.23) implies, after another application of the noncommutative Bernstein inequality analogously to (4.24) and the preceding steps, that
\\[\\mathbb{P}\\left(\\frac{1}{n^{2}}\\left\\|\\sum_{j=1}^{n}\\left[v(b_{j},b_{j})_{T}v( b_{j},b_{j})_{T}^{*}-\\mathrm{Id}\\right]\\right\\|_{2\\to 2}\\geq\\frac{\\delta}{2} \\right)\\leq\\frac{\\varepsilon}{4}. \\tag{4.25}\\]
This concludes the proof.
**Remark 4.6**.:
1. _In order to show (4.25), we used the assumption that \(\left\{\Phi_{k}^{2}\right\}_{k\in[N]}\) is also a BOS with respect to \((D,\nu)\). It might be that (4.25) also holds under weaker assumptions on the BOS; however, it does not hold if we choose for example the Hadamard system._
2. _Assuming the special case of the scattering matrix (_2.7_), the terms in (_4.21_) take the form_ \\[\\Phi_{\\ell}(b_{k})\\overline{\\Phi_{\\tilde{\\ell}}(b_{k})}=\\exp\\left(\\frac{\\pi i }{\\lambda z_{0}}\\left(\\left\\|r_{\\ell}\\right\\|_{2}^{2}-\\left\\|r_{\\tilde{\\ell}} \\right\\|_{2}^{2}\\right)\\right)\\exp\\left(\\frac{2\\pi i}{\\lambda z_{0}}\\langle(r _{\\tilde{\\ell}}-r_{\\ell}),b_{k}\\rangle\\right),\\] _where due to the aperture condition (_2.4_)_ \\[\\tilde{\\theta}_{k}:=\\exp\\left(\\frac{2\\pi i}{\\lambda z_{0}}\\langle(r_{\\tilde{ \\ell}}-r_{\\ell}),b_{k}\\rangle\\right)\\] _is a Steinhaus random variable and_ \\(\\tilde{\\theta}:=(\\tilde{\\theta}_{1},\\ldots,\\tilde{\\theta}_{n})\\) _is a Steinhaus sequence. We can therefore apply Hoeffding's inequality for Steinhaus sequences, see_ _[_26_]__, Corollary_ 6.13_. This inequality states that, for arbitrary_ \\(v\\in\\mathbb{C}^{n}\\) _and_ \\(\\kappa\\in(0,1)\\)_,_ \\[\\mathbb{P}\\left(\\left|\\langle v,\\tilde{\\theta}\\rangle\\right|>t\\right)\\leq\\frac {1}{1-\\kappa}\\exp\\left(-\\kappa\\frac{t^{2}}{\\left\\|v\\right\\|_{2}^{2}}\\right).\\] (4.26) _Applying this result with_ \\(\\kappa=4/5\\) _instead of Theorem_ 4.4 _in (_4.21_), one obtains that the claimed spectral norm estimate (_4.3_) holds under the slightly improved condition_ \\[n^{2}\\geq 320\\delta^{-2}s\\log\\left(\\frac{288s}{\\varepsilon}\\right)\\log\\left( \\frac{720s^{3}}{\\varepsilon}\\right),\\] (4.27) _where we have also taken into consideration the precise form of (_4.22_)._
## 5 Nonuniform Recovery of Scatterers with Random Phase
Proof of Theorem 2.3.: The key idea of the proof is to apply Theorem 3.1. Note first that (2.14) is equivalent to
\[\operatorname*{argmin}_{z\in\mathbb{C}^{N}}\left\|z\right\|_{1}\quad\text{ subject to}\quad\left\|\frac{1}{n}Az-\frac{1}{n}y\right\|_{2}\leq\eta. \tag{5.1}\]
Let \\(T\\subset[N]\\) be the index set corresponding to the \\(s\\) largest absolute entries of \\(x\\) and assume that \\(\\operatorname{sgn}(x)_{T}\\) is either a Rademacher or a Steinhaus sequence. Suppose we are on the event
\\[E:=\\left\\{\\left\\|\\frac{1}{n^{2}}A_{T}^{*}A_{T}-\\mathrm{Id}\\right\\|_{2\\to 2} \\leq\\frac{1}{2}\\right\\}. \\tag{5.2}\\]Theorem 4.1 states that \\(\\mathbb{P}\\left[E^{c}\\right]\\leq\\varepsilon/2\\) if
\\[n^{2}\\geq 4096s\\log^{2}\\left(1152s^{3}/\\varepsilon\\right). \\tag{5.3}\\]
Set \\(\\tilde{A}:=\\frac{1}{n}A\\). The event \\(E\\) means in particular that \\(\\tilde{A}_{T}\\) fulfills condition (3.1). We define the vector \\(v\\in\\mathbb{C}^{n^{2}}\\) in Theorem 3.1 via
\\[v:=\\left(\\tilde{A}^{\\dagger}\\right)^{*}\\operatorname{sgn}(x)_{T}=\\tilde{A}_{T }\\left(\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{-1}\\operatorname{sgn}(x)_{T}, \\tag{5.4}\\]
where \\(\\tilde{A}^{\\dagger}\\) denotes the pseudo-inverse of \\(\\tilde{A}_{T}\\). Setting \\(u:=\\tilde{A}^{*}v\\), we have \\(u_{T}=\\tilde{A}_{T}^{*}v=\\operatorname{sgn}(x)_{T}\\), so that (3.2) is satisfied. Since we are on the event \\(E\\), the smallest singular value of \\(\\tilde{A}_{T}\\) satisfies \\(\\sigma_{\\min}(\\tilde{A}_{T})\\geq 1/\\sqrt{2}\\) and therefore
\\[\\|v\\|_{2}\\leq\\|\\tilde{A}^{\\dagger}\\|_{2\\to 2}\\|\\operatorname{sgn}(x)_{T}\\|_{ 2}\\leq\\sigma_{\\min}(\\tilde{A}_{T})^{-1}\\sqrt{s}\\leq\\sqrt{2s}.\\]
Hence, also (3.4) is satisfied. It remains to check (3.3). To this end, note that
\\[\\left\\|u_{T^{c}}\\right\\|_{\\infty}=\\max_{\\ell\\in T^{c}}\\left|\\left\\langle\\left( \\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{-1}\\tilde{A}_{T}^{*}\\tilde{a}_{\\ell}, \\operatorname{sgn}(x)_{T}\\right\\rangle\\right|=\\max_{\\ell\\in T^{c}}\\left| \\langle\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{\\ell},\\operatorname{sgn}x_{T} \\rangle\\right|.\\]
As in the previous section, we denote \(\mathbf{b}=(b_{1},\ldots,b_{n})\). Since \(\operatorname{sgn}(x)_{T}=:(\theta_{\ell})_{\ell\in T}=:\theta\) is a Rademacher or a Steinhaus sequence, condition (5.3), Fubini's theorem and Hoeffding's inequality for Rademacher and Steinhaus sequences, respectively, together with the union bound give
\\[\\mathbb{P}\\left(\\max_{\\ell\\in T^{c}}\\left|\\langle\\tilde{A}_{T}^{ \\dagger}\\tilde{a}_{\\ell},\\operatorname{sgn}(x)_{T}\\rangle\\right|>\\frac{1}{2} \\right)\\leq\\mathbb{P}\\left(\\left\\{\\max_{\\ell\\in T^{c}}\\left|\\langle\\tilde{A}_ {T}^{\\dagger}\\tilde{a}_{\\ell},\\operatorname{sgn}(x)_{T}\\rangle\\right|>\\frac{1 }{2}\\right\\}\\cap E\\right)+\\frac{\\varepsilon}{2}\\] \\[\\leq \\mathbb{E}_{\\mathbf{b}}\\left[\\chi_{E}\\sum_{\\ell\\in T^{c}}\\mathbb{ P}_{\\theta}\\left(\\left|\\langle\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{\\ell}, \\operatorname{sgn}(x)_{T}\\rangle\\right|>\\frac{1}{2}\\right)\\right]+\\frac{ \\varepsilon}{2}\\] \\[\\leq \\mathbb{E}_{\\mathbf{b}}\\left[\\chi_{E}\\sum_{\\ell\\in T^{c}}2\\exp \\left(-\\frac{1}{8\\left\\|\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{\\ell}\\right\\|_{2}^{2 }}\\right)\\right]+\\frac{\\varepsilon}{2}\\] \\[\\leq 2(N-s)\\mathbb{E}_{\\mathbf{b}}\\left[\\chi_{E}\\exp\\left(-\\frac{1}{ 8\\max_{\\ell\\in T^{c}}\\left\\|\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{\\ell}\\right\\|_{ 2}^{2}}\\right)\\right]+\\frac{\\varepsilon}{2}. \\tag{5.5}\\]
Since we are on the event \\(E\\) from (5.2), it follows as before that \\(\\left\\|\\left(\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{-1}\\right\\|_{2\\to 2}\\leq \\frac{1}{\\sigma_{\\min}\\left(\\tilde{A}_{T}\\right)^{2}}\\leq 2\\) and therefore
\\[\\max_{\\ell\\in T^{c}}\\left\\|\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{\\ell}\\right\\|_{2} ^{2}\\leq 4\\max_{\\ell\\in T^{c}}\\left\\|\\tilde{A}_{T}^{*}\\tilde{a}_{\\ell} \\right\\|_{2}^{2}\\leq 4s\\max_{\\ell\\in T^{c},\\ell\\in T}\\left|\\langle\\tilde{a}_{\\ell}, \\tilde{a}_{\\ell}\\rangle\\right|^{2}.\\]
Set
\\[\\mu:=\\max_{\\ell\\in T^{c},\\tilde{\\ell}\\in T}\\left|\\sum_{k=1}^{n}\\Phi_{\\ell}(b_ {k})\\overline{\\Phi_{\\tilde{\\ell}}(b_{k})}\\right|.\\]
Since

\[\left|\langle\tilde{a}_{\ell},\tilde{a}_{\tilde{\ell}}\rangle\right|=\frac{1}{n^{2}}\left|\sum_{k=1}^{n}\Phi_{\ell}(b_{k})\overline{\Phi_{\tilde{\ell}}(b_{k})}\right|^{2},\]

we have

\[\max_{\ell\in T^{c}}\left\|\tilde{A}_{T}^{\dagger}\tilde{a}_{\ell}\right\|_{2}^{2}\leq 4\frac{s}{n^{4}}\mu^{4}.\]
We then obtain
\\[2(N-s)\\mathbb{E}_{\\mathbf{b}}\\left[\\chi_{E}\\exp\\left(-\\frac{1}{8 \\max_{\\ell\\in T^{c}}\\left\\|\\mathring{A}_{T}^{\\dagger}\\tilde{a}_{\\ell}\\right\\|_{2 }^{2}}\\right)\\right]\\] \\[\\leq 2(N-s)\\mathbb{E}_{\\mathbf{b}}\\left[\\chi_{E}\\exp\\left(-\\frac{ 1}{32\\frac{s}{n^{4}}\\mu^{4}}\\right)\\right]\\] \\[\\leq\\frac{\\varepsilon}{4}+2(N-s)\\mathbb{P}_{\\mathbf{b}}\\left( \\frac{s^{1/4}}{n}\\mu>\\frac{1}{\\left(32\\log\\left(8(N-s)/\\varepsilon\\right) \\right)^{1/4}}\\right).\\]
Applying the union bound and Hoeffding's inequality as in (4.22) gives
\\[2(N-s)\\mathbb{P}_{\\mathbf{b}}\\left(\\frac{s^{1/4}}{n}\\mu>\\frac{1} {\\left(32\\log\\left(8(N-s)/\\varepsilon\\right)\\right)^{1/4}}\\right)\\] \\[\\leq 8(N-s)^{2}s\\exp\\left(-\\frac{n}{16\\sqrt{2s\\log\\left(8(N-s)/ \\varepsilon\\right)}}\\right). \\tag{5.6}\\]
The condition
\\[n\\geq 32\\sqrt{s}\\log^{1/2}\\left(8(N-s)/\\varepsilon\\right)\\log\\left(576(N-s)^{2 }s/\\varepsilon\\right) \\tag{5.7}\\]
implies that the right hand side of equation (5.6) is less than \\(\\varepsilon/4\\). Assuming \\(s\\leq N/3\\) and \\(8(N-s)/\\varepsilon\\geq e^{4}\\), (5.7) implies (5.3) and therefore also \\(\\mathbb{P}\\left(E^{c}\\right)\\leq\\varepsilon/2\\), where \\(E\\) is the event from (5.2). We have thus verified that under condition (5.7), all conditions of Theorem 3.1 are satisfied with probability at least \\(1-\\varepsilon\\). Since we work with the rescaled version (5.1) of \\(A\\), the solution \\(\\hat{x}\\) satisfies (2.19) with the required probability. This finishes the proof of Theorem 2.3.
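The construction (5.4) at the heart of this proof is easy to test numerically: draw the matrix, form \(v\) by least squares, and check conditions (3.2)-(3.4) directly. The sketch below does so for the Fourier BOS via (2.17); it illustrates the proof strategy and is not part of the argument, and all parameters are our choices.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n, s = 256, 28, 10
ells = np.arange(1, N + 1)

b = rng.uniform(size=n)
Phi = np.exp(2j * np.pi * np.outer(b, ells))
A = (Phi[:, None, :] * Phi[None, :, :]).reshape(n * n, N) / n   # A tilde = A/n
T = rng.choice(N, size=s, replace=False)
Tc = np.setdiff1d(np.arange(N), T)
theta = np.exp(2j * np.pi * rng.uniform(size=s))                # Steinhaus signs

AT = A[:, T]
v = AT @ np.linalg.solve(AT.conj().T @ AT, theta)               # formula (5.4)
u = A.conj().T @ v                                               # dual certificate

print(np.abs(u[T] - theta).max())          # (3.2): zero up to round-off
print(np.abs(u[Tc]).max())                 # (3.3): typically below 1/2
print(np.linalg.norm(v), np.sqrt(2 * s))   # (3.4): ||v||_2 against sqrt(2s)
```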
**Remark 5.1**.: _In the special case of the scattering matrix (2.7), we can apply the same technique as in Remark 4.6 (b) to obtain a slight improvement of (5.7). In fact, assuming also the mild condition \\(8(N-s)/\\varepsilon\\geq e^{7}\\), all conditions of Theorem 3.1 are satisfied with probability at least \\(1-\\varepsilon\\) under the improved condition_
\\[n\\geq 5\\sqrt{2s}\\log^{1/2}\\left(8(N-s)/\\varepsilon\\right)\\log\\left(576s(N-s)^{ 2}/\\varepsilon\\right).\\]
## 6 Nonuniform Recovery of Scatterers with Deterministic Phase
### Set partitions
To prove the central result of this section, we will require a few facts on certain partitions of the set \\([N]\\), \\(N\\in\\mathbb{N}\\). As in [25, Section 2.2] we define \\(\\mathcal{P}\\left(N,k\\right)\\) as the set of all partitions of \\([N]\\) into exactly \\(k\\) blocks such that each block contains at least two elements. Note that then necessarily \\(k\\leq N/2\\). The numbers \\(S_{2}(N,k):=|\\mathcal{P}(N,k)|\\) are called associated Stirling numbers of the second kind. In [25, Section 3.5] it was shown that
\\[S_{2}(N,k)\\leq\\left(\\frac{3N}{2}\\right)^{N-k}. \\tag{6.1}\\]For our purposes, we will also need partitions of \\([N]\\) in which not necessarily all blocks contain at least two elements.
**Definition 6.1**.: _For \\(N\\geq 1\\), \\(t\\leq k\\leq N\\), we define \\(\\mathcal{P}\\left(N,k,k-t\\right)\\) as the set of all partitions of \\([N]\\) into \\(k\\) blocks such that \\(k-t\\) of these blocks contain at least two elements. Moreover, we define \\(\\mathcal{P}_{ex}\\left(N,k,k-t\\right)\\) as the set of all partitions of \\([N]\\) into \\(k\\) blocks such that exactly \\(k-t\\) blocks contain at least two elements and exactly \\(t\\) blocks contain exactly one element._
The above definition of \\(\\mathcal{P}\\left(N,k,k-t\\right)\\) implies that necessarily \\(2(k-t)\\leq N-t\\) and therefore
\\[k\\leq\\frac{N+t}{2}. \\tag{6.2}\\]
Our next goal is a convenient estimate of the numbers \\(S\\left(N,k,k-t\\right):=\\left|\\mathcal{P}\\left(N,k,k-t\\right)\\right|\\). We first observe that
\\[S\\left(N,k,k-t\\right)=\\sum_{r=0}^{t}\\left|\\mathcal{P}_{ex}\\left(N,k,k-r\\right) \\right|.\\]
Moreover, we have
\\[\\left|\\mathcal{P}_{ex}\\left(N,k,k-r\\right)\\right|=\\binom{N}{r}S_{2}(N-r,k-r) \\leq\\binom{N}{r}\\left(\\frac{3N}{2}\\right)^{N-k}, \\tag{6.3}\\]
where the last inequality follows from the estimate (6.1). Since \\(t\\leq N\\) and therefore \\(\\sum_{r=0}^{t}\\binom{N}{r}\\leq 2^{N}\\), this yields
\\[S\\left(N,k,k-t\\right)\\leq(3N)^{N}\\left(\\frac{3N}{2}\\right)^{-k}. \\tag{6.4}\\]
This estimate will become crucial in the next section.
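For small parameters, the bound (6.1) can be verified directly via the classical recurrence for the associated Stirling numbers of the second kind, \(S_{2}(n,k)=kS_{2}(n-1,k)+(n-1)S_{2}(n-2,k-1)\); the code below is our sketch.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    """Partitions of [n] into k blocks, each block of size at least two."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0 or 2 * k > n:
        return 0
    return k * S2(n - 1, k) + (n - 1) * S2(n - 2, k - 1)

# sanity: S2(5, 2) counts the C(5, 2) = 10 partitions into a pair and a triple
assert S2(5, 2) == 10

# check the bound (6.1) for small N and all admissible k
for Npart in range(2, 15):
    for k in range(1, Npart // 2 + 1):
        assert S2(Npart, k) <= (1.5 * Npart) ** (Npart - k)
```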
### Construction of a dual certificate
We will use combinatorial estimates inspired by the analysis in [4, 27, 25, 8] in order to construct a dual certificate. Hereby, we exploit the estimates on set partitions stated above. In this way, we will extend the recovery result of Section 2 to a vector \\(x\\in\\mathbb{C}^{N}\\) with deterministic phase pattern \\(\\mathrm{sgn}(x)_{T}\\) - recall that \\(T\\) is the set of indices corresponding to the \\(s\\) largest absolute entries of \\(x\\). Since the phases are now deterministic we can no longer use the additional concentration of measure coming from the independent randomness in the signs. In particular, we have to estimate the probability of the event
\\[\\left\\{\\max_{\\ell\\in T^{c}}\\left|\\langle\\tilde{A}_{T}^{\\dagger}\\tilde{a}_{ \\ell},\\mathrm{sgn}(x)_{T}\\rangle\\right|>\\frac{1}{2}\\right\\}\\]
using only the randomness in \\(\\tilde{A}\\). Throughout this subsection, we will assume that the sampling matrix \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) is given by (2.7). However, we note that exactly the same proof applies if we take the Fourier system \\(\\{\\Phi_{k}\\}\\) from [25] instead and construct the random matrix as in (2.17).
Let us state the central result of this section.
**Theorem 6.1**.: _Let \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) be the random sampling matrix from (2.7) and let \\(x\\in\\mathbb{C}^{N}\\). Let \\(T\\subset[N]\\) with \\(|T|=s\\) be the index set of the \\(s\\) largest absolute entries of \\(x\\). Set \\(\\tilde{A}:=\\frac{1}{n}A\\). If_
\\[n^{2}\\geq Cs\\log^{2}\\left(cN/\\varepsilon\\right), \\tag{6.5}\\]
_then with probability at least \\(1-\\varepsilon\\)_
1. _there is a_ \\(v\\in\\mathbb{C}^{n^{2}}\\) _such that_ \\(u:=\\tilde{A}^{*}v\\) _and_ \\(v\\) _satisfy Conditions (_3.2_),(_3.3_) and (_3.4_) of Theorem_ 3.1_;_
2. _for the matrix_ \\(\\tilde{A}\\)_, it holds that_ \\[\\left\\|\\tilde{A}^{*}_{T}\\tilde{A}_{T}-\\mathrm{Id}\\right\\|_{2\\to 2}\\leq \\frac{1}{e}.\\] (6.6)
_The constants satisfy \\(C\\leq\\left(800e^{3/4}\\right)^{2}\\), \\(c\\leq 6\\)._
Proof.: Suppose we are on the event
\\[E:=\\left\\{\\left\\|\\tilde{A}^{*}_{T}\\tilde{A}_{T}-\\mathrm{Id}\\right\\|_{2\\to 2 }\\leq\\frac{1}{e}\\right\\},\\]
where the constant \\(1/e\\) in the probability is chosen to ease computations later on. Theorem 4.1 implies that \\(\\mathbb{P}\\left[E^{c}\\right]\\leq\\varepsilon/4\\) if Condition (6.5) holds. Our aim is an estimate for the probability of the event
\\[\\widetilde{E}:=\\left\\{\\left\\|\\tilde{A}^{*}_{T^{c}}\\tilde{A}_{T}\\left(\\tilde{A }^{*}_{T}\\tilde{A}_{T}\\right)^{-1}\\mathrm{sgn}(x)_{T}\\right\\|_{\\infty}>\\frac{ 1}{2}\\right\\}. \\tag{6.7}\\]
By expanding the Neumann series, we observe that, for \\(m\\in\\mathbb{N}\\),
\\[\\left(\\mathrm{Id}-\\left(\\mathrm{Id}-\\tilde{A}^{*}_{T}\\tilde{A}_{T}\\right)^{m} \\right)^{-1}=\\mathrm{Id}+\\sum_{r=1}^{\\infty}\\left(\\mathrm{Id}-\\tilde{A}^{*}_{ T}\\tilde{A}_{T}\\right)^{rm}.\\]
With
\\[A_{m}:=\\sum_{r=1}^{\\infty}\\left(\\mathrm{Id}-\\tilde{A}^{*}_{T}\\tilde{A}_{T} \\right)^{rm}\\]
we obtain
\\[\\left(\\tilde{A}^{*}_{T}\\tilde{A}_{T}\\right)^{-1} =\\left(\\mathrm{Id}-\\left(\\mathrm{Id}-\\tilde{A}^{*}_{T}\\tilde{A} _{T}\\right)\\right)^{-1}=\\left(\\mathrm{Id}-\\left(\\mathrm{Id}-\\tilde{A}^{*}_{T} \\tilde{A}_{T}\\right)^{m}\\right)^{-1}\\sum_{k=0}^{m-1}\\left(\\mathrm{Id}-\\tilde{ A}^{*}_{T}\\tilde{A}_{T}\\right)^{k}\\] \\[=\\left(\\mathrm{Id}+A_{m}\\right)\\sum_{k=0}^{m-1}\\left(\\mathrm{Id} -\\tilde{A}^{*}_{T}\\tilde{A}_{T}\\right)^{k}.\\]
An application to \\(\\mathrm{sgn}(x)_{T}\\) yields
\\[\\tilde{A}^{*}_{T^{c}}\\tilde{A}_{T}\\left(\\tilde{A}^{*}_{T}\\tilde{A }_{T}\\right)^{-1}\\mathrm{sgn}(x)_{T} =\\tilde{A}^{*}_{T^{c}}\\tilde{A}_{T}\\sum_{k=0}^{m-1}\\left(\\mathrm{ Id}-\\tilde{A}^{*}_{T}\\tilde{A}_{T}\\right)^{k}\\mathrm{sgn}(x)_{T}\\] \\[+\\tilde{A}^{*}_{T^{c}}\\tilde{A}_{T}A_{m}\\sum_{k=0}^{m-1}\\left( \\mathrm{Id}-\\tilde{A}^{*}_{T}\\tilde{A}_{T}\\right)^{k}\\mathrm{sgn}(x)_{T}.\\]An application of the pigeon hole principle yields
\\[\\mathbb{P}\\left(\\widetilde{E}\\right) \\leq\\mathbb{P}\\left(\\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}\\sum_{ k=0}^{m-1}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k} \\operatorname{sgn}(x)_{T}\\right\\|_{\\infty}>\\frac{1}{4}\\right) \\tag{6.8}\\] \\[+\\mathbb{P}\\left(\\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}A_{m} \\sum_{k=0}^{m-1}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right) ^{k}\\operatorname{sgn}(x)_{T}\\right\\|_{\\infty}>\\frac{1}{4}\\right). \\tag{6.9}\\]
We now choose
\\[m:=\\left\\lceil 2\\log\\left(6N/\\varepsilon\\right)\\right\\rceil. \\tag{6.10}\\]
For the treatment of the event
\\[\\overline{E}:=\\left\\{\\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}A_{m}\\sum_{k=0} ^{m-1}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k} \\operatorname{sgn}(x)_{T}\\right\\|_{\\infty}>\\frac{1}{4}\\right\\}, \\tag{6.11}\\]
in (6.9) we denote by \\(a_{\\ell}\\) the columns of the unnormalized sampling matrix \\(A\\) and set
\\[\\mu^{2}:=\\max_{\\ell\\in T^{c},\\ell\\in T}\\left|\\left\\langle a_{\\ell},a_{\\tilde{ \\ell}}\\right\\rangle\\right|.\\]
For a matrix \\(B\\in\\mathbb{C}^{m\\times k}\\), we denote by
\\[\\left\\|B\\right\\|_{\\infty\\to\\infty}:=\\sup_{\\left\\|x\\right\\|_{\\infty}=1}\\left\\| Bx\\right\\|_{\\infty}=\\max_{\\ell\\in[m]}\\sum_{n\\in[k]}\\left|b_{\\ell n}\\right|\\]
the operator norm of \\(B\\) on \\(\\ell_{\\infty}\\). We then obtain
\\[\\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}\\right\\|_{\\infty\\to\\infty}\\leq\\frac{ s}{n^{2}}\\mu^{2}.\\]
Moreover, for an arbitrary matrix \\(B\\in\\mathbb{C}^{s\\times s}\\), it follows from the definition of \\(\\left\\|\\cdot\\right\\|_{\\infty\\to\\infty}\\) that \\(\\left\\|B\\right\\|_{\\infty\\to\\infty}\\leq\\sqrt{s}\\left\\|B\\right\\|_{2\\to 2}\\). Conditionally on the event \\(E\\), this inequality gives
\\[\\left\\|A_{m}\\right\\|_{\\infty\\to\\infty}\\leq\\sqrt{s}\\left\\|A_{m}\\right\\|_{2\\to 2 }\\leq\\sqrt{s}\\sum_{r=1}^{\\infty}\\left\\|\\left(\\operatorname{Id}-\\tilde{A}_{T}^ {*}\\tilde{A}_{T}\\right)\\right\\|_{2\\to 2}^{rm}\\leq\\sqrt{s}\\sum_{r=1}^{\\infty} \\left(\\frac{1}{e^{m}}\\right)^{r}=\\sqrt{s}\\frac{1}{e^{m}-1}.\\]
Similarly, we obtain
\\[\\left\\|\\sum_{k=0}^{m-1}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T} \\right)^{k}\\right\\|_{\\infty\\to\\infty}\\leq\\sqrt{s}\\frac{e}{e-1}.\\]
Combining these estimates, we obtain, conditionally on the event \\(E\\),
\\[\\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}A_{m}\\sum_{k=0}^{m-1} \\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k} \\operatorname{sgn}(x)_{T}\\right\\|_{\\infty\\to\\infty}\\] \\[\\leq \\left\\|\\tilde{A}_{T^{c}}^{*}\\tilde{A}_{T}\\right\\|_{\\infty\\to \\infty}\\left\\|A_{m}\\right\\|_{\\infty\\to\\infty}\\left\\|\\sum_{k=0}^{m-1}\\left( \\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k}\\right\\|_{\\infty \\to\\infty}\\] \\[\\leq \\frac{s^{2}}{n^{2}}\\frac{e}{(e-1)}\\frac{1}{e^{m}-1}\\mu^{2}\\leq 4 \\frac{s^{2}}{e^{m}}\\frac{1}{n^{2}}\\mu^{2}\\leq\\frac{\\varepsilon^{2}}{9n^{2}} \\mu^{2},\\]where we have applied (6.10) and the fact that \\(s\\leq N\\) in the last line. Hence, the probability of the event \\(\\overline{E}\\) in (6.11) can be bounded by
\\[\\mathbb{P}\\left(\\overline{E}\\right) =\\mathbb{P}\\left(\\overline{E}\\cap E\\right)+\\mathbb{P}\\left( \\overline{E}\\cap E^{c}\\right)\\leq\\mathbb{P}\\left(\\frac{\\varepsilon^{2}}{9n^{2} }\\mu^{2}>\\frac{1}{4}\\right)+\\frac{\\varepsilon}{4}\\] \\[\\leq 4s(N-s)\\exp\\left(-\\frac{9n}{8\\varepsilon^{2}}\\right)+\\frac{ \\varepsilon}{4}\\leq\\frac{\\varepsilon}{2},\\]
where we have applied Hoeffding's inequality (Theorem 4.4) and the union bound together with (6.5) in the last line. It remains to estimate the term in (6.8). To this end, we define, for \\(\\ell\\in T^{c}\\),
\\[E_{\\ell}:=\\left\\{\\left|\\sum_{k=0}^{m-1}\\tilde{a}_{\\ell}^{*}\\tilde{A}_{T} \\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k}\\operatorname {sgn}(x)_{T}\\right|>\\frac{1}{4}\\right\\}. \\tag{6.12}\\]
Let \\(\\left\\{\\beta_{k}\\right\\}_{k=0,\\ldots,m-1}\\subset(0,1)\\) be such that \\(\\sum_{k=0}^{m-1}\\beta_{k}\\leq 1/4\\) and let \\(M_{k}\\in\\mathbb{N}\\) be parameters to be chosen below. According to the pigeonhole principle, we have
\\[\\mathbb{P}\\left(E_{\\ell}\\right) \\leq\\sum_{k=0}^{m-1}\\mathbb{P}\\left(\\left|\\tilde{a}_{\\ell}^{*} \\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k }\\operatorname{sgn}(x)_{T}\\right|\\geq\\beta_{k}\\right)\\] \\[=\\sum_{k=0}^{m-1}\\mathbb{P}\\left(\\left|\\tilde{a}_{\\ell}^{*} \\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k }\\operatorname{sgn}(x)_{T}\\right|^{2M_{k}}\\geq\\beta_{k}^{2M_{k}}\\right)\\] \\[\\leq\\sum_{k=0}^{m-1}\\beta_{k}^{-2M_{k}}\\mathbb{E}\\left[\\left| \\tilde{a}_{\\ell}^{*}\\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*} \\tilde{A}_{T}\\right)^{k}\\operatorname{sgn}(x)_{T}\\right|^{2M_{k}}\\right],\\]
where we have applied Markov's inequality in the last step. With \\(r(\\cdot)\\) denoting the function that rounds to the closest integer, we introduce
\\[M_{k}:=r\\left(\\frac{m}{k+1}\\right)\\text{ for }k=0,\\ldots,m-1,\\quad q_{k}:=2M_{k} (k+1).\\]
Then \\(2m/3\\leq M_{k}(k+1)\\leq 4m/3\\) and therefore \\(4m/3\\leq q_{k}\\leq 8m/3\\) and also \\(m/M_{k}\\geq 3(k+1)/4\\). For some \\(\\beta\\in(0,1)\\), we further set
\\[\\beta_{k}:=\\beta^{\\frac{m}{M_{k}}}.\\]
Then with \\(\\beta=1/(5^{4/3})\\), we have \\(\\sum_{k=0}^{m-1}\\beta_{k}\\leq 1/4\\): indeed, since \\(m/M_{k}\\geq 3(k+1)/4\\) and \\(\\beta<1\\), each \\(\\beta_{k}=\\beta^{m/M_{k}}\\leq\\left(\\beta^{3/4}\\right)^{k+1}=5^{-(k+1)}\\), and \\(\\sum_{k=0}^{\\infty}5^{-(k+1)}=1/4\\). Hence we have found valid choices for the \\(\\beta_{k}\\). The rest of the proof is a straightforward consequence of the following statement.
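A quick numerical sanity check of this choice (a minimal numpy sketch, not part of the proof):

```python
import numpy as np

def beta_sum(m):
    """Sum of beta_k = beta^(m/M_k) with M_k = r(m/(k+1)) and beta = 1/5^(4/3)."""
    beta = 5.0 ** (-4.0 / 3.0)
    k = np.arange(m)
    M_k = np.rint(m / (k + 1))        # r(.) rounds to the closest integer
    return np.sum(beta ** (m / M_k))

# Since m/M_k >= 3(k+1)/4, each beta_k <= 5^{-(k+1)}, so the sum stays below 1/4.
for m in (2, 5, 10, 50, 200):
    assert beta_sum(m) <= 0.25
```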
**Lemma 6.2**.: _Let \\(k,M\\in\\mathbb{N}\\) be given and set \\(q=2M(k+1)\\). If_
\\[n\\geq 3q\\sqrt{s}, \\tag{6.13}\\]
_then_
\\[\\mathbb{E}\\left[\\left|\\tilde{a}_{\\ell}^{*}\\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k}\\operatorname{sgn}(x)_{T}\\right|^{2M}\\right]\\leq 6q\\left(\\frac{6q\\sqrt{s}}{n}\\right)^{q}. \\tag{6.14}\\]

Before we prove Lemma 6.2, let us first see how one can deduce Theorem 6.1 from it. Condition (6.5) implies
\\[n\\geq 800e^{3/4}\\sqrt{s}\\log\\left(\\frac{6N}{\\varepsilon}\\right),\\]
which, according to the choice \\(m=\\lceil 2\\log\\left(6N/\\varepsilon\\right)\\rceil\\) of \\(m\\) and the definition of \\(q\\) implies (6.13). Then (6.14) yields the series of inequalities
\\[\\sum_{k=0}^{m-1}\\beta_{k}^{-2M_{k}}\\mathbb{E}\\left[\\left|\\tilde{a }_{\\ell}^{*}\\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{ T}\\right)^{k}\\operatorname{sgn}(x)_{T}\\right|^{2M_{k}}\\right]\\leq\\beta^{-2m}\\sum_{k=0}^ {m-1}6q_{k}\\left(\\frac{6q_{k}\\sqrt{s}}{n}\\right)^{q_{k}}\\] \\[\\leq \\beta^{-2m}\\sum_{k=0}^{m-1}16m\\left(\\frac{16m\\sqrt{s}}{n}\\right)^ {\\frac{4}{3}m}\\leq 16m^{2}\\left(\\frac{16\\beta^{-3/2}m\\sqrt{s}}{n}\\right)^{ \\frac{4}{3}m}.\\]
With \\(E_{\\ell}\\) denoting the events from (6.12), we further obtain, using (6.5) once more,
\\[\\sum_{\\ell\\notin T}\\mathbb{P}\\left[E_{\\ell}\\right]\\leq 16(N-s)m^{2}\\left(\\frac{16\\beta^{-3/2}m\\sqrt{s}}{n}\\right)^{\\frac{4}{3}m}\\leq 16(N-s)m^{2}e^{-m}\\leq\\frac{\\varepsilon}{2}.\\]
This finishes the proof of Theorem 6.1.
It remains to prove Lemma 6.2.
_Proof of Lemma 6.2_. So far, we have not used that the bounded orthonormal system underlying the random scattering matrix has the specific structure defined in (2.7). In what follows, we will use the letter \\(\\ell\\in\\mathbb{Z}^{2}\\), possibly indexed further, to denote the rescaled positions (without the distance coordinate) on the resolution grid where the targets can be. We furthermore identify \\([N]\\) with \\([\\sqrt{N}]^{2}\\) in the canonical way, thereby recovering the square grid of resolution cells (recall that we set \\(N:=\\lfloor 2L/d_{0}\\rfloor^{2}\\), where \\(L>0\\) is the size of the target domain and \\(d_{0}>0\\) denotes the meshsize of the resolution grid, so that \\(\\sqrt{N}\\) is actually the number of resolution cells along one axis of the square array). We fix \\(\\ell\\in T^{c}\\) and set \\(\\ell_{0}^{(h)}:=\\ell\\) for \\(h=1,\\ldots,2M\\). A lengthy but straightforward calculation gives with \\(\\omega:=2\\pi d_{0}/(\\lambda z_{0})\\)
\\[\\left|\\tilde{a}_{\\ell}^{*}\\tilde{A}_{T}\\left(\\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k}\\operatorname{sgn}(x)_{T}\\right|^{2M}=\\frac{1}{n^{4M(k+1)}}\\sum_{\\begin{subarray}{c}j_{h}^{(p)},\\,m_{h}^{(p)}\\in[n]\\\\ h\\in[k+1],\\,p\\in[2M]\\end{subarray}}\\;\\sum_{\\begin{subarray}{c}\\ell_{h}^{(p)}\\in T\\\\ h\\in[k+1],\\,p\\in[2M]\\end{subarray}}\\prod_{t=1}^{M}\\operatorname{sgn}\\left(x_{\\ell_{k+1}^{(2t-1)}}\\right)\\overline{\\operatorname{sgn}\\left(x_{\\ell_{k+1}^{(2t)}}\\right)}\\] \\[\\times\\exp\\left(i\\frac{\\omega}{2}\\sum_{p=1}^{2M}(-1)^{p}\\left\\|\\ell_{k+1}^{(p)}\\right\\|_{2}^{2}\\right)\\,\\exp\\left(i\\omega\\sum_{p=1}^{2M}(-1)^{p}\\sum_{h=1}^{k+1}\\left\\langle\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right),b_{j_{h}^{(p)}}\\right\\rangle\\right)\\] \\[\\times\\exp\\left(i\\omega\\sum_{p=1}^{2M}(-1)^{p}\\sum_{h=1}^{k+1}\\left\\langle\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right),b_{m_{h}^{(p)}}\\right\\rangle\\right). \\tag{6.15}\\]

In order to evaluate the above term, we will use combinatorial arguments inspired by [4, 25]. To a given word \\(\\left(j_{h}^{(p)}\\right)_{h=1,\\ldots,k+1}^{p=1,\\ldots,2M}\\) we associate the partition \\(\\mathcal{Q}\\) of \\([k+1]\\times[2M]\\) with the property that \\((h,p)\\) and \\((h^{\\prime},p^{\\prime})\\) are in the same block if and only if \\(j_{h}^{(p)}=j_{h^{\\prime}}^{(p^{\\prime})}\\). Analogously, we associate the partition \\(\\mathcal{R}\\) to the word \\(\\left(m_{h}^{(p)}\\right)_{h=1,\\ldots,k+1}^{p=1,\\ldots,2M}\\). To each \\(Q\\in\\mathcal{Q}\\) resp. \\(R\\in\\mathcal{R}\\) there exists exactly one \\(j_{Q}\\in\\{1,\\ldots,n\\}\\) resp. \\(m_{R}\\in\\{1,\\ldots,n\\}\\) such that \\(j_{h}^{(p)}=j_{Q}\\) for all \\((h,p)\\in Q\\) resp. \\(m_{h}^{(p)}=m_{R}\\) for all \\((h,p)\\in R\\). We define
\\[\\mathcal{Q}\\cap\\mathcal{R} :=\\left\\{(Q,R)\\in\\mathcal{Q}\\times\\mathcal{R}:j_{Q}=m_{R}\\right\\},\\] \\[\\mathcal{Q}^{\\cap} :=\\left\\{Q\\in\\mathcal{Q}:\\text{there exists }R=R(Q)\\in\\mathcal{R} \\text{ such that }m_{R(Q)}=j_{Q}\\right\\},\\] \\[\\mathcal{R}^{\\cap} :=\\left\\{R\\in\\mathcal{R}:\\text{there exists }Q=Q(R)\\in \\mathcal{Q}\\text{ such that }j_{Q(R)}=m_{R}\\right\\}.\\]
With this notation, we can write
\\[\\mathbb{E}\\left[\\exp\\left(i\\omega\\sum_{p=1}^{2M}(-1)^{p}\\sum_{h=1 }^{k+1}\\left\\langle\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right),b_{j_{h}^{(p) }}\\right\\rangle\\right)\\right.\\] \\[\\qquad\\times\\exp\\left(i\\omega\\sum_{p=1}^{2M}(-1)^{p}\\sum_{h=1}^{k +1}\\left\\langle\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right),b_{m_{h}^{(p)}} \\right\\rangle\\right)\\right]\\] \\[= \\mathbb{E}\\left[\\prod_{Q\\in\\mathcal{Q}\\setminus\\mathcal{Q}^{ \\cap}}\\exp\\left(i\\omega\\left\\langle\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^ {(p)}-\\ell_{h}^{(p)}\\right),b_{j_{Q}}\\right\\rangle\\right)\\right]\\] \\[\\times \\mathbb{E}\\left[\\prod_{R\\in\\mathcal{R}\\setminus\\mathcal{R}^{ \\cap}}\\exp\\left(i\\omega\\left\\langle\\sum_{(h,p)\\in R}(-1)^{p}\\left(\\ell_{h-1}^ {(p)}-\\ell_{h}^{(p)}\\right),b_{m_{R}}\\right\\rangle\\right)\\right]\\] \\[\\times \\mathbb{E}\\left[\\prod_{Q\\in\\mathcal{Q}^{\\cap}}\\exp\\left(i\\omega \\left\\langle\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)} \\right),b_{j_{Q}}\\right\\rangle\\right.\\right.\\] \\[\\qquad\\qquad\\qquad\\left.\\left.+i\\omega\\left\\langle\\sum_{(h,p)\\in R (Q)}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right),b_{m_{R(Q)}}\\right\\rangle \\right)\\right].\\]
Observe that
\\[\\mathbb{E}\\left[\\prod_{Q\\in\\mathcal{Q}\\setminus\\mathcal{Q}^{ \\cap}}\\exp\\left(i\\omega\\left\\langle\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^ {(p)}-\\ell_{h}^{(p)}\\right),b_{j_{Q}}\\right\\rangle\\right)\\right]\\] \\[=\\prod_{Q\\in\\mathcal{Q}\\setminus\\mathcal{Q}^{\\cap}}\\delta\\left( \\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right),\\]
where \\(\\delta\\) is the Kronecker delta, that is, \\(\\delta(0)=1\\) and \\(\\delta(x)=0\\) for \\(x\\neq 0\\). Since \\(\\ell_{h}^{(p)}\\neq\\ell_{h-1}^{(p)}\\), this implies that each \\(Q\\in\\mathcal{Q}\\setminus\\mathcal{Q}^{\\cap}\\) must contain at least two elements in order to provide a nonzero contribution to the overall expectation of the expression in (6.15). The same is true for each \\(R\\in\\mathcal{R}\\setminus\\mathcal{R}^{\\cap}\\). However, the blocks \\(Q\\in\\mathcal{Q}^{\\cap}\\) may contain just one element, since they have a corresponding block \\(R(Q)\\) with matching index. Therefore, we can break the evaluation of the right hand side of (6.15) down to three basic questions.
1. What are the numbers \\(t_{1}\\) resp. \\(t_{2}\\) of the distinct indices appearing in the words \\(w_{1}:=\\left(j_{h}^{(p)}\\right)_{h=1,\\ldots,k+1}^{p=1,\\ldots,2M}\\) resp. \\(w_{2}:=\\left(m_{h}^{(p)}\\right)_{h=1,\\ldots,k+1}^{p=1,\\ldots,2M}\\)?
2. What is the number \\(t\\) of indices that the words \\(w_{1}\\) and \\(w_{2}\\) have in common?
3. Given 1. and 2., which constraints must be fulfilled by the partitions \\(\\mathcal{Q}\\) and \\(\\mathcal{R}\\) corresponding to \\(w_{1}\\) and \\(w_{2}\\)?
In the following, we identify partitions of \\([k+1]\\times[2M]\\) with partitions of \\([2M(k+1)]\\) in the canonical way. Moreover, if we have a partition \\(\\mathcal{Q}=\\{Q_{1},\\ldots,Q_{t},Q_{t+1},\\ldots,Q_{t_{1}}\\}\\), we enumerate it without loss of generality such that \\(Q_{t+1},\\ldots,Q_{t_{1}}\\) are the blocks containing at least two elements and \\(Q_{1},\\ldots,Q_{t}\\) are the blocks which might contain just one element. The same is done for the partition \\(\\mathcal{R}=\\{R_{1},\\ldots,R_{t},R_{t+1},\\ldots,R_{t_{2}}\\}\\). We define
\\[\\mathcal{E}:=\\mathbb{E}\\left[\\left|\\tilde{a}_{\\ell}^{*}\\tilde{A}_{T}\\left( \\operatorname{Id}-\\tilde{A}_{T}^{*}\\tilde{A}_{T}\\right)^{k}\\operatorname{sgn }(x)_{T}\\right|^{2M}\\right].\\]
Using the triangle inequality and \\(n>2M(k+1)\\) implied by (6.13) together with the definitions from Subsection 6.1 we obtain
\\[\\mathcal{E}\\leq\\frac{1}{n^{4M(k+1)}}\\sum_{t=0}^{2M(k+1)}\\sum_{t_{1}=t}^{M(k+1)+\\lfloor t/2\\rfloor}\\sum_{t_{2}=t}^{M(k+1)+\\lfloor t/2\\rfloor}\\sum_{\\begin{subarray}{c}j_{1},\\ldots,j_{t_{1}}\\text{ pw different}\\\\ m_{1},\\ldots,m_{t_{2}}\\text{ pw different}\\\\ \\left|\\left\\{j_{1},\\ldots,j_{t_{1}}\\right\\}\\cap\\left\\{m_{1},\\ldots,m_{t_{2}}\\right\\}\\right|=t\\end{subarray}}\\sum_{\\mathcal{Q}\\in\\mathcal{P}(2M(k+1),t_{1},t_{1}-t)}\\;\\sum_{\\mathcal{R}\\in\\mathcal{P}(2M(k+1),t_{2},t_{2}-t)}\\;\\sum_{\\begin{subarray}{c}\\ell_{h}^{(p)}\\in T,\\;\\ell_{h}^{(p)}\\neq\\ell_{h-1}^{(p)}\\\\ h\\in[k+1],\\,p\\in[2M]\\end{subarray}}\\] \\[\\prod_{Q\\in\\left\\{Q_{t+1},\\ldots,Q_{t_{1}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right) \\tag{6.16}\\] \\[\\times\\prod_{R\\in\\left\\{R_{t+1},\\ldots,R_{t_{2}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in R}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right) \\tag{6.17}\\] \\[\\times\\prod_{j=1}^{t}\\delta\\left(\\sum_{(h,p)\\in Q_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)+\\sum_{(h,p)\\in R_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right). \\tag{6.18}\\]
For the product \\(\\prod_{Q\\in\\left\\{Q_{t+1},\\ldots,Q_{t_{1}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in Q }(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\) to be nonzero, we must have \\(\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)=0\\) for all \\(Q\\in\\left\\{Q_{t+1},\\ldots,Q_{t_{1}}\\right\\},\\) and analogously for the other two products appearing in (6.17), (6.18). Therefore, the expressions (6.16)-(6.18) give at least \\(t_{1}\\lor t_{2}:=\\max\\{t_{1},t_{2}\\}\\) linearly independent constraints. Recalling that \\(q=2M(k+1)\\), this observation yields
\\[\\sum_{\\begin{subarray}{c}\\ell_{h}^{(p)}\\in T,\\;\\ell_{h}^{(p)}\\neq\\ell_{h-1}^{(p)}\\\\ h\\in[k+1],\\,p\\in[2M]\\end{subarray}}\\prod_{Q\\in\\left\\{Q_{t+1},\\ldots,Q_{t_{1}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\] \\[\\times\\prod_{R\\in\\left\\{R_{t+1},\\ldots,R_{t_{2}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in R}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\] \\[\\times\\prod_{j=1}^{t}\\delta\\left(\\sum_{(h,p)\\in Q_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)+\\sum_{(h,p)\\in R_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\leq s^{q-t_{1}\\lor t_{2}}.\\]
Using (6.4), we arrive at
\\[\\sum_{\\begin{subarray}{c}j_{1},\\ldots,j_{t_{1}}\\text{ pw different}\\\\ m_{1},\\ldots,m_{t_{2}}\\text{ pw different}\\\\ |\\{j_{1},\\ldots,j_{t_{1}}\\}\\cap\\{m_{1},\\ldots,m_{t_{2}}\\}|=t\\end{subarray}}\\sum_{\\mathcal{Q}\\in\\mathcal{P}(2M(k+1),t_{1},t_{1}-t)}\\;\\sum_{\\mathcal{R}\\in\\mathcal{P}(2M(k+1),t_{2},t_{2}-t)}\\;\\sum_{\\begin{subarray}{c}\\ell_{h}^{(p)}\\in T,\\;\\ell_{h}^{(p)}\\neq\\ell_{h-1}^{(p)}\\\\ h\\in[k+1],\\,p\\in[2M]\\end{subarray}}\\] \\[\\prod_{Q\\in\\left\\{Q_{t+1},\\ldots,Q_{t_{1}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in Q}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\times\\prod_{R\\in\\left\\{R_{t+1},\\ldots,R_{t_{2}}\\right\\}}\\delta\\left(\\sum_{(h,p)\\in R}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\] \\[\\times\\prod_{j=1}^{t}\\delta\\left(\\sum_{(h,p)\\in Q_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)+\\sum_{(h,p)\\in R_{j}}(-1)^{p}\\left(\\ell_{h-1}^{(p)}-\\ell_{h}^{(p)}\\right)\\right)\\] \\[\\leq\\sum_{\\begin{subarray}{c}j_{1},\\ldots,j_{t_{1}}\\text{ pw different}\\\\ m_{1},\\ldots,m_{t_{2}}\\text{ pw different}\\\\ |\\{j_{1},\\ldots,j_{t_{1}}\\}\\cap\\{m_{1},\\ldots,m_{t_{2}}\\}|=t\\end{subarray}}(9q^{2})^{q}\\left(\\frac{3q}{2}\\right)^{-t_{1}-t_{2}}s^{q-t_{1}\\lor t_{2}}\\leq n^{t_{1}}t_{1}^{t}n^{t_{2}-t}(9q^{2})^{q}\\left(\\frac{3q}{2}\\right)^{-t_{1}-t_{2}}s^{q-t_{1}\\lor t_{2}}\\leq(9q^{2}s)^{q}\\left(\\frac{q}{n}\\right)^{t}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s^{-t_{1}\\lor t_{2}},\\]

where we have applied \\(t_{1}\\leq q\\) in the last step. Putting these pieces together, we obtain
\\[\\mathcal{E} \\leq\\left(\\frac{9q^{2}s}{n^{2}}\\right)^{q}\\sum_{t=0}^{q}\\left(\\frac {q}{n}\\right)^{t}\\sum_{t_{1}=t}^{q/2+\\lfloor t/2\\rfloor}\\sum_{t_{2}=t}^{q/2+ \\lfloor t/2\\rfloor}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s^{-t_{1} \\lor t_{2}}\\] \\[=\\left(\\frac{9q^{2}s}{n^{2}}\\right)^{q}\\sum_{t=0}^{q}\\left(\\frac{ q}{n}\\right)^{t}\\sum_{t_{1}=t}^{q/2+\\lfloor t/2\\rfloor}\\left(\\sum_{t_{2}=t}^{t_{1} -1}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s^{-t_{1}}+\\sum_{t_{2}=t_{ 1}}^{q/2+\\lfloor t/2\\rfloor}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s ^{-t_{2}}\\right). \\tag{6.19}\\]
Let us evaluate the inner sums in (6.19). Since \\(n\\geq(3/2)q\\) by (6.13) we have
\\[\\sum_{t_{2}=t_{1}}^{q/2+\\lfloor t/2\\rfloor}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s^{-t_{2}}=\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}}\\sum_{t_{2}=t_{1}}^{q/2+\\lfloor t/2\\rfloor}\\left(\\frac{n}{\\frac{3}{2}qs}\\right)^{t_{2}}\\] \\[=\\left(\\frac{n^{2}}{\\left(\\frac{3}{2}q\\right)^{2}s}\\right)^{t_{1}}\\sum_{t_{2}=0}^{q/2+\\lfloor t/2\\rfloor-t_{1}}\\left(\\frac{n}{\\frac{3}{2}qs}\\right)^{t_{2}}\\leq 2\\left(\\frac{n^{2}}{\\left(\\frac{3}{2}q\\right)^{2}s}\\right)^{q/2+t/2}.\\]
Similarly, using once more (6.13) in the form \\(n\\geq(3/2)q\\sqrt{s}\\), we obtain
\\[\\sum_{t_{2}=t}^{t_{1}-1}\\left(\\frac{n}{\\frac{3}{2}q}\\right)^{t_{1}+t_{2}}s^{- t_{1}}\\leq\\left(\\frac{n^{2}}{\\left(\\frac{3}{2}q\\right)^{2}s}\\right)^{t_{1}}\\]
and
\\[\\sum_{t_{1}=t}^{q/2+\\lfloor t/2\\rfloor}\\left(\\frac{n^{2}}{\\left(\\frac{3}{2}q \\right)^{2}s}\\right)^{t_{1}}\\leq 2\\left(\\frac{n^{2}}{\\left(\\frac{3}{2}q \\right)^{2}s}\\right)^{q/2+t/2}.\\]
Plugging everything into (6.19) finishes the proof of the lemma.
### Proof of Theorem 2.1
According to Theorem 6.1, all conditions of Theorem 3.1 are satisfied with probability at least \\(1-\\varepsilon\\) provided
\\[n^{2}\\geq Cs\\log^{2}\\left(cN/\\varepsilon\\right),\\]
where \\(C,c>0\\) are numerical constants satisfying the bounds claimed in Theorem 2.1. This concludes the proof.
## 7 Numerical simulations
### Chambolle and Pock's iterative primal dual algorithm
For the numerical simulations, we use Chambolle and Pock's primal dual algorithm [9] to compute the solution of (2.9) and (2.11). The algorithm is suited for a general convex optimization problem of the form
\\[\\min_{x\\in\\mathbb{C}^{N}}F(Ax)+G(x) \\tag{7.1}\\]
with \\(A\\in\\mathbb{C}^{m\\times N}\\), \\(F:\\mathbb{C}^{m}\\to(-\\infty,\\infty]\\) and \\(G:\\mathbb{C}^{N}\\to(-\\infty,\\infty]\\) lower semi-continuous convex functions. The dual problem to (7.1) is given by
\\[\\max_{\\xi\\in\\mathbb{C}^{m}}-F^{*}(\\xi)-G^{*}(-A^{*}\\xi), \\tag{7.2}\\]

where \\(F^{*}\\), \\(G^{*}\\) denote the convex conjugates of \\(F,G\\). Here, we recall that the convex conjugate function \\(F^{*}:\\mathbb{C}^{m}\\to(-\\infty,\\infty]\\) is defined as
\\[F^{*}(y):=\\sup_{x\\in\\mathbb{C}^{m}}\\left\\{\\operatorname{Re}\\left(\\langle x,y \\rangle\\right)-F(x)\\right\\}.\\]
In the cases of interest to us, strong duality holds, meaning that the optimal values of (7.1) and (7.2) coincide. For describing Chambolle and Pock's algorithm, we require the proximal mappings of \\(F\\) and \\(G\\) defined as
\\[P_{G}(\\tau;z):=\\operatorname*{argmin}_{x\\in\\mathbb{C}^{N}}\\left\\{\\tau G(x)+ \\frac{1}{2}\\left\\|x-z\\right\\|_{2}^{2}\\right\\},\\]
and analogously for \\(F\\). The iterative primal dual algorithm then reads as follows. We select parameters \\(\\theta\\in[0,1]\\), \\(\\tau,\\sigma>0\\) such that \\(\\tau\\sigma\\|A\\|_{2\\to 2}^{2}<1\\) and initial vectors \\(x^{0}\\in\\mathbb{C}^{N},\\xi^{0}\\in\\mathbb{C}^{m}\\), \\(\\bar{x}^{0}=x^{0}\\). Then one iteratively computes
\\[\\xi^{n+1} :=P_{F^{*}}(\\sigma;\\xi^{n}+\\sigma A\\bar{x}^{n})\\;,\\] \\[x^{n+1} :=P_{G}(\\tau;x^{n}-\\tau A^{*}\\xi^{n+1})\\;,\\] \\[\\bar{x}^{n+1} :=x^{n+1}+\\theta(x^{n+1}-x^{n})\\;.\\]
In [9], it is shown that for the parameter choice \\(\\theta=1\\) the algorithm converges in the sense that \\(x^{n}\\) converges to the minimizer of the primal problem (7.1) and \\(\\xi^{n}\\) converges to the solution of the dual problem (7.2) as \\(n\\) tends to \\(\\infty\\). Moreover, [9] also gives an estimate of the convergence rate for a partial primal dual gap.
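For concreteness, a minimal Python/numpy sketch of this generic iteration is given below; the function name is ours, the proximal mappings are passed in as callables, and the step sizes are assumed to satisfy \\(\\tau\\sigma\\|A\\|_{2\\to 2}^{2}<1\\).

```python
import numpy as np

def chambolle_pock(A, prox_F_star, prox_G, sigma, tau, theta=1.0, n_iter=500):
    """Primal-dual iteration for min_x F(Ax) + G(x) (a sketch).

    prox_F_star(sigma, xi) and prox_G(tau, z) evaluate the proximal
    mappings of F^* and G, respectively.
    """
    m, N = A.shape
    x = np.zeros(N, dtype=complex)
    xi = np.zeros(m, dtype=complex)
    x_bar = x.copy()
    for _ in range(n_iter):
        xi = prox_F_star(sigma, xi + sigma * (A @ x_bar))     # dual step
        x_new = prox_G(tau, x - tau * (A.conj().T @ xi))      # primal step
        x_bar = x_new + theta * (x_new - x)                   # extrapolation
        x = x_new
    return x
```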
### The algorithm for \\(\\ell_{1}\\)-minimization
Let us now specialize to the case of \\(\\ell_{1}\\)-minimization. We remark that, to the best of our knowledge, Chambolle and Pock's algorithm has not been specialized to equality-constrained and noise-constrained \\(\\ell_{1}\\)-minimization before, so we provide the first numerical tests of the algorithm in this setup.
Let us first consider the problem
\\[\\min_{x\\in\\mathbb{C}^{N}}\\left\\|x\\right\\|_{1}\\text{ subject to }Ax=y.\\]
This is a special case of (7.1) with \\(G(x)=\\left\\|x\\right\\|_{1}\\) and \\(F(z)=0\\) if \\(z=y\\) and \\(\\infty\\) otherwise. Straightforward computations show that for all \\(\\xi\\in\\mathbb{C}^{m}\\), \\(\\zeta\\in\\mathbb{C}^{N}\\),
\\[F^{*}(\\xi) =\\operatorname{Re}(\\langle\\xi,y\\rangle),\\qquad G^{*}(\\zeta)=\\chi_{B_{\\|\\cdot\\|_{\\infty}}}(\\zeta)=\\left\\{\\begin{array}{ll}0&\\text{ if }\\|\\zeta\\|_{\\infty}\\leq 1,\\\\ \\infty&\\text{ otherwise },\\end{array}\\right.\\] \\[P_{F}(\\sigma;\\xi) =y,\\qquad\\qquad P_{F^{*}}(\\sigma;\\xi)=\\xi-\\sigma y.\\]
The proximal mapping of \\(G(x)=\\left\\|x\\right\\|_{1}\\) can be evaluated coordinatewise, so that it is enough to compute the proximal of the modulus function \\(|\\cdot|\\) on \\(\\mathbb{C}\\). The latter is given by the well-known soft-thresholding operator \\(S_{\\tau}\\) defined as
\\[S_{\\tau}(z):=P_{|\\cdot|}(\\tau,z)=\\operatorname*{argmin}_{x\\in\\mathbb{C}}\\left\\{\\frac{1}{2}|x-z|^{2}+\\tau|x|\\right\\}=\\left\\{\\begin{array}{ll}\\operatorname{sgn}(z)(|z|-\\tau)&\\text{ if }|z|\\geq\\tau\\;,\\\\ 0&\\text{ otherwise },\\end{array}\\right.\\]

so that
\\[P_{G}(\\tau,z)_{\\ell}=S_{\\tau}(z_{\\ell}),\\quad\\ell\\in[N]\\;. \\tag{7.3}\\]
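In code, the complex soft-thresholding operator, and hence the proximal mapping (7.3), can be implemented entrywise; a minimal sketch:

```python
import numpy as np

def soft_threshold(tau, z):
    """Entrywise complex soft-thresholding S_tau, i.e. the prox of tau*||.||_1."""
    mag = np.abs(z)
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-300)  # avoids 0/0 at z = 0
    return scale * z
```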
With these computations at hand, the algorithm for noise-free \\(\\ell_{1}\\)-minimization is given by the iterations
\\[\\xi^{n+1} =\\xi^{n}+\\sigma(A\\bar{x}^{n}-y)\\;,\\] \\[x^{n+1} =\\mathcal{S}_{\\tau}(x^{n}-\\tau A^{*}\\xi^{n+1})\\;,\\] \\[\\bar{x}^{n+1} =x^{n+1}+\\theta(x^{n+1}-x^{n})\\;.\\]
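Putting the pieces together yields the following sketch of the equality-constrained solver (using soft_threshold from above; the renormalization of \\(A\\) and the concrete parameter values we use are described in the numerical results below):

```python
import numpy as np

def basis_pursuit_cp(A, y, sigma=1.0, tau=0.5, theta=1.0, n_iter=300):
    """Noise-free l1-minimization via the iterations above (a sketch)."""
    m, N = A.shape
    x = np.zeros(N, dtype=complex)
    xi = np.zeros(m, dtype=complex)
    x_bar = x.copy()
    for _ in range(n_iter):
        xi = xi + sigma * (A @ x_bar - y)
        x_new = soft_threshold(tau, x - tau * (A.conj().T @ xi))
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```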
In the noisy case, we aim at solving
\\[\\min_{x\\in\\mathbb{C}^{N}}\\left\\|x\\right\\|_{1}\\ \\text{subject to}\\ \\left\\|Ax-y\\right\\|_{2}\\leq\\eta.\\]
In this setup, \\(G(x)=\\left\\|x\\right\\|_{1}\\) and
\\[F(z)=\\chi_{B(y,\\eta)}(z)=\\left\\{\\begin{array}{ll}0&\\text{if }\\|z-y\\|_{2}\\leq \\eta\\;,\\\\ \\infty&\\text{otherwise}\\;.\\end{array}\\right.\\]
Carrying out analogous computations as in the noise-free case, we find that the corresponding algorithm for the noisy case consists in iteratively computing
\\[\\xi^{n+1} =\\left\\{\\begin{array}{ll}0&\\text{if }\\|\\sigma^{-1}\\xi^{n}+A\\bar{x}^{n}-y\\|_{2}\\leq\\eta\\;,\\\\ \\left(1-\\frac{\\eta\\sigma}{\\|\\xi^{n}+\\sigma(A\\bar{x}^{n}-y)\\|_{2}}\\right)(\\xi^{n}+\\sigma(A\\bar{x}^{n}-y))&\\text{otherwise}\\;,\\end{array}\\right.\\] \\[x^{n+1} =\\mathcal{S}_{\\tau}(x^{n}-\\tau A^{*}\\xi^{n+1})\\;,\\] \\[\\bar{x}^{n+1} =x^{n+1}+\\theta(x^{n+1}-x^{n})\\;.\\]
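Only the dual step changes compared to the noise-free variant; a sketch of this update (the proximal mapping of \\(F^{*}\\) for \\(F=\\chi_{B(y,\\eta)}\\)):

```python
import numpy as np

def dual_update_noisy(xi, A_xbar, y, sigma, eta):
    """Dual step for the constraint ||Ax - y||_2 <= eta (a sketch)."""
    w = xi + sigma * (A_xbar - y)
    nw = np.linalg.norm(w)
    if nw <= eta * sigma:   # equivalent to ||sigma^{-1} xi + A x_bar - y||_2 <= eta
        return np.zeros_like(xi)
    return (1.0 - eta * sigma / nw) * w
```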
### Numerical results
We apply the above algorithm for \\(\\ell_{1}\\)-minimization to the sensing matrices given by (2.7). We choose the wavelength \\(\\lambda=0.03\\,m\\), the resolution \\(d_{0}=10\\,m\\), the distance \\(z_{0}=10000\\,m\\) and the size of the aperture \\(B=30\\,m\\). Note that in this scenario, we have \\(d_{0}B/(\\lambda z_{0})=1\\). To speed up the algorithm, we exploit the fact that the matrix \\(A\\in\\mathbb{C}^{n^{2}\\times N}\\) from (2.7) can be factorized into a product of diagonal matrices and a nonequispaced Fourier matrix. In fact, assuming a square resolution grid, we can write the grid parameter as double index \\((\\ell,\\tilde{\\ell})\\) with \\(\\ell,\\tilde{\\ell}\\in[N_{1}]\\) where \\(N_{1}^{2}=N\\). For \\(j,k\\in[n]\\) and \\(a_{j}=(\\xi_{j},\\eta_{j})\\), \\(a_{k}=(\\xi_{k},\\eta_{k})\\) we then have
\\[(Az)_{jk} =\\exp\\left(\\frac{\\pi i}{\\lambda z_{0}}\\left(\\left\\|(\\xi_{j},\\eta _{j})\\right\\|_{2}^{2}+\\left\\|(\\xi_{k},\\eta_{k})\\right\\|_{2}^{2}\\right)\\right)\\] \\[\\sum_{\\ell,\\tilde{\\ell}\\in[N_{1}]}\\exp\\left(-2\\pi i\\left\\langle( \\ell,\\tilde{\\ell}),\\left(\\frac{\\xi_{j}+\\xi_{k}}{B},\\frac{\\eta_{j}+\\eta_{k}}{B} \\right)\\right\\rangle\\right)\\exp\\left(\\frac{2\\pi id_{0}}{\\lambda z_{0}}\\left( \\ell^{2}+\\tilde{\\ell}^{2}\\right)\\right)\\tilde{z}_{(\\ell,\\tilde{\\ell})}.\\]
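For reference, a naive \\(\\mathcal{O}(n^{2}N)\\) evaluation of this factorization reads as follows (a sketch: the function and variable names are ours, we identify \\(\\tilde{z}\\) with the input vector arranged on the \\(N_{1}\\times N_{1}\\) grid, and we index the grid by \\(0,\\ldots,N_{1}-1\\), which only changes a constant phase). In the actual experiments the inner double sum is replaced by a 2D nonequispaced FFT, as discussed next.

```python
import numpy as np

def apply_A_direct(z, antennas, lam, z0, d0, B, N1):
    """Direct evaluation of (Az)_{jk}; antennas is an (n, 2) array of a_j = (xi_j, eta_j)."""
    n = antennas.shape[0]
    l, lt = np.meshgrid(np.arange(N1), np.arange(N1), indexing="ij")
    chirp = np.exp(2j * np.pi * d0 / (lam * z0) * (l ** 2 + lt ** 2))   # diagonal factor
    zg = z.reshape(N1, N1) * chirp
    sq = np.sum(antennas ** 2, axis=1)
    out = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            f = (antennas[j] + antennas[k]) / B       # nonequispaced frequency node
            pref = np.exp(1j * np.pi / (lam * z0) * (sq[j] + sq[k]))
            out[j, k] = pref * np.sum(np.exp(-2j * np.pi * (l * f[0] + lt * f[1])) * zg)
    return out
```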
Since the nonequispaced Fourier transform can be implemented at computational costs that are only slightly larger than those of the Fast Fourier Transform, it gives rise to fast approximate matrix-vector multiplication algorithms, see [24] and the references therein. We use an implementation of S. Kunis, which can be found in the Matlab toolbox associated to the paper [20]. The algorithm is run with the renormalized matrix \\(\\tilde{A}=\\frac{1}{\\sqrt{N}}A\\) and the parameter choices \\(\\theta=1\\), \\(\\sigma=1\\) and \\(\\tau=0.5\\). For fixed sparsity \\(s\\), we generate a random vector in the following way: we choose the support set uniformly at random, then we sample a Steinhaus vector on this support and multiply its nonzero entries independently by a dynamic range coefficient uniformly distributed on \\([1,10]\\). With a fixed number of resolution cells, we vary the number \\(n\\) of antennas and compute empirical recovery rates by choosing the \\(n\\) antenna positions uniformly at random from the domain \\([-B/2,B/2]^{2}\\), keeping the vector to be recovered fixed throughout.
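A sketch of the random scene and antenna generation just described (the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_target(N, s):
    """s-sparse scene: Steinhaus phases times dynamic range coefficients in [1, 10]."""
    x = np.zeros(N, dtype=complex)
    support = rng.choice(N, size=s, replace=False)     # support uniformly at random
    phases = np.exp(2j * np.pi * rng.random(s))        # Steinhaus entries
    x[support] = phases * rng.uniform(1.0, 10.0, size=s)
    return x

def random_antennas(n, B=30.0):
    """n antenna positions drawn uniformly from [-B/2, B/2]^2."""
    return rng.uniform(-B / 2.0, B / 2.0, size=(n, 2))
```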
With the resulting noise-free measurement vector \\(y\\) we compute the \\(\\ell_{1}\\)-minimizer with Chambolle and Pock's algorithm (which takes about 300 iterations), and we record whether the original vector is recovered (up to numerical errors of at most \\(10^{-3}\\) measured in the \\(\\ell_{2}\\)-norm). Repeating this test 100 times for each choice of parameters \\((s,n,N)\\) provides an empirical estimate of the success probability. In Figure 3, we display the result of noiseless recovery for fixed sparsity \\(s=100\\) and for \\(N=6400\\) respectively \\(N=16900\\) resolution cells. The transition from the unsuccessful regime to the successful regime occurs at about 28 antennas, corresponding to 784 measurements, for \\(N=6400\\), so in practice, the algorithm works even better than predicted by our theoretical results. In the situation with more resolution cells, the transition occurs at a slightly increased number of antennas. The illustration in Figure 3 was produced with the version of the algorithm for equality constrained \\(\\ell_{1}\\)-minimization.

Figure 3: Empirical recovery rates for fixed sparsity \\(s=100\\) and varying number \\(n\\) of antennas: (a) \\(N=6400\\) resolution cells (b) \\(N=16900\\) resolution cells
To test the robustness of our recovery scheme with respect to noise, we compute receiver operating characteristic curves for various parameter choices, see [28, Chapter 6] and [23, Chapter II.D], using the noise-constrained version of Chambolle and Pock's algorithm. We start by simulating a target vector \\(x\\in\\mathbb{C}^{6400}\\) with \\(\\left\\|x\\right\\|_{0}=100\\), that is, we simulate 100 targets in 6400 resolution cells. We do this as described above, that is, we select the support uniformly at random, then simulate random phases on the support and multiply them independently by a dynamic range coefficient uniformly distributed on \\([1,10]\\). We then leave the vector \\(x\\) fixed, draw a realization of our random scattering matrix \\(A\\) and run noise-constrained basis pursuit with the noisy measurements \\(y=Ax+e\\), where \\(e\\) is a complex Gaussian noise vector. The entries of the recovered solution \\(\\hat{x}\\) are then compared to a threshold \\(\\tau>0\\). If \\(|\\hat{x}_{k}|<\\tau\\), then the entry is set to zero, otherwise it remains unchanged. We then count how many of the actual targets in \\(x\\) are detected. The detection probability is the number of detections divided by the true number of targets, in our case 100. Moreover, we count the number of false alarms, that is, the number of positions \\(k\\in[6400]\\) where \\(\\hat{x}_{k}\\neq 0\\) but \\(x_{k}=0\\). The false alarm probability is the number of false alarms divided by the number of scatterers. For fixed \\(x\\) and \\(\\tau\\), we repeat this 100 times and compute the empirical probability of detection \\(P_{d}\\) and the probability of false alarm \\(P_{f}\\). This is then repeated for varying values of the threshold \\(\\tau\\), resulting in a plot of \\(P_{d}\\) versus \\(P_{f}\\), which is called the receiver operating characteristic curve.
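The empirical rates for a single recovery can be computed as in the following sketch (with the same normalization by the number of targets as described above):

```python
import numpy as np

def detection_and_false_alarm(x_true, x_hat, tau):
    """Empirical P_d and P_f for one thresholded reconstruction (a sketch)."""
    detected = np.abs(x_hat) >= tau
    targets = x_true != 0
    n_targets = np.count_nonzero(targets)
    p_d = np.count_nonzero(detected & targets) / n_targets
    p_f = np.count_nonzero(detected & ~targets) / n_targets
    return p_d, p_f
```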
In Figure 4, the results of the simulation are depicted. We see that if we choose the number of antennas at the critical value 28 observed in Figure 3, then we get a significant number of missed targets and false alarms. If we however slightly increase the number of antennas, we get almost perfect detection and virtually no false alarms if we choose the threshold correctly, in our case as \\(\\tau\\approx 0.5\\). So our recovery scheme is in fact very robust with respect to noise in the sense that the support is very well recovered. However, the quality of the approximation of the true reflectivities decreases with decreasing SNR, as is to be expected.

Fig. 4: ROC-curves for a fixed 100-sparse vector \\(x\\) in 6400 resolution cells
## References
* [1] L. Borcea, G. Papanicolaou, and C. Tsogka. Theory and applications of time reversal and interferometric imaging. _Inverse Problems_, 19:5139-5164, 2003.
* [2] M. Born and E. Wolf. _Principles of Optics_. Cambridge University Press, Cambridge, 7th edition, 1999.
* [3] S. Boyd and L. Vandenberghe. _Convex Optimization_. Cambridge Univ. Press, 2004.
* [4] E. J. Candes, T. Tao, and J. K. Romberg. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. _IEEE Trans. Inform. Theory_, 52(2):489-509, 2006.
* [5] E. J. Candes and Y. Plan. A probabilistic and RIPless theory of compressed sensing. _IEEE Trans. Inform. Theory_, 57(11):7235-7254, 2011.
* [6] E. J. Candes, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. _Comm. Pure Appl. Math._, 59(8):1207-1223, 2006.
* [7] E. J. Candes and T. Tao. Near optimal signal recovery from random projections: universal encoding strategies? _IEEE Trans. Inform. Theory_, 52(12):5406-5425, 2006.
* [8] E. J. Candes and T. Tao. The power of convex relaxation: near-optimal matrix completion. _IEEE Trans. Inform. Theory_, 56(5):2053-2080, 2010.
* [9] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. _J. Math. Imaging Vision_, 40:120-145, 2011.
* [10] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. _SIAM J. Sci. Comput._, 20(1):33-61, 1998.
* [11] S. Chretien and S. Darses. Invertibility of random submatrices via tail decoupling and a matrix Chernoff inequality. _Statistics and Probability Letters_, 82:1479-1487, 2012.
* [12] V. de la Pena and E. Gine. _Decoupling: From Dependence to Independence_. Springer, 1999.
* [13] D. L. Donoho. Compressed sensing. _IEEE Trans. Inform. Theory_, 52(4):1289-1306, 2006.
* [14] A. Fannjiang, P. Yan, and T. Strohmer. Compressed remote sensing of sparse objects. _SIAM J. Imag. Sci._, 3:595-618, 2010.
* [15] M. Fornasier and H. Rauhut. Compressive sensing. In O. Scherzer, editor, _Handbook of Mathematical Methods in Imaging_, pages 187-228. Springer, 2011.
* [16] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The Gelfand widths of \\(\\ell_{p}\\)-balls for \\(0<p\\leq 1\\). _J. Complexity_, 26(6):629-640, 2010.
* [17] J. J. Fuchs. On sparse representations in arbitrary redundant bases. _IEEE Trans. Inform. Theory_, 50(6):1341-1344, 2004.
* [18] F. Gruber, E. Marengo, and A. Devaney. Time-reversal-based imaging and inverse scattering of multiply scattering point targets. _J. Acoust. Soc. Am._, 118:3129-3138, 2005.
* 2208, Asilomar, 2007.
* [20] S. Kunis and H. Rauhut. Random sampling of sparse trigonometric polynomials II - orthogonal matching pursuit versus basis pursuit. _Found. Comput. Math._, 8(6):737-763, 2008.
* [21] B. K. Natarajan. Sparse approximate solutions to linear systems. _SIAM J. Comput._, 24:227-234, 1995.
* [22] J. Nelson and V. Temlyakov. On the size of incoherent systems. _J. Approx. Theory_, 163(9):1238-1245, 2011.
* [23] H. V. Poor. _An Introduction to Signal Detection and Estimation_. Springer, 1994.
* [24] D. Potts, G. Steidl, and M. Tasche. Fast Fourier transforms for nonequispaced data: A tutorial. In _Modern Sampling Theory: Mathematics and Applications_, pages 247-270. Birkhauser, 2001.
* [25] H. Rauhut. Random sampling of sparse trigonometric polynomials. _Appl. Comput. Harmon. Anal._, 22(1):16-42, 2007.
* [26] H. Rauhut. Compressive sensing and structured random matrices. In M. Fornasier, editor, _Theoretical Foundations and Numerical Methods for Sparse Recovery_, volume 9 of _Radon Series Comp. Appl. Math._, pages 1-92. deGruyter, 2010.
* [27] H. Rauhut and G. E. Pfander. Sparsity in time-frequency representations. _J. Fourier Anal. Appl._, 16(2):233-260, 2010.
* [28] M. Richards. _Fundamentals of Radar Signal Processing_. McGraw-Hill, 2005.
* [29] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. _Comm. Pure Appl. Math._, 61:1025-1045, 2008.
* [30] A. Tolstoy. _Matched Field Processing in Underwater Acoustics_. World Scientific, Singapore, 1993.
* [31] J. A. Tropp. User-friendly tail bounds for sums of random matrices. _Found. Comput. Math._, 12(4):389-434, 2012.
* [32] J. A. Tropp. Recovery of short, complex linear combinations via \\(l_{1}\\) minimization. _IEEE Trans. Inform. Theory_, 51(4):1568-1570, 2005.

**Abstract:** We consider the problem of detecting the locations of targets in the far field by sending probing signals from an antenna array and recording the reflected echoes. Drawing on key concepts from the area of compressive sensing, we use an \\(\\ell_{1}\\)-based regularization approach to solve this, in general ill-posed, inverse scattering problem. As common in compressive sensing, we exploit randomness, which in this context comes from choosing the antenna locations at random. With \\(n\\) antennas we obtain \\(n^{2}\\) measurements of a vector \\(x\\in\\mathbb{C}^{N}\\) representing the target locations and reflectivities on a discretized grid. It is common to assume that the scene \\(x\\) is sparse due to a limited number of targets. Under a natural condition on the mesh size of the grid, we show that an \\(s\\)-sparse scene can be recovered via \\(\\ell_{1}\\)-minimization with high probability if \\(n^{2}\\geq Cs\\log^{2}(N)\\). The reconstruction is stable under noise and under passing from sparse to approximately sparse vectors. Our theoretical findings are confirmed by numerical simulations.
**AMS Subject Classification:** 65K05, 65C99, 65F22, 94A99, 90C25
**Keywords:** Compressive sensing, sparsity, \\(\\ell_{1}\\)-minimization, inverse scattering, regularization
Jakub Nalepa, _Member, IEEE_, Michal Myller, and Michal Kawulok, _Member, IEEE_
This work was funded by European Space Agency (HYPERNET project). J. Nalepa, M. Myller, and M. Kawulok are with Silesian University of Technology, Gliwice, Poland (e-mail: {jnalepa, michal.kawulok}@ieee.org), and KP Labs, Gliwice, Poland ({jnalepa, mmyller, mkawulok}@kplabs.pl).
## I Introduction
Hyperspectral satellite imaging (HSI) captures a wide spectrum (commonly more than a hundred of bands) of light per pixel (forming an array of reflectance values). Such detailed information is being exploited by the remote sensing, pattern recognition, and machine learning communities in the context of accurate HSI _classification_ (elaborating a class label of an HSI pixel) and _segmentation_ (finding the boundaries of objects) in many fields [1]. Although the segmentation techniques include conventional machine learning algorithms (both unsupervised [2] and supervised [3, 1]), deep learning based techniques became the main stream [4, 5, 6, 7, 8, 9, 10, 11]. They encompass deep belief networks [4, 7], recurrent neural networks [8], and convolutional neural networks (CNNs) [5, 6, 9, 10, 11].
Deep neural networks discover the underlying data representation, hence they do not require feature engineering and can potentially capture features which are unknown for humans. However, to train such large-capacity learners (and to avoid overfitting), we need huge amount of ground truth data. Acquiring such training sets is often expensive, time-consuming, and human-dependent. These problems are very important real-world obstacles in training well-generalizing models (and validating such learners) faced by the remote sensing community--they are manifested by a very small number of ground truth benchmark HSI sets (there are approx. 10 widely-used benchmarks, with the Salinas Valley, Pavia University, and Indian Pines scenes being the most popular).
To combat the problem of limited, non-representative, and imbalanced training sets, _data augmentation_ can be employed. It is a process of synthesizing new examples following the original data distribution. Since such enhanced training sets can improve generalization of the learners, data augmentation may be perceived as implicit regularization. In computer vision tasks, data augmentation often involves simple _affine_ (e.g., rotation or scaling) and _elastic_ transforms of the image data [12]. These techniques, albeit applicable to HSI, do not benefit from all available information to model useful data.
### _Related literature_
The literature on HSI data augmentation is fairly limited (not to mention, only _one_ of the deep-learning HSI segmentation methods discussed earlier in this section used augmentation--simple mirroring of training samples--for improved classification [10]). In [13], the authors calculated per-spectral-band standard deviation (for each class) in the training set. The augmented samples are later drawn from a zero-mean multivariate normal distribution \\(\\mathcal{N}(0,\\alpha\\Sigma)\\), where \\(\\Sigma\\) is a diagonal matrix with the standard deviations (for all classes) along its main diagonal, and \\(\\alpha\\) is a hyper-parameter (scale factor) of this technique. Albeit its simplicity, this augmentation was shown to be able to help improve generalization.
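A minimal numpy sketch of one plausible reading of this scheme is given below (the function name and the additive use of the drawn offsets are our assumptions):

```python
import numpy as np

def noise_injection_augment(X, y, alpha=0.25, rng=None):
    """Per-class augmentation in the spirit of [13] (a sketch).

    X: (N, b) training pixels, y: (N,) labels. For each class, offsets are
    drawn with per-band spread scaled by alpha and added to the pixels.
    """
    rng = rng or np.random.default_rng()
    X_aug, y_aug = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        std = Xc.std(axis=0)                  # per-spectral-band standard deviation
        noise = rng.normal(0.0, np.sqrt(alpha) * std, size=Xc.shape)
        X_aug.append(Xc + noise)
        y_aug.append(np.full(len(Xc), c))
    return np.vstack(X_aug), np.concatenate(y_aug)
```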
Li et al. utilized both spectral and spatial information to synthesize new samples in their pixel-block augmentation [14]. Two data-generation approaches: (i) Gaussian smoothing filtering alongside (ii) label-based augmentation were exploited in [15]. The latter technique resembles weak-labeling [16], and builds on an assumption that neighboring HSI pixels should share the same class (the label of a pixel propagates to its neighbors, and these generated examples are inserted into the training set). Thus, it may introduce wrongly-labeled samples.
Generative adversarial networks (GANs) have already attracted research attention in the context of data augmentation due to their ability of introducing invariance of models with respect to affine and appearance variations. GANs model an unknown data distribution based on the provided samples, and they are composed of a _generator_ and _discriminator_. A generator should generate data samples which follow the underlying data distribution and are indistinguishable from the original data by the discriminator (hence, they compete with each other). In a recent work [17], Audebert et al. applied GAN conditioning to ensure that the synthesized HSI examples (from random distribution) belong to the specified class. Overall, all of the state-of-the-art HSI augmentation methods are aimed at increasing the size and representativeness of training sets which are later fed to train the deep learners.
### _Contribution_
In this letter, we propose a novel _online augmentation_ technique for hyperspectral data (Section II-A)--instead ofsynthesizing samples and adding them to the training set (hence increasing its size which adversely affects the training time of deep learners), we generate new examples _during the inference_. These examples (both original and artificial) are classified using a deep net trained over the original set, and we apply the voting scheme to elaborate the final label. To our knowledge, such online augmentation has not been exploited in HSI analysis so far (test-time augmentation was utilized in medical imaging [18], where the authors applied affine transforms and noise injection into brain-tumor images for better segmentation). Also, we introduce principal component analysis (PCA) based augmentation (Section II-B) which may be used both _offline_ (before the training) and _online_. This PCA-based augmentation simulates data variability, yet follows the original data distribution (which GANs are intended to learn [17], but they are not applicable at test-time).
Our rigorous experiments performed over HSI benchmarks revealed that the online approach is very flexible--different augmentation techniques can be exploited in this setting (Section III). The results obtained using a spectral CNN indicated that the test-time augmentation significantly improves abilities of the models when compared with those trained using the original sets, and augmented using other algorithms (also, we compared our CNN with a spectral-spatial CNN from the literature whose capacity is much larger [11]). Our online approach does not sacrifice the inference speed and allows for real-time classification. We showed that the proposed PCA augmentation is extremely fast, and ensures the highest-quality generalization of the deep models for all data-split scenarios. Finally, we demonstrated that the offline and online augmentations can be effectively combined for better classification.
## II Proposed Hyperspectral Data Augmentation
### _Online Hyperspectral Data Augmentation_
Our online (test-time) data augmentation involves synthesizing \\(A\\) artificial samples \\(\\mathbf{x}^{\\prime}_{1},\\mathbf{x}^{\\prime}_{2},\\ldots,\\mathbf{x}^{\\prime}_{A}\\) for each incoming example \\(\\mathbf{x}\\) during the inference. We traverse the neighborhood of the original example and try to mitigate potential input-dependent uncertainty of the deep model. In contrast to the offline augmentation techniques, the test-time augmentation does not cause increasing the training time of the network, and it does not require defining the number of synthesized samples beforehand (also, for the majority of specific augmentation algorithms, the operation time of a trained learner would not be significantly affected since the inference is fast). We build upon the theory of ensemble learning, where elaborating a combined classifier (encompassing several weak learners) delivers high-quality generalization (it is an efficient regularizer). Here, by creating \\(A\\) artificial data points, we implicitly form a homogeneous ensemble of deep models (trained over the same training set \\(\\mathbf{T}\\)). The final class label is elaborated using _majority voting_ (with equal weights) over all (\\(A+1\\)) samples (for low ensemble confidence, when two or more classes receive the same number of votes, we perform _soft voting_--we average all class probabilities, and the final class label corresponds to the class with the highest average probability).
The proposed online HSI augmentation may be considered to be a meta-algorithm, in which a specific data augmentation method is applied to synthesize samples on the fly. Although we exploited the noise injection based approach [13], and our principal component analysis based technique (see Section II-B) in this work, we anticipate that other augmentations which are aimed at modifying an existent sample can be straightforwardly utilized here. Finally, the online augmentation may be coupled with any offline technique (Section III).
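A sketch of this inference-time procedure is given below; model and augment are placeholders (model is assumed to return per-class probabilities for a batch, and augment to synthesize one perturbed copy of a pixel).

```python
import numpy as np

def predict_online(model, pixel, augment, A=4):
    """Classify a pixel together with A synthesized samples and vote on the label."""
    batch = np.stack([pixel] + [augment(pixel) for _ in range(A)])
    probs = model(batch)                                   # shape: (A + 1, n_classes)
    votes = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
    winners = np.flatnonzero(votes == votes.max())
    if len(winners) == 1:                                  # clear majority
        return int(winners[0])
    return int(probs.mean(axis=0).argmax())                # tie: soft voting
```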
### _Principal component analysis based data augmentation_
In this work, we propose a new augmentation method based on PCA. Let us consider a training set \\(\\mathbf{T}\\) of \\(N\\) HSI pixels \\(\\mathbf{t}_{i}\\), where \\(i=1,2,\\ldots,N\\), and each \\(\\mathbf{t}_{i}\\) is \\(b\\)-dimensional (\\(b\\) denotes the number of bands in this HSI). PCA extracts a set of \\(b^{\\prime}\\) (\\(b^{\\prime}\\ll b\\)) projection directions (vectors) by maximizing the projected variance of a given \\(b\\)-dimensional dataset--the first principal component (\\(PC_{1}\\)) accounts for as much of the data variability as possible, and so forth. First, we center the data at the origin: we subtract the average sample \\(\\bar{\\mathbf{t}}=\\sum_{i=1}^{N}\\mathbf{t}_{i}/N\\) from each \\(\\mathbf{t}_{i}\\), and form the data matrix \\(\\mathcal{D}\\) (of \\(b\\times N\\) size), whose \\(i\\)th column is (\\(\\mathbf{t}_{i}-\\bar{\\mathbf{t}}\\)). The \\(b\\times b\\) covariance matrix becomes \\(\\mathcal{C}=(1/N)\\mathcal{D}\\mathcal{D}^{\\mathrm{T}}\\), and it undergoes eigendecomposition \\(\\mathcal{C}=\\mathbf{\\Phi}\\mathbf{\\Lambda}\\mathbf{\\Phi}^{T}\\), where \\(\\mathbf{\\Lambda}=diag(\\lambda_{1},\\lambda_{2},\\ldots,\\lambda_{b})\\) is the matrix with the non-increasingly ordered eigenvalues along its main diagonal, and \\(\\mathbf{\\Phi}=[PC_{1}|PC_{2}|\\ldots|PC_{b}]\\) is the matrix with \\(b\\) corresponding eigenvectors (principal components) as columns. Finally, \\(b^{\\prime}\\) principal components form an orthogonal base, and each sample \\(\\mathbf{t}\\) can be projected onto a new feature space: \\(\\mathbf{t}^{\\prime}=\\mathbf{\\Phi}^{T}\\mathbf{t}\\). Importantly, each sample \\(\\mathbf{t}^{\\prime}\\) can be projected back to its original space: \\(\\mathbf{t}^{\\prime\\prime}=\\mathbf{\\Phi}\\mathbf{t}^{\\prime}\\) with the error \\(\\epsilon=\\sum_{i=1}^{b}(t_{i}-t_{i}^{{}^{\\prime\\prime}})\\) (the PCA-training procedure minimizes this error--it is non-zero if \\(b^{\\prime}<b\\); otherwise, if \\(b^{\\prime}=b\\), there is no reconstruction error).
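The PCA machinery described above can be written compactly in numpy (a sketch with our own function name):

```python
import numpy as np

def fit_pca(T):
    """T is an (N, b) matrix of training pixels; returns the mean and Phi."""
    t_bar = T.mean(axis=0)
    D = (T - t_bar).T                          # b x N centered data matrix
    C = (D @ D.T) / T.shape[0]                 # b x b covariance matrix
    eigvals, Phi = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # non-increasing eigenvalues
    return t_bar, Phi[:, order]                # columns of Phi are PC_1, ..., PC_b
```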
The first step of our PCA-based data augmentation involves transforming all training samples using PCA (trained over \\(\\mathbf{T}\\)). Afterwards, the first principal component \\(PC_{1}\\) (of each sample) is multiplied by a random value \\(\\alpha\\) drawn from a uniform distribution \\(\\mathcal{U}(\\alpha_{\\min},\\alpha_{\\max})\\), where \\(\\alpha_{\\min}\\) and \\(\\alpha_{\\max}\\) are the hyper-parameters of our method (\\(\\alpha\\) is drawn independently for all original examples). This process is visualized in Fig. 1--we can observe that the synthesized examples (Fig. 1c) preserve the original data distribution (Fig. 1b) projected onto a reduced feature space, and preserve inter-class relationships. Finally, these samples are projected back onto the original space (using all principal components to ensure correct mapping), and they are added to the augmented \\(\\mathbf{T}\\) (if executed offline). This PCA-based augmentation can be applied in both offline and online settings (in both cases, PCA is trained over the original \\(\\mathbf{T}\\)).

Fig. 1: In our PCA-based augmentation, we randomly shift the values of the original samples along the first principal component (PC) to synthesize new data. A part (black rectangle) of (a) Pavia University data distribution (after PCA) is zoomed (b), and the synthesized examples are shown in (c) with random \\(\\alpha\\) values (for each sample), where \\(\\alpha_{\\min}=0.9\\) and \\(\\alpha_{\\max}=1.1\\) (note that too large \\(\\alpha\\)'s could adversely impact separability of the classes).
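A sketch of the complete augmentation step (using fit_pca from above; rows of \\(\\mathbf{T}\\) are samples, so the projection \\(\\mathbf{t}^{\\prime}=\\mathbf{\\Phi}^{T}\\mathbf{t}\\) becomes a right-multiplication, following the formulas of this section):

```python
import numpy as np

def pca_augment(T, Phi, alpha_min=0.9, alpha_max=1.1, rng=None):
    """Rescale each sample's first-PC coordinate by alpha ~ U(alpha_min, alpha_max)."""
    rng = rng or np.random.default_rng()
    T_proj = T @ Phi                                   # project onto the PCA basis
    T_proj[:, 0] *= rng.uniform(alpha_min, alpha_max, size=T.shape[0])
    return T_proj @ Phi.T                              # back-projection with all PCs
```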
## III Experiments
The experimental objective was to verify the impact of data augmentation on the deep model generalization. For online augmentation, we applied our PCA augmentation (PCA-ON), and noise injection (Noise-ON) [13], whereas for the offline setting, we used our PCA-based method (PCA), generative adversarial nets (GAN) [17], and noise injection (Noise) [13]. GAN cannot be used online, since it does not modify an incoming example, but rather synthesizes samples which follow an approximated distribution. Finally, we combined online and offline augmentation in PCA/PCA-ON (PCA augmentation is used to both augment the set beforehand, and to generate new samples at test time), and GAN/PCA-ON. For each offline technique, we at most doubled the number of original samples (unless that number would exceed the most numerous class--in such a case, we augmented only by the missing difference). For online augmentation, we synthesize \\(A=4\\) samples, and for PCA and PCA-ON, we set \\(\\alpha_{\\min}=0.9\\) and \\(\\alpha_{\\max}=1.1\\).
We exploit our shallow (thus resource-frugal) 1D spectral CNN (Fig. 2) coded in Python 3.6 [19]. Larger-capacity CNNs require longer training and infer slower, hence are less likely to be deployed for Earth observation, especially on board a satellite. The training (ADAM, learn. rate of \\(10^{-4}\\), \\(\\beta_{1}=0.9\\), and \\(\\beta_{2}=0.999\\)) stops if the validation set \\(\\mathbf{V}\\) (a random subset of \\(\\mathbf{T}\\)) accuracy plateaus for 15 epochs.
We train and validate the deep models using: (1) balanced \\(\\mathbf{T}\\) sets with random pixels (B), (2) imbalanced \\(\\mathbf{T}\\) sets with random pixels (IB), and (3) our patched sets (P) [19] (for fair comparison, the numbers of pixels in \\(\\mathbf{T}\\) and \\(\\Psi\\) for B and IB are close to those reported in [11]). We also report the results obtained using a spectral-spatial CNN (3D-CNN) [11], trained over the original \\(\\mathbf{T}\\) (3D-CNN--in contrast to our CNN--does suffer from the training-test information leak problem, and the 3D-CNN results over B and IB are over-optimistic [19]). For each fold in (3), we repeat the experiments \\(5\\times\\), and for (1) and (2), we perform Monte-Carlo cross-validation with the same number of runs (e.g., if 5 folds are run \\(5\\times\\), we execute \\(25\\) Monte-Carlo runs for B and IB). We report per-class, average (AA), and overall accuracy (OA), averaged across all runs.
We focused on three HSI benchmarks (see their class-distribution characteristics at: [http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes](http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes)): Salinas Valley, USA (\\(217\\times 512\\) pixels, NASA AVIRIS sensor; \\(||\\mathbf{T}||=4320\\), \\(||\\mathbf{V}||=480\\), \\(||\\Psi||=49329\\)) presents different sorts of vegetation (16 classes, 224 bands, 3.7 m spatial resolution). Indian Pines (\\(145\\times 145\\), AVIRIS; \\(||\\mathbf{T}||=2444\\), \\(||\\mathbf{V}||=271\\), \\(||\\Psi||=7534\\)) covers the North-Western Indiana, USA (agriculture and forest, 16 classes, 200 channels, 20 m). Pavia University (\\(340\\times 610\\), ROSIS; \\(||\\mathbf{T}||=2025\\), \\(||\\mathbf{V}||=225\\), \\(||\\Psi||=40526\\)) was captured over Lombardy, Italy (urban scenery, 9 classes, 103 channels, 1.3 m).
The results (over the test sets \\(\\Psi\\)) obtained for the Salinas Valley, Indian Pines, and Pavia University datasets are gathered in Table I. Introducing augmented samples (in both online and offline settings) helped boost generalization abilities of the deep models in the majority of cases (even up to more than 8% of OA for GAN, PCA, and PCA/PCA-ON in Salinas, B; only the Noise offline augmentation deteriorated both OA and AA for P). Interestingly, exploring the local neighborhood randomly (Noise and Noise-ON) can notably deteriorate OA and AA. It usually occurs for under-represented classes (e.g., C7 in Pavia) since their examples lie close to other-class examples in the discovered feature space (therefore, they can be easily "confused" with each other). This problem is addressed by the data-distribution analysis in our PCA-based augmentations. Coupling offline and online augmentation (PCA/PCA-ON and GAN/PCA-ON) gave consistent high-quality results over all sets and all training-test splits, and dealt well with the HSI imbalance (in P, we did not ensure that examples of all classes are included in the original \\(\\mathbf{T}\\), thus P is very challenging [19]).
To verify the statistical significance of the results (and see if the differences in the average per-class accuracy are important in the statistical sense), we executed two-tailed Wilcoxon's tests for each dataset split (B, IB, and P) over per-class AA for all HSI. The results reported in Table II show that applying HSI data augmentation is beneficial in most cases and delivers significant improvements in accuracy. GAN did equally well as, e.g., PCA, PCA-ON, and our combined PCA/PCA-ON and GAN/PCA-ON for B, as Noise-ON, PCA-ON, and GAN/PCA-ON for IB, and as Noise-ON and PCA-ON for P. It indicates that employing time-consuming and complex deep-learning engines for data augmentation does not necessarily bring larger improvements in the performance of the deep models.
Our combined approaches (PCA/PCA-ON and GAN/PCA-ON) were stable and consistently ensured high-quality generalization (as shown in Table I) of the deep models over all splits. This stability is also manifested in Table III, where we summarize the results across all sets (although PCA gave the best accuracy for B, the differences between PCA and PCA/PCA-ON and GAN/PCA-ON are not statistically significant). We can appreciate that our PCA-based augmentation (offline, online, or combined) allowed us to obtain the best generalization--very intuitive PCA-based data-distribution analysis for synthesizing samples outperformed or worked on par with GAN in the case of difficult (small and imbalanced) sets. Finally, our CNN surpassed the accuracy elaborated using a significantly larger 3D-CNN from the literature (with a bigger capacity) for P (note that the results obtained using 3D-CNN for B and IB are over-optimistic due to the intrinsic training-test information leak problem, hence they cannot be considered reliable [11]).
Fig. 2: Our CNN with \\(n\\) kernels in the convolutional layer (\\(s\\) is a stride) and \\(l_{1}\\) and \\(l_{2}\\) neurons in the fully-connected (FC) layers. BN is batch normalization.
TABLE I: Per-class (C1-C16), average (AA), and overall (OA) accuracy elaborated for all investigated augmentation strategies over the benchmark sets and training-test splits (B, IB, and P).
To gain better insights into the augmentation performance (and its potential overhead imposed on the deep models in terms of training and/or test times), we collected the average execution times of the most important steps of the investigated methods in Table IV. It can be observed that training of GANs is very time-consuming (it was executed using an NVIDIA GeForce GTX 1060), taking orders of magnitude longer than the pre-processing in other offline techniques (PCA and Noise). Although all offline augmentations affect the training time of deep networks, these differences are not dramatic. Finally, the online augmentation allowed us to classify test pixels in real-time (note that we report the inference time in ms).
## IV Conclusions
In this letter, we introduced a new online HSI data augmentation approach which synthesizes examples at test time. This is in contrast to other state-of-the-art hyperspectral data augmentation techniques that work offline (i.e., before the deep-network training, to increase the training set cardinality and representativeness). Our experimental study, performed over three HSI benchmark sets (with different training-test data splits) and coupled with statistical tests, revealed that our online augmentation is very flexible (different augmentations can be applied here), improves the generalization abilities of deep neural networks, and works in real-time. Also, we showed that combining online and offline augmentation leads to consistently well-performing models. Finally, we proposed a principal component analysis based augmentation which operates extremely fast, synthesizes high-quality data, outperforms other augmentations for small and imbalanced sets, and is applicable in online and offline settings.
## References
* [1] T. Dundar and T. Ince, \"Sparse representation-based hyperspectral image classification using multiscale superpixels and guided filter,\" _IEEE GRSL_, pp. 1-5, 2018.
* [2] G. Bilgin, S. Erturk, and T. Yildirim, "Segmentation of hyperspectral images via subtractive clustering and cluster validation using one-class SVMs," _IEEE TGRS_, vol. 49, no. 8, pp. 2936-2944, 2011.
* [3] F. Li, D. Clausi, L. Xu _et al._, \"ST-IRGS: A region-based self-training algorithm applied to hyperspectral image classification and segmentation,\" _IEEE TGRS_, vol. 56, no. 1, pp. 3-16, 2018.
* [4] Y. Chen, X. Zhao, and X. Jia, \"Spectral spatial classification of hyperspectral data based on deep belief network,\" _IEEE J-STARS_, vol. 8, no. 6, pp. 2381-2392, 2015.
* [5] W. Zhao and S. Du, \"Spectral-spatial feature extraction for hyperspectral image classification,\" _IEEE TGRS_, vol. 54, no. 8, pp. 4544-4554, 2016.
* [6] Y. Chen, H. Jiang, C. Li _et al._, \"Deep feature extraction and classification of hyperspectral images based on convolutional neural networks,\" _IEEE TGRS_, vol. 54, no. 10, pp. 6232-6251, 2016.
* [7] P. Zhong, Z. Gong, S. Li _et al._, \"Learning to diversify deep belief networks for hyperspectral image classification,\" _IEEE TGRS_, vol. 55, no. 6, pp. 3516-3530, 2017.
* [8] L. Mou, P. Ghamisi, and X. X. Zhu, \"Deep recurrent nets for hyperspectral classification,\" _IEEE TGRS_, vol. 55, no. 7, pp. 3639-3655, 2017.
* [9] A. Santara, K. Mani, P. Hatwar _et al._, \"BASS Net: Band-adaptive spectral-spatial feature learning neural network for hyperspectral image classification,\" _IEEE TGRS_, vol. 55, no. 9, pp. 5293-5301, 2017.
* [10] H. Lee and H. Kwon, \"Going deeper with contextual CNN for hyperspectral classification,\" _IEEE TIP_, vol. 26, no. 10, pp. 4843-4855, 2017.
* [11] Q. Gao, S. Lim, and X. Jia, \"Hyperspectral image classification using convolutional neural networks and multiple feature learning,\" _Rem. Sens._, vol. 10, no. 2, p. 299, 2018.
* [12] A. Krizhevsky, I. Sutskever, and G. Hinton, "Imagenet classification with deep convolutional neural networks," in _Proc. NIPS_, 2012, pp. 1097-1105.
* [13] V. Slavkovikj, S. Verstockt, W. De Neve _et al._, "Hyperspectral image classification with CNNs," in _Proc. ACM MM_, 2015, pp. 1159-1162.
* [14] W. Li, C. Chen, M. Zhang _et al._, \"Data augmentation for hyperspectral classification with deep CNN,\" _IEEE GRSL_, pp. 1-5, 2018.
* [15] J. Acquarelli, E. Marchiori, L. M. Buydens _et al._, \"Spectral-spatial classification of hyperspectral images,\" _Rem. Sens._, vol. 10, no. 7, 2018.
* [16] Y.-Y. Sun, Y. Zhang, and Z.-H. Zhou, \"Multi-label learning with weak label,\" in _Proc. AAAI_. AAAI Press, 2010, pp. 593-598.
* [17] N. Audebert, B. Le Saux, and S. Lefèvre, "Generative adversarial networks for realistic synthesis of hyperspectral samples," _CoRR_, vol. abs/1806.02583, pp. 1-4, 2018.
* [18] G. Wang, W. Li, S. Ourselin _et al._, \"Automatic brain tumor segmentation using convolutional neural networks with test-time augmentation,\" _CoRR_, vol. abs/1810.07884, pp. 1-12, 2018.
* [19] J. Nalepa, M. Myller, and M. Kawulok, \"Validating hyperspectral image segmentation,\" _IEEE GRSL_, pp. 1-5, 2019, in press, DOI:10.1109/LGRS.2019.2895697 (pre-print: [https://arxiv.org/abs/1811.03707](https://arxiv.org/abs/1811.03707)). | Data augmentation is a popular technique which helps more generalization capabilities of deep neural networks. It plays a pivotal role in remote-sensing scenarios in which the amount of high-quality ground truth data is limited, and acquiring new examples is costly or impossible. This is a common problem in hyperspectral imaging, where manual annotation of image data is difficult, expensive, and prone to human bias. In this letter, we propose online data augmentation of hyperspectral data which is executed during the inference rather than before the training of deep networks. This is in contrast to all other state-of-the-art hyperspectral augmentation algorithms which increase the size (and representativeness) of training sets. Additionally, we introduce a new principal component analysis based augmentation. The experiments revealed that our data augmentation algorithms improve generalization of deep networks, work in real-time, and the online approach can be effectively combined with offline techniques to enhance the classification accuracy.
Hyperspectral imaging, data augmentation, deep learning, classification, segmentation, PCA. | Condense the content of the following passage. | 200 |
Tengfei Long
Zhaoming Zhang
Guojin He
Weili Jiao
Chao Tang
Bingfang Wu
Xiaomei Zhang
Guizhou Wang
Ranyu Yin
Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China, 100094
Hainan Key Laboratory for Earth Observation, Sanya, Hainan Province, China, 572029
China University of Mining and Technology, Xuzhou, Jiangsu, China, 221116
## 1 Introduction
Accurate and complete data on fire locations and burned areas (BA) are important for a variety of applications, including quantifying trends and patterns of fire occurrence and assessing the impacts of fires on a range of natural and social systems, e.g. simulating carbon emissions from biomass burning (Chuvieco et al., 2016). Remotely sensed satellite imagery has been widely used to generate burned area products. Burned area products at the global scale have been mostly based on coarse spatial resolution data such as Advanced Very High Resolution Radiometer (AVHRR), Geostationary Operational Environmental Satellite (GOES), VEGETATION or Moderate Resolution Imaging Spectroradiometer (MODIS) images. The main global burned area products include GBS (Carmona-Moreno et al., 2005), Global Burned Area 2000 (GBA2000) (Tansey, 2004), GLOBSCAR (Simon, 2004), GlobCarbon (Plummer et al., 2006), L3JRC (Tansey et al., 2008), MCD45 (Roy et al., 2005), GFED (Giglio et al., 2010), MCD64 (Giglio et al., 2016), and Fire_cci (Chuvieco et al., 2016).
The recently released Fire_cci product is produced based on MODIS images and has the highest spatial resolution (250m) of all the existing global burned area products (Chuvieco et al., 2016; Pettinari and Chuvieco, 2018). However, the requisites of the climate modelling community are not yet met by the current global burned area products, as these products do not provide enough spatial detail (Bastarrika et al., 2011). Imagery collected by the family of Landsat sensors is useful and appropriate for monitoring the extent of area burned, and provides spatial and temporal resolutions ideal for science and management applications. Landsat sensors can provide a longer temporal record (from the 1970s until now) of burned area relative to existing global burned area products, potentially with increased accuracy and spatial detail in most areas on the earth (Stroppiana et al., 2012). Great importance has been attached to developing burned area products based on Landsat data in the past 10 years (Bastarrika et al., 2011; Stroppiana et al., 2012; Hawbaker et al., 2017). Up to now, there has been no Landsat-based global burned area product; however, some regional Landsat burned area products have been publicly released in recent years. Australia released its Fire Scars (AFS) products derived from all available Landsat 5, 7 and 8 images using a time-series change detection technique (Goodwin and Collett, 2014). Fire scars are automatically detected and mapped using dense time series of Landsat imagery acquired over the period 1987-2015, and the AFS product only covers the state of Queensland, Australia. The Monitoring Trends in Burn Severity (MTBS) project, sponsored by the Wildland Fire Leadership Council (WFLC), provides consistent, 30-meter resolution burn severity data and fire perimeters across all lands of the United States from 1984-2015 (only fires larger than 200 ha in the eastern US and 400 ha in the western US are mapped) (Eidenshink et al., 2007). MTBS products are generated based on the difference of the Normalized Burned Ratio (NBR) calculated from pre-fire and post-fire images, in which the burned area boundary is delineated by on-screen interpretation; the process of developing a categorical burn severity product is subjective and dependent on analyst interpretation. The Burned Area Essential Climate Variable (BAECV), developed by the U.S. Geological Survey (USGS), produced Landsat-derived burned area products across the conterminous United States (CONUS) from 1984-2015, and its products were released in April 2017 (Hawbaker et al., 2017). The main difference between MTBS and BAECV is that the BAECV products are automatically generated from all available Landsat images.
In summary, global burned area products are only available at coarse spatial resolution, while 30-meter resolution burned area products are limited to specific regions. The majority of coarse spatial resolution algorithms developed to produce global burned area products use a multi-temporal change detection technique, because such satellite data have very high temporal resolution and are capable of monitoring fire-affected land cover changes. For example, the algorithm of the MODIS burned area product (MCD45) is developed from the bi-directional reflectance model-based expectation change detection approach (Roy et al., 2005). One of the difficulties in producing Landsat-based burned area products is that the traditional approaches successfully applied to extract global burned area from MODIS, VEGETATION, etc. don't work well due to the limited temporal resolution of the Landsat sensors. Moreover, the analysis of post-fire reflectance may be easily contaminated by clouds or weakened by quick vegetation recovery, particularly in tropical regions (Alonso-Canas and Chuvieco, 2015). Another difficulty is that global 30-meter resolution annual burned area mapping needs to utilize dense time-series Landsat images, and the required datasets can comprise hundreds of thousands of Landsat scenes, resulting in impractical processing time. Although some studies have addressed regional burned area detection from Landsat time series (Goodwin and Collett, 2014; Hawbaker et al., 2017; Liu et al., 2018), global-scale results have not been reported. However, thanks to Google Earth Engine (GEE), a new generation of cloud computing platform with access to a huge catalog of satellite imagery and global-scale analysis capabilities (Gorelick et al., 2017), it is now possible to perform global-scale geospatial analysis efficiently without being burdened by the pre-processing of satellite images. In this study, we focus on an automated approach to generate a global-scale high-resolution burned area map using dense time series of Landsat images on GEE, and a novel 30-meter resolution global annual burned area map of 2015 (GABAM 2015) is released.
## 2 Methodology
### Sampling design
The spectral characteristics of burned areas vary in complex ways across different ecosystems, fire regimes and climatic conditions. To guarantee the accuracy of the global burned area map as well as the completeness of the quality assessment, a stratified random sampling method (Padilla et al., 2015; Boschetti et al., 2016; Padilla et al., 2017) was used to generate two sets of sites, for classifier training and for the validation of GABAM 2015, respectively. The training and validation sites were chosen randomly based on stratifications of both fire frequency and type of land cover.
Firstly, the Earth's land surface was partitioned based on the 14 land cover classes according to the MCD12C1 product (Friedl and Sulla-Menashe, 2015) of 2012 using the University of Maryland (UMD) scheme. These types were then merged into 8 categories based on their similarities (Chuvieco et al., 2011), i.e. Broadleaved Evergreen, Broadleaved Deciduous, Coniferous, Mixed Forest, Shrub, Rangeland, Agriculture and Others. Table 1 shows the reclassification rule from UMD land cover types to the new classifications. As the "Others" category consists of the biomes less prone to fire, only the other 7 land cover categories were considered to create the geographic stratifications in this work.
Secondly, the globe was divided into 5 partitions based on the BA density in 2015 provided by the Global Fire Emissions Database (GFED) version 4.0 (Giglio et al., 2013), the most widely used inventory in global biogeochemical and atmospheric modeling studies (Giglio et al., 2016). Specifically, the GFED4 monthly products of 2015 were utilized to produce an annual composition (GFED4 2015), consisting of 720 rows and 1440 columns which correspond to the global \(0.25^{\circ}\times 0.25^{\circ}\) GFED grid, and each pixel summed the total area of BA (BA density, km\({}^{2}\)) that occurred in the grid cell during the whole year. The BA density of GFED4 2015 was then divided into 5 equal-frequency intervals (Chuvieco et al., 2011) with Quantile classification.
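The equal-frequency partition of the BA density grid is simple to reproduce. The following is a minimal sketch in Python using NumPy, where `ba_density` is an assumed name standing in for the flattened 720x1440 GFED4 2015 composition and is filled with synthetic values purely for demonstration.

```python
import numpy as np

# Stand-in for the 720 x 1440 GFED4 2015 annual composition
# (km^2 of BA per 0.25-degree cell); synthetic values for illustration only.
rng = np.random.default_rng(0)
ba_density = rng.gamma(shape=0.3, scale=5.0, size=720 * 1440)

# Equal-frequency (Quantile) classification into 5 BA density levels:
# the class edges are the 20th, 40th, 60th and 80th percentiles.
edges = np.quantile(ba_density, [0.2, 0.4, 0.6, 0.8])
levels = np.digitize(ba_density, edges)  # level index in {0, ..., 4}

print(np.bincount(levels))  # roughly equal cell counts per level
```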
By spatially intersecting the 7 land cover categories and 5 BA density levels, we obtained the final 35 strata with different fire frequencies and biomes. The samples were equally allocated to the 5 BA density levels, but for the different land cover categories we also took into account the BA extent within each stratum, with larger sample sizes allocated to strata with higher BA extent (Padilla et al., 2014). According to the strategy of stratified sampling, 120 samples (24 for each BA density level) were randomly selected to generate the training dataset, and the spatial dimension of the sampling units was based on the Landsat World Reference System II (WRS-II). Similarly, 80 validation sites (16 for each BA density level) were also created by stratified random sampling, while keeping a distance (at least 200 km) from the training samples so as not to fall within the extent of the training Landsat scenes. Figure 1 illustrates the distribution of the 120 random Landsat image scenes and 80 validation sites over a map of BA density extracted from GFED4 2015, and Table 2 shows
\\begin{table}
\\begin{tabular}{l l} \\hline \\hline New classification & Original UMD type \\\\ \\hline Broadleaved Evergreen & Evergreen Broadleaf forest \\\\ Broadleaved Deciduous & Deciduous Broadleaf forest \\\\ Coniferous & Evergreen Needleleaf forest \\\\ & Deciduous Needleleaf forest \\\\ Mixed Forest & Mixed forest \\\\ Shrub & Closed shrublands \\\\ & Open shrublands \\\\ Rangeland & Woody savannas \\\\ & Savannas \\\\ & Grasslands \\\\ Agriculture & Croplands \\\\ Others & Water \\\\ & Urban and built-up \\\\ & Barren or sparsely vegetated \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Map between original UMD land cover types and new classifications for the geographic stratification.
the distribution of training and validation samples over the different land cover types.
### Training dataset
To analyze the characteristics of burned areas in Landsat images, 120 Landsat-8 image scenes were chosen according to the WRS-II frames generated by stratified random sampling in section 2.1. All the Landsat-8 images used in this study were acquired from the datasets of USGS Landsat-8 Surface Reflectance Tier 1 and Tier 2 in the Google Earth Engine platform, whose ImageCollection IDs are "LANDSAT/LC08/C01/T1_SR" and "LANDSAT/LC08/C01/T2_SR". These data have been atmospherically corrected using LaSRC (Vermote et al., 2016), and include a cloud, shadow, water and snow mask produced using Fmask (Zhu and Woodcock, 2014), as well as a per-pixel saturation mask. For the purpose of burned area mapping, 6 bands of the Landsat-8 image were used, i.e. three Visible bands (BLUE, 0.452-0.512 \(\upmu\)m;
\\begin{table}
\\begin{tabular}{l c c} \\hline \\hline Land cover type & Training sample count & Validation sample count \\\\ \\hline Broadleaved Evergreen & 16 & 11 \\\\ Broadleaved Deciduous & 12 & 9 \\\\ Coniferous & 13 & 9 \\\\ Mixed Forest & 12 & 8 \\\\ Shrub & 18 & 12 \\\\ Rangeland & 25 & 15 \\\\ Agriculture & 24 & 16 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Distribution of training and validation samples over the different land cover types.
Figure 1: The distribution of 120 random Landsat image scenes and 80 validation sites over a map of BA density extracted from GFED4 2015.
GREEN, 0.533-0.590 \\(\\upmu\\)m; RED, 0.636-0.673 \\(\\upmu\\)m), Near Infrared band (NIR, 0.851-0.879 \\(\\upmu\\)m), and two Short Wave Infrared bands (SWIR1, 1.566-1.651 \\(\\upmu\\)m; SWIR2, 2.107-2.294 \\(\\upmu\\)m).
In this work, the burned area mapping algorithm was implemented on the GEE platform, and the maximum quantity of input samples is limited by GEE's classifiers, thus an average of 90-100 sample points was collected by experienced experts from each Landsat-8 image, making the total quantity of sample points 12881 (6735 burned samples and 6146 unburned samples). Specifically, the shortwave infrared (SWIR2), Near Infrared (NIR) and Green bands were composited in a Red, Green, Blue (RGB) combination in order to better visualize burned areas, and burned samples, including fire scars of different burn severity and of various biomass types, were extracted from the pixels showing magenta color (Koutsias and Karteris, 2000). The unburned pixels were extracted randomly over the non fire-affected areas covering vegetation, built-up land, bare land, topographic shadows, borders of lakes, etc. For those confusing pixels that were difficult to identify as burned scars, a further check was performed by examining the Landsat images on the nearest date of the previous year, or higher-resolution images on the nearest date in the Google Earth software. To ensure that only clearly burned pixels were selected, the burned samples were collected carefully to avoid pixels near the boundaries of burned scars (Bastarrika et al., 2011); burned pixels located in active flames or covered by smoke were also excluded to prevent potential contamination of the burned samples. The land surface reflectance of the collected samples in the BLUE, GREEN, RED, NIR, SWIR1 and SWIR2 bands was extracted for further analysis.
### Sensitive features for burned surfaces
Figure 2 shows the statistical mean reflectance (with standard deviations) of burned samples in Landsat-8 bands.
Burned areas are characterized by deposits of charcoal, ash and fuel, and the reflectance of the burned
Figure 2: Means and standard deviations of land surface reflectance of burned Landsat-8 pixels in different bands.
pixels generally increases along with the wavelength, while the burned pixels have similar reflectance in the SWIR1 and SWIR2 bands, which is greater than that in the other bands. However, the spectral character of post-fire pixels varies greatly (standard deviations in Figure 2) according to the type and condition of the vegetation prior to burning and the degree of combustion (Bastarrika et al., 2014), and none of the existing spectral indices can be considered the best choice for identifying burned surfaces without misclassification with other targets in all environments or fire regimes (Boschetti et al., 2010). Consequently, in this work, we made use of the most common spectral indices for Landsat images previously suggested in BA studies; their formulas are summarized in Table 3. Some of these spectral indices were specifically developed for burn detection as they are sensitive to charcoal and ash deposition, such as the normalized burned ratio (NBR (Key and Benson, 1999)), normalized burned ratio 2 (NBR2 (Lutes et al., 2006)), burned area index (BAI (Martin, 1998)) and mid infrared burn index (MIRBI (S. Trigg, 2001)). In addition, other indices that are not burn-specific may also be useful to map burned areas when combined with burn-specific indices. For instance, although the normalized difference vegetation index (NDVI) is not the best index for burned area mapping, it is sensitive to vegetation greenness and therefore to the absence of vegetation in the case of burned areas (Stroppiana et al., 2009). The global environmental monitoring index (GEMI (Pinty and Verstraete, 1992)) is an improved vegetation index, specifically designed to minimize problems of contamination of the vegetation signal by extraneous factors, and it is considered very important for the remote sensing of dark surfaces, such as recently burned areas (Pereira, 1999). The soil adjusted vegetation index (SAVI (Huete, 1988)), which was originally designed for sparse vegetation and outperforms NDVI in sparsely vegetated environments (Veraverbeke et al., 2012), is also helpful to improve the separability of burns from soil and water (Stroppiana et al., 2012). The normalized difference moisture index (NDMI (Wilson and Sader, 2002)), which is sensitive to the moisture levels in vegetation, is also related to fuel levels in fire-prone areas. We also evaluated the relative importance of 14 Landsat features (the 8 spectral indices in Table 3 and the surface reflectance in the 6 bands of the Landsat-8 image) when applied to classify burned areas using the random forest algorithm (Pedregosa et al., 2011) (as shown in Figure 3).
Figure 3 shows that the spectral indices were generally more important than the surface reflectance in most bands, except for the NIR and SWIR2 bands, which are sensitive to the removal of vegetation cover and the deposits of char and ash (Pleniou and Koutsias, 2013). We also found that NBR2, BAI, MIRBI and SAVI had the greatest relative importance, even though SAVI was not initially developed for burned area detection. However, considering the potential contribution of the features with relatively low importance in distinguishing burned scars, all 14 features were selected as sensitive features to perform global burned area mapping in this study.
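As a concrete illustration, the indices in Table 3 reduce to a handful of band operations on GEE. The following is a minimal sketch with the Earth Engine Python API; the band names (B2-B7) and the 1e-4 reflectance scaling assume the pre-Collection-2 SR product used in this work, and `add_indices` is an illustrative helper rather than released code.

```python
import ee
ee.Initialize()

def add_indices(img):
    """Append the 8 burn-sensitive spectral indices of Table 3 to a
    Landsat-8 SR image (reflectance stored as integers scaled by 1e4)."""
    r = (img.select(['B4', 'B5', 'B6', 'B7']).multiply(0.0001)
            .rename(['RED', 'NIR', 'SWIR1', 'SWIR2']))
    nbr = r.normalizedDifference(['NIR', 'SWIR2']).rename('NBR')
    nbr2 = r.normalizedDifference(['SWIR1', 'SWIR2']).rename('NBR2')
    ndvi = r.normalizedDifference(['NIR', 'RED']).rename('NDVI')
    ndmi = r.normalizedDifference(['NIR', 'SWIR1']).rename('NDMI')
    bai = r.expression('1.0 / ((NIR - 0.06)**2 + (RED - 0.1)**2)', {
        'NIR': r.select('NIR'), 'RED': r.select('RED')}).rename('BAI')
    mirbi = r.expression('10*SWIR2 - 0.98*SWIR1 + 2', {
        'SWIR1': r.select('SWIR1'), 'SWIR2': r.select('SWIR2')}).rename('MIRBI')
    savi = r.expression('1.5 * (NIR - RED) / (NIR + RED + 0.5)', {
        'NIR': r.select('NIR'), 'RED': r.select('RED')}).rename('SAVI')
    eta = r.expression(
        '(2*(NIR**2 - RED**2) + 1.5*NIR + 0.5*RED) / (NIR + RED + 0.5)',
        {'NIR': r.select('NIR'), 'RED': r.select('RED')})
    gemi = eta.expression(
        '(eta*(1 - 0.25*eta) - (RED - 0.125)) / (1 - RED)',
        {'eta': eta, 'RED': r.select('RED')}).rename('GEMI')
    return img.addBands([nbr, nbr2, ndvi, ndmi, bai, mirbi, savi, gemi])
```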
### Burned area mapping via GEE
In this work, the annual burned area map is defined as the spatial extent of fires that occur within a whole year, excluding fires that occurred in previous years. Therefore, global 30-meter resolution annual burned area mapping needs to utilize dense time-series Landsat images, and the pipeline of annual burned area mapping via GEE is described in Figure 4.
As shown in Figure 4, the pipeline mainly consists of three steps: model training, per-pixel processing and burned area shaping. The following provides more details of each step.
\\begin{table}
\\begin{tabular}{l l} \\hline \\hline
**Name** & **Formula** \\\\ \\hline Normalized burned ratio & \\(NBR=\\frac{\\rho_{NIR}-\\rho_{SWIR2}}{\\rho_{NIR}+\\rho_{SWIR2}}\\) \\\\ Normalized burned ratio 2 & \\(NBR2=\\frac{\\rho_{SWIR1}-\\rho_{SWIR2}}{\\rho_{SWIR1}+\\rho_{SWIR2}}\\) \\\\ Burned area index & \\(BAI=\\frac{1}{(\\rho_{NIR}-0.06)^{2}+(\\rho_{RED}-0.1)^{2}}\\) \\\\ Mid infrared burn index & \\(MIRBI=10\\rho_{SWIR2}-0.98\\rho_{SWIR1}+2\\) \\\\ Normalized difference vegetation index & \\(NDVI=\\frac{\\rho_{NIR}-\\rho_{RED}}{\\rho_{NIR}+\\rho_{RED}}\\) \\\\ Global environmental monitoring index & \\(GEMI=\\frac{\\eta(1-0.25\\eta)-(\\rho_{RED}-0.125)}{1-\\rho_{RED}}\\), \\\\ & \\(\\eta=\\frac{2(\\rho_{NIR}^{2}-\\rho_{RED}^{2})+1.5\\rho_{NIR}+0.5\\rho_{RED}}{ \\rho_{NIR}+\\rho_{RED}+0.5}\\) \\\\ Soil adjusted vegetation index & \\(SAVI=\\frac{(1+L)(\\rho_{NIR}-\\rho_{RED})}{\\rho_{NIR}+\\rho_{RED}+L}\\), \\(L=0.5\\) \\\\ Normalized difference moisture index & \\(NDMI=\\frac{\\rho_{NIR}-\\rho_{SWIR1}}{\\rho_{NIR}+\\rho_{SWIR1}}\\) \\\\ \\hline \\hline \\end{tabular} \\(\\rho_{RED}\\) is the surface reflectance in RED, \\(\\rho_{NIR}\\) is the surface reflectance in NIR, \\(\\rho_{SWIR1}\\) is the surface reflectance in SWIR1 band, and \\(\\rho_{SWIR2}\\) is the surface reflectance in SWIR2 band.
\\end{table}
Table 3: The formulas of spectral indices that are sensitive to burned areas.
Figure 3: Relative importance of Landsat image features on burned areas classification evaluated by random forest algorithm.
#### 2.4.1 Model Training
The random forest (RF) algorithm provided by GEE was applied to train a decision forest classifier, and the global training data consisted of 6735 burned and 6146 unburned samples which were manually collected from the 120 Landsat scenes generated by stratified random sampling (in section 2.1 and section 2.2). A random forest classifier with a higher number of decision trees usually provides better results, but also incurs a higher computation time. Since the input features of the algorithm include the surface reflectance (SR) in the 6 bands of the Landsat-8 image as well as the 8 spectral indices that have high sensitivity to burned surfaces, we limited the number of decision trees in the forest to 100 as a trade-off between accuracy and efficiency. Additionally, we chose the "probability" mode for GEE's RF algorithm, in which the output is the probability that the classification is correct; this probability is further utilized to perform region growing in the burned area shaping step.
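A minimal training sketch with the Earth Engine Python API follows. The asset path is a hypothetical placeholder for the labeled sample points, and `smileRandomForest` is the current name of the RF classifier (the `randomForest` call available at the time of this work has since been renamed); the property names match the features described above.

```python
import ee
ee.Initialize()

# 14 input features: 6 SR bands plus the 8 spectral indices of Table 3.
feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7',
                 'NBR', 'NBR2', 'NDVI', 'NDMI', 'BAI', 'MIRBI', 'SAVI', 'GEMI']

# Hypothetical asset holding the 12881 labeled points; each feature carries
# a binary 'burned' property and the 14 feature properties.
samples = ee.FeatureCollection('users/example/gabam_training_points')

classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .setOutputMode('PROBABILITY')  # per-pixel burned probability
              .train(features=samples,
                     classProperty='burned',
                     inputProperties=feature_names))

# An image with matching band names then classifies to a probability map:
# prob = image_with_indices.select(feature_names).classify(classifier)
```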
Figure 4: Workflow for annual burned area mapping using Google Earth Engine.
#### 2.4.2 Per-pixel Processing
In this step, the Landsat surface reflectance collections from GEE, which consist of all the available Landsat scenes, were employed for dense time-series processing. At a given pixel, a single Landsat satellite could provide more than 20 observations within a year (or 40, considering the overlap between adjacent paths), and this number would double when contemporary satellites (e.g. Landsat-7 and Landsat-8) were utilized. However, considering the failure of the Scan Line Corrector (SLC) in the ETM+ instrument of the Landsat-7 satellite, we only utilized the USGS Landsat-8 Surface Reflectance collections ("LANDSAT/LC08/C01/T1_SR" and "LANDSAT/LC08/C01/T2_SR"). The quality assessment (QA) band of the Landsat image, which was generated by the Fmask algorithm (Zhu and Woodcock, 2014), was used to perform QA masking. Pixels flagged as clouds, cloud shadows, water, snow, ice, or filled/dropped pixels were excluded from the Landsat scenes, and only clear land pixels remained after QA masking. At each pixel, the geometrically aligned dense time-series Landsat image scenes provided a reflectance stack of 6 bands, which was then split into two stacks by date filters, i.e. a stack of the current year and that of the previous year.
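The masking and date splitting can be expressed compactly on GEE; the sketch below assumes the `pixel_qa` bit layout of the pre-Collection-2 SR product (bit 2 water, bit 3 cloud shadow, bit 4 snow, bit 5 cloud).

```python
import ee
ee.Initialize()

def mask_clear_land(img):
    """Keep only clear land pixels using the Fmask-derived 'pixel_qa' band."""
    qa = img.select('pixel_qa')
    bad = (qa.bitwiseAnd(1 << 2)         # water
             .Or(qa.bitwiseAnd(1 << 3))  # cloud shadow
             .Or(qa.bitwiseAnd(1 << 4))  # snow
             .Or(qa.bitwiseAnd(1 << 5)))  # cloud
    return img.updateMask(bad.Not())

lc8 = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
       .merge(ee.ImageCollection('LANDSAT/LC08/C01/T2_SR'))
       .map(mask_clear_land))

current = lc8.filterDate('2015-01-01', '2016-01-01')   # stack of current year
previous = lc8.filterDate('2014-01-01', '2015-01-01')  # stack of previous year
```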
For the reflectance stack of the current year, the 8 spectral indices were computed at each time period, and the decision forest classifier trained in section 2.4.1 then produced a stack of burned probability using the 8 spectral indices and the reflectance of the 6 bands. The maximum value of a probability stack indicates the probability that the pixel had ever appeared like a burned scar during the whole year. Four quantities were noted for each pixel, i.e. the date on which the maximum probability was observed (\(t_{1}\)), as well as the burned probability (\(p_{max}\)), NDVI value (\(NDVI_{1}\)) and NBR value (\(NBR_{1}\)) on that date. However, it is not usually possible to unambiguously separate in a single image the spectral signature of burned areas from those caused by unrelated phenomena and disturbances such as shadows, flooding, snow melt, or agricultural harvesting (Boschetti et al., 2015); the burned scars which occurred in previous years but had not yet recovered (particularly in boreal forests) should also be excluded from the annual BA map of the current year. In this sense, we also considered summary statistics of the current and previous year: \(NDVI_{2}\), the maximum NDVI value within the couple of years (current year and previous year); \(t_{2}\), the date of \(NDVI_{2}\); and \(NBR_{2}\), the minimum NBR value within the previous year. A tree-covered burned-like pixel was then excluded unless it met all of the following constraints (a GEE sketch of this per-pixel composition follows the list).
1. \\(NDVI_{2}>T_{NDVI}\\), the maximum NDVI value within the couple of years should be greater than a threshold \\(T_{NDVI}\\). We choose NDVI as it has been found to be a good identifier of vigorous vegetation, and this constraint is used to exclude areas that appear like burned but in fact were just lacking vegetation.
2. \\(NDVI_{2}-NDVI_{1}>T_{dNDVI}\\), the difference between the maximum NDVI and the NDVI when the pixel was most like burned scar should be greater than a threshold \\(T_{dNDVI}\\). This constraint ensures an evidence of vegetation decrease when burn happened.
3. \\(NBR_{2}-NBR_{1}>T_{dNBR}\\), the NBR value of a burned pixel should be less than the minimum NBR of the previous year, and the threshold \\(T_{dNBR}\\) is the minimum acceptable decline of NBR. This constraint is useful to exclude false detections with periodic variation of NBR and NDVI, such as mountain shadows, burned-like soil in deciduous season, snow melting and flooding.
4. \\(t_{1}>t_{2}\\) or \\(t_{2}-t_{1}>T_{DAY}\\), the most flourishing date of vegetation should be earlier than the burning date, or the lagged days should be less than a threshold \\(T_{DAY}\\). For tree-covered surface, it usually takes a long time for the vegetation to recover more flourishing than the previous year, thus the burn-like pixels with \\(t_{1}<=t_{2}\\) are likely attributed to a false alarm. However, as the recovering of burned trees can be fast in tropic regions, high post-fire regrowth within a reasonable days is also acceptable.
We named the first two constraints the "NDVI filter", and the third and fourth ones the "NBR filter" and "temporal filter", respectively. In this work, the thresholds in the above constraints were chosen empirically: \(T_{NDVI}=0.2\), \(T_{dNDVI}=0.2\), \(T_{DAY}=100\) (days) and \(T_{dNBR}=0.1\). Determining a globally optimal NDVI threshold is not easy, or even impossible, for the various types and conditions of vegetation, and we chose a low threshold \(T_{NDVI}=0.2\) (Sobrino and Raissouni, 2000), not expecting to directly exclude all confusing surfaces never covered by vegetation. Actually, the second constraint also helps to exclude non-vegetation with high NDVI, because the decline of NDVI, in the absence of vegetation variation, commonly wouldn't meet the constraint. The change of NBR between pre-fire and post-fire images, defined as delta NBR or dNBR, has proved to be a good indicator of burn severity and vegetation regrowth (the higher the severity, the greater the dNBR) (Miller and Thode, 2007; Lhermitte et al., 2011). It has been suggested that a dNBR greater than 0.1 commonly indicates a burn of low severity (Lutes et al., 2006), thus we chose \(T_{dNBR}=0.1\). Lastly, in the temporal filter, a fixed recovery cycle for all kinds of trees is also not available, and we approximately chose an average time of 100 days.
However, for herbaceous vegetation, only the first two constraints should be used, as grassland usually recovers very quickly and can be burned year after year. Accordingly, the annual MODIS Vegetation Continuous Fields (VCF) 250 m Collection 5.1 (MOD44B) products (DAAC, 2015) of the current and previous year, which contain the Tree-Cover Percent and Non-Tree Vegetation layers, were utilized to determine whether a pixel is dominated by trees or by herbaceous vegetation. Having passed the filters of NDVI, NBR and temporal context, those pixels with an annual burned probability greater than or equal to 0.95 ("probability filter") were selected as seeds for region growing.
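For clarity, the seed-selection rules can be restated per pixel as plain scalar code; this is only an illustrative restatement (the pipeline applies them as ee.Image band operations), using the empirical thresholds above.

```python
T_NDVI, T_dNDVI, T_dNBR, T_DAY = 0.2, 0.2, 0.1, 100

def is_burned_seed(p_max, ndvi1, ndvi2, nbr1, nbr2, t1, t2, tree_dominated):
    """t1/t2 are days of year; nbr2 is the previous year's minimum NBR."""
    # NDVI filter (constraints 1 and 2), applied to every pixel:
    if not (ndvi2 > T_NDVI and ndvi2 - ndvi1 > T_dNDVI):
        return False
    if tree_dominated:
        # NBR filter (constraint 3):
        if not (nbr2 - nbr1 > T_dNBR):
            return False
        # Temporal filter (constraint 4):
        if not (t1 > t2 or t2 - t1 < T_DAY):
            return False
    # Probability filter: seeds require an annual burned probability >= 0.95.
    return p_max >= 0.95

# A tree-covered pixel burned on day 180, vigorous on day 150 -> seed:
print(is_burned_seed(0.97, 0.15, 0.60, -0.20, 0.30, 180, 150, True))  # True
```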
#### 2.4.3 Burned Area Shaping
In this step, a region growing process was employed to shape the burned areas. Region growing has proved to be necessary for BA mapping in many studies (Bastarrika et al., 2011; Stroppiana et al., 2012; Hawbaker et al., 2017), because spectral based methods sometimes give ambiguous evidence (i.e. spectral overlapping between burned areas and unrelated phenomena with similar spectral characteristics, such as cloud shadows, ephemeral water or dark soils (Stroppiana et al., 2012)), and accepting all positive evidence can lead to confusion errors. Although candidate seeds were chosen with high confidence, false seed pixels were still frequently included on confusing surfaces, e.g. shadows and borders of lakes. Different from the candidate seeds in actual burned scars, those falsely introduced seed pixels were always sparsely distributed. Consequently, we aggregated the seed pixels into connected components using a kernel of 8-connected neighbors, and, by ignoring small fires with areas less than 1 ha (Laris, 2005), removed those fragmentary components (smaller than 11 pixels), which included most false seed pixels. Finally, an iterative procedure of region growing was performed around each seed pixel. In each iteration, the 8-connected neighbors of the seed pixels were aggregated as burned pixels (new seeds) if their burned probabilities were greater than or equal to 0.5, and the iteration stopped when no more pixels could be aggregated as burned pixels. Figure 5 shows an example of region growing. One can see that only some pixels showing strong magenta color in the burned scars were chosen as seeds, while those showing light magenta color were labeled as candidates for region growing, including some actual burned pixels as well as some false detections (right-middle in Figure 5b). However, after the processes of small-seed removal and region growing, the false detections were excluded while those candidates near the seeds were aggregated into the final BA map.
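The shaping step can also be reproduced offline on an exported probability raster. The following standalone NumPy/SciPy sketch restates the same seed-filtering and growing rules; it is not the GEE implementation itself.

```python
import numpy as np
from scipy import ndimage

def shape_burned_area(prob, seed_thresh=0.95, grow_thresh=0.5, min_seed=11):
    """Shape burned areas from a 2-D annual burned-probability array."""
    eight = np.ones((3, 3), dtype=bool)  # 8-connected neighbourhood

    # 1) Seed selection and removal of fragmentary components (< 11 pixels).
    seeds = prob >= seed_thresh
    labels, n = ndimage.label(seeds, structure=eight)
    sizes = ndimage.sum(seeds, labels, index=np.arange(1, n + 1))
    burned = np.isin(labels, 1 + np.flatnonzero(sizes >= min_seed))

    # 2) Iterative region growing: aggregate 8-neighbours whose burned
    #    probability is >= 0.5 until no new pixel can be added.
    candidates = prob >= grow_thresh
    while True:
        grown = ndimage.binary_dilation(burned, structure=eight) & candidates
        if np.array_equal(grown, burned):
            return burned
        burned = grown
```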
## 3 Results and analysis
### Product description
Employing the proposed approach, we produced the global annual burned area map of 2015 (GABAM 2015), which was projected in a Geographic (Lat/Long) projection at 0.00025\({}^{\circ}\) (approximately 30 meters) resolution, with the WGS84 horizontal datum and the EGM96 vertical datum. The result consists of 10x10 degree tiles spanning the range 180\({}^{\circ}\)W-180\({}^{\circ}\)E and 80\({}^{\circ}\)N-60\({}^{\circ}\)S and can be freely downloaded from [https://vapd.gitlab.io/post/gabam2015/](https://vapd.gitlab.io/post/gabam2015/). To better visualize GABAM, burned area density is shown instead of directly drawing the burned pixels on a global map; it is defined as the proportion of burned pixels in a \(0.25^{\circ}\times 0.25^{\circ}\) grid. An overview of the global distribution of burned area density, derived from the 1 arc-second resolution GABAM 2015, is shown in Figure 8a, together with that of the Fire_cci product in section 3.2.
Figure 6 illustrates an example of GABAM 2015 in Canada; the annually composited Landsat reference images with minimum NBR values of 2015 and 2014 are also included. This region is located in a high latitude zone, and the burned scars may not completely recover within a year. Consequently, when new burning occurs around the unrecovered burned scars, we must determine which burned scars come from the current year. Owing to the temporal filters, GABAM succeeded in clearing up such confusion. From Figure 6b, one can see that the burned scars mainly consist of two components, separated by the river. Figure 6a,
however, shows that the burned scars on the right side of the river can already be observed in 2014; hence the result of GABAM 2015 retains only the component on the left side.
### Comparison with Fire_cci product
#### 3.2.1 Data preparing
As 30m resolution global burned area products are currently not available, we made a comparison between GABAM 2015 and the Fire_cci version 5.0 products (spatial resolution approximately 250 meters) (Pettinari and Chuvieco, 2018), which are based on MODIS on board the Terra satellite. The monthly Fire_cci pixel BA products of 2015 were composited into an annual pixel BA product by labeling pixels as burned once their values in the Julian Day (the date of the first detection) layer were valid (from 1 to 366) in any of the 12 monthly products. Additionally, in order to perform regression analysis between two products of different spatial resolution, we also produced an annual grid composition of BA within 2015 from the composited annual pixel BA product, by computing the proportion of burned pixels in each \(0.25^{\circ}\times 0.25^{\circ}\) grid. Note that the monthly grid BA products of Fire_cci were not used to composite the annual grid product,
Figure 5: Example of region growing for burned area detection. (a) is the Landsat-8 image displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band), (b) is the map of burned probability generated by the proposed method, (c) shows the candidate seeds of burned area, (d) shows the final burned area map after region growing.
because summing up the areas of BA for each grid in all the monthly products might result in repetitive counting at those pixels burned more than once within the year.
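Aggregating a binary burned mask to a grid-level BA proportion is a plain block reduction; the sketch below assumes a mask trimmed to an integral number of grid cells (at 30 m, a \(0.25^{\circ}\) cell spans roughly 925 pixels at the equator; 900 is used here purely for divisibility).

```python
import numpy as np

def grid_ba_fraction(burned, cell=900):
    """Fraction of burned pixels per grid cell for a 2-D boolean mask."""
    h, w = burned.shape
    trimmed = burned[:h - h % cell, :w - w % cell]
    blocks = trimmed.reshape(h // cell, cell, w // cell, cell)
    return blocks.mean(axis=(1, 3))

mask = np.zeros((2700, 1800), dtype=bool)
mask[:450, :450] = True
print(grid_ba_fraction(mask))  # top-left cell is 25% burned
```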
#### 3.2.2 Visual comparison
Figure 7 shows an example of the two annual pixel BA products, and it can be seen that both products correctly detected the BAs in the Landsat image (Figure 7b), yet the BAs in Figure 7c occupy more pixels than those in Figure 7d. Due to the limitation in spatial resolution of the input sensor of the Fire_cci BA product, some of the mixed pixels (consisting of burned and unburned pixels) may be classified as burned ones. On the other hand, the result of GABAM 2015 shows finer boundaries of BAs, compared with that of the Fire_cci product.
#### 3.2.3 Global grid map
Figure 8 illustrates the GABAM and Fire_cci annual grid compositions of BA, consisting of the percentage of burned pixels in each \(0.25^{\circ}\times 0.25^{\circ}\) grid. Figure 8a and Figure 8b show similar global distributions of BA density.
#### 3.2.4 Regression analysis
Figure 9 shows the proportion of BA in \\(0.25^{\\circ}\\times 0.25^{\\circ}\\) grids of different land cover categories in Table 1, for Fire_cci product (x-axis) and GABAM 2015 (y-axis), and regression analysis was also performed between the two products, providing a regression line (expressed as the slope and the intercept coefficient estimates)
Figure 6: Burned area map example in Canada. 6a is the annually composited Landsat image of 2014 with the minimum NBR values; 6b is the annually composited Landsat image of 2015; 6c shows the detected burned scars that occurred in 2015.
and the coefficient of determination (\(R^{2}\)) for each land cover category (Figure 9a-9h) and for the global scale (Figure 9i). Moreover, as many points overlapped in the scatter graphs, we also rendered the scatters with different colors according to the number of grid cells (1 to 10 or more) having the same proportion values.
According to Figure 9, the intercept values of the estimated regression lines were close to 0 while the slopes were lower than 1, showing that GABAM 2015 generally underestimated (Chuvieco et al., 2011) burned scars compared with the Fire_cci product. Moreover, the distribution and color of the scatters in Figure 9i also show that
Figure 7: Comparison between Fire_cci and GABAM. 7a and 7b are the Landsat-8 images before (June 24th, 2015) and after (July 26th, 2015) the fire, respectively, displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band); 7c shows the burned areas of the annually composited Fire_cci product, and 7d shows the burned areas generated by the proposed method.
a large number of grids were considered to have a higher burned proportion by the Fire_cci product than by GABAM. The main reason for the inconsistency can be attributed to the difference in spatial resolution of the data sources: fewer pixels were commonly classified as BA in Landsat images (e.g. Figure 7). Specifically, only a few \(0.25^{\circ}\times 0.25^{\circ}\) grids were occupied by more than 90% BA in GABAM, while grids with a high proportion of BA were more common in the Fire_cci product.
Considering the coefficients of determination of the estimated regression lines, the two products showed the highest linear relationship strengths in coniferous forest (\(R^{2}=0.82\)), rangeland (\(R^{2}=0.75\)) and shrub
Figure 8: Global distribution of burned area density (percentage of burned pixels in every \\(0.25^{\\circ}\\times 0.25^{\\circ}\\) grid) of GABAM and Fire_cci product within 2015. 8a is the annual grid composition of BA of GABAM, and 8b is that of the Fire_cci product.
Figure 9: Scatter graphs and regression lines between GABAM and Fire_cci. (a)-(h) are the results in different land cover categories; (i) shows the global result over all kinds of land covers. The color scheme illustrates the number of grid cells having the same proportion values.
(\\(R^{2}=0.62\\)), and lowest strengths in agriculture land (\\(R^{2}=0.31\\)) and \"Others\" category (\\(R^{2}=0.07\\)). In \"Others\" category, which is considered to be not prone to fire, the two products only included a few grids containing BA (with low burned proportions), thus they were not likely to be correlated; the low correlation in agriculture land is owing to the uncertainty of both products, which will be further discussed in section 3.4.
The quantity and color of the scatters in Figure 9 indicate that most of the burned areas were located in rangeland, and the global relationship (Figure 9i) between the GABAM and Fire_cci products was mainly determined by that in rangeland (Figure 9e), i.e. woody savannas, savannas and grasslands.
### Validation
#### 3.3.1 Data sources
Accuracy assessment was carried out on the 80 validation sites created in section 2.1, and the reference data were selected in these sites from multiple data sources, including fire perimeter datasets and satellite images. Commonly, when satellite data are used as reference data, they should have a higher spatial resolution than the data used to generate the BA product (Boschetti et al., 2009). For a Landsat BA product, however, access to global higher-resolution time-series satellite data is difficult, and a thorough validation of Landsat science products can be completed with independent Landsat-derived reference data while strengthened by the use of complementary sources of high-resolution data (Vanderhoof et al., 2017). Consequently, in this study, some publicly available satellite images of higher resolution were included in the validation scheme, while Landsat images constituted the majority of the validation data sources. Specifically, Landsat-8 (LC8) images were employed to generate reference data independently for most of the validation sites, except those located in the United States (U.S.), South America and China. In the U.S., MTBS perimeters of 2015 were used as supplemental reference data to the LC8 images, and in South America and China, CBERS-4 MUX (CB4) and Gaofen-1 WFV (GF1) satellite images were used to create the perimeters of burned areas, respectively. The characteristics of CB4 and GF1 are illustrated in Table 4. Note that the size of each validation site varied by the type of data source, i.e. a WRS-II frame (about 185km \(\times\) 185km) for Landsat images, a scene (about 120km \(\times\) 120km) for CB4 images and a box of 100km \(\times\) 100km for GF1 images. Using Landsat frames or image scenes as the unit of a validation site is convenient for data downloading and processing; we chose a smaller box for GF1 to improve the data availability, considering that the extents of GF1 frames or scenes are not fixed due to the long orbital return period.
#### 3.3.2 Reference data generation
In each validation site, all the available image scenes (LC8\({}^{1}\), CB4\({}^{2}\) or GF1\({}^{3}\)) acquired in 2015 were used. LC8 images were ortho-rectified surface reflectance products, CB4 images were ortho products, and GF1 images were not geometrically rectified. The procedure of generating the reference BA can be summarized in the following steps.
Footnote 1: [https://earthexplorer.usgs.gov](https://earthexplorer.usgs.gov)
Footnote 2: [http://www.dgi.inpe.br/catalog/](http://www.dgi.inpe.br/catalog/)
Footnote 3: [http://218.247.138.119:7777/DSSPlatform/productSearch.html](http://218.247.138.119:7777/DSSPlatform/productSearch.html)
1. **Preprocessing** All the images utilized to generate the BA reference data were spatially aligned with a mean squared error of less than 1 pixel. The ortho-rectified LC8 and CB4 images met the requirement of geometric accuracy, yet the GF1 images did not. Accordingly, an automated method (Long et al., 2016) was applied to ortho-rectify the time-series GF1 images, taking the LC8 panchromatic images (15-meter spatial resolution) as geo-references.
2. **BA Detection** BA perimeters were generated from the time-series images via a semi-automatic approach. Firstly, image pairs (pre- and post-fire) were manually selected from the time series by checking whether any new burned scars appeared in the newer images. For LC8 images, the SWIR2, NIR and Green bands were composited in a Red, Green, Blue (RGB) combination; for CB4 and GF1 images, the Red, NIR and Green bands were composited in an RGB combination. The identification of BA might be difficult for CB4 and GF1 images due to the lack of shortwave infrared bands, thus the Fire_cci BA product was used to verify the BA identification. Secondly, burned and unburned samples were manually collected from each selected image pair. The burned samples included only the newly burned scars, which appeared burned in the newer image but unburned in the older image; the unburned samples consisted of unburned pixels, partially recovered BA pixels, and also pixels covered by cloud or cloud shadows in either image. Afterwards, the support vector machine (SVM) classifier in the ENVI software was used to classify each image pair into burned and unburned pixels, and the detected burned pixels in all the image pairs were integrated to create a composited annual BA map. Note that the sensitive
\\begin{table}
\\begin{tabular}{c c c c c c c} \\hline \\hline \\multirow{2}{*}{**Sensors**} & **Spatial resolution** & **Swath width** & \\multicolumn{3}{c}{**Spectral bands (\\(\\mu m\\))**} \\\\ & **at nadir (m)** & **at nadir (km)** & **blue** & **green** & **red** & **NIR** \\\\ \\hline CBERS-4 MUX & 20 & 120 & & 0.45-0.52 & 0.52-0.59 & 0.63-0.69 & 0.77-0.89 \\\\ Gaofen-1 WFV & 16 & 192 & & & & \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Characteristics of CBERS-4 MUX and Gaofen-1 WFV.
features in section 2.3 were utilized in the SVM for each LC8 image pair; for CB4 and GF1 images, however, the features used for classification consisted of the digital number (DN) values in the four bands of an image pair (8 DN values in total), as most of the burn-sensitive spectral indices cannot be derived from the RGB-NIR bands (a minimal scikit-learn analogue of this classification step is sketched below). Finally, the BA perimeters of 2015 were generated from the annual BA composition using the vectorization tool in the ArcGIS software.
3. **Reviewing and manual revision** The results of the supervised classifier (SVM) and the automated vectorization algorithm might not be perfect, thus the BA perimeters were further edited visually by experienced experts, by overlaying the vector layer of BA perimeters on the satellite image layers.
Additionally, in the U.S., the MTBS perimeters of 2015 were directly used as the main reference data, supplemented by the interpreted results of LC8 time-series images, which could help to avoid missing small fires.
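The pairwise classification in step 2 can be mimicked with scikit-learn (ENVI's SVM was used in practice, as noted above); everything below, including the random stand-in arrays and their shapes, is illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

# X_train: per-pixel features of the manually collected samples from one
# pre/post-fire image pair (burn-sensitive indices for LC8, or the 8 raw
# DN values for CB4/GF1); y_train: 1 = newly burned, 0 = unburned.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, size=200)

clf = SVC(kernel='rbf', gamma='scale').fit(X_train, y_train)

# Classify every pixel of the image pair (features flattened row-wise),
# then reshape the predictions back to the image grid.
X_pixels = rng.normal(size=(128 * 128, 8))
burned_map = clf.predict(X_pixels).reshape(128, 128).astype(bool)
```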
#### 3.3.3 Validation results
To assess the accuracy of GABAM 2015, a cross tabulation (Pontius and Millones, 2011) between the pixels assigned in our BA product and in the reference data was computed to produce the confusion matrix for each validation site. Afterwards, the global cross tabulation (Table 5) was generated by averaging all the cross tabulations.
Finally, three statistics, i.e. commission error, omission error and overall accuracy, can be derived from the confusion matrix (as computed in the sketch below):
* **Commission Error \\((E_{c})\\):**\\(X_{12}/(X_{11}+X_{12})\\), the ratio between the false BA positives (detected burned areas that were not in fact burned) and the total area classified as burned by GABAM 2015.
* **Omission Error \\((E_{o})\\):**\\(X_{21}/(X_{11}+X_{21})\\), the ratio between the false BA negatives (actual burned areas not detected) and the total area classified as burned by the reference data.
* **Overall Accuracy \((A_{o})\):**\((X_{11}+X_{22})/(X_{11}+X_{12}+X_{21}+X_{22})\), the ratio between the area classified correctly and the total area to evaluate.
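These statistics follow directly from the cross tabulation; the short sketch below evaluates them on the pixel counts of Table 5.

```python
def ba_accuracy(x11, x12, x21, x22):
    """Commission error, omission error and overall accuracy from the
    2x2 cross tabulation (pixel counts)."""
    ec = x12 / (x11 + x12)                      # false positives / detected BA
    eo = x21 / (x11 + x21)                      # false negatives / reference BA
    ao = (x11 + x22) / (x11 + x12 + x21 + x22)  # correctly classified / total
    return ec, eo, ao

# Counts from Table 5:
print(ba_accuracy(5473720, 823170, 2360096, 43661559))
# -> approximately (0.131, 0.301, 0.939)
```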
\\begin{table}
\\begin{tabular}{l l l l l} \\hline \\multicolumn{2}{c}{} & \\multicolumn{4}{c}{**Reference data (pixel)**} \\\\ & & **Burned** & **Unburned** & **Total** \\\\ \\hline \\hline \\multirow{3}{*}{**GABAM 2015 (pixel)**} & **Burned** & 5473720 \\((X_{11})\\) & 823170 \\((X_{12})\\) & 6296890 \\\\ & **Unburned** & 2360096 \\((X_{21})\\) & 43661559 \\((X_{22})\\) & 46021655 \\\\ & **Total** & 7833816 & 44484729 & 52318545 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Cross tabulation between GABAM 2015 and the reference data.
According to Table 5, \(E_{c}\) and \(E_{o}\) of GABAM 2015 were 13.17% and 30.13%, respectively, while \(A_{o}\) was 93.92%. Generally, GABAM 2015 was expected to have a lower \(E_{c}\) but a higher \(E_{o}\). The high omission error might result from several reasons:
1. In the validation sites located in tropical zones, clear burned evidence was frequently missed by the Landsat sensor due to the quick recovery of the vegetation surface. This point will be further discussed in section 3.4.
2. Some pixels located within a burned area, but not showing a strong burned appearance, might be excluded by GABAM 2015 (e.g. Figure 7d), while they were considered part of a complete burned scar in the reference data. In particular, high \(E_{o}\) was found in those validation sites using MTBS perimeters; e.g. \(E_{c}\) and \(E_{o}\) of the validation site in Figure A.16 were 1.45% and 67.97%.
Table 6 shows the average accuracy of GABAM 2015 in various land cover categories, and more details of the validation can be found in Appendix A, which includes 5 examples of validation sites from various regions, with different data sources as reference data.
### Discussion
Different from satellite images of coarse spatial resolution, the temporal resolution of Landsat images is not high enough to capture short-term events on the earth. Specifically, the general revisit period of Landsat images is more than 10 days, hence an active fire will be observed by a Landsat satellite with a probability of less than 10% (considering the cloud coverage). In addition, the gaps between Landsat images of adjacent time phases and the occurrence of cloud also increase the uncertainty in analyzing the time-series patterns of the land surface. Without using the evidence of active fire, it is not easy to identify burned scars at the global scale with high confidence, due to the wide variety of vegetation types, phenological characters and burned-like land covers, and the varying spectral characteristics within a burned scar (char, scorched leaves or grass, or
\\begin{table}
\\begin{tabular}{l r r r} \\hline \\hline
**Land cover type** & \\(E_{c}\\) **(\\%)** & \\(E_{o}\\) **(\\%)** & \\(A_{o}\\) **(\\%)** \\\\ \\hline Broadleaved Evergreen & 8.64 & 10.95 & 90.99 \\\\ Broadleaved Deciduous & 23.59 & 34.85 & 99.03 \\\\ Coniferous & 7.41 & 18.27 & 99.77 \\\\ Mixed Forest & 8.73 & 34.33 & 98.36 \\\\ Shrub & 13.00 & 3.78 & 99.49 \\\\ Rangeland & 11.91 & 23.06 & 91.79 \\\\ Agriculture & 10.91 & 45.38 & 94.41 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Accuracy of GABAM 2015 in different land cover categories.
even green leaves when the fire is not very severe (Bastarrika et al., 2011)). In this work, the MODIS Vegetation Continuous Fields (VCF) product was applied to discriminate tree-dominated and grass-dominated regions, but the VCF product is neither precise in spatial resolution nor available before the year 2000, and, moreover, two categories are far from enough to separate different burning types. Actually, much prior knowledge could be utilized to improve the accuracy of GABAM if the globe were carefully divided into finer regions according to fire behaviour, land cover types and climate. For instance, most biomass burning in the tropics is limited to a burning season, around 10% of the savanna biome burns every year, burning cropland after a harvest is extremely prevalent, and so on. Consequently, region-specific algorithms should be helpful to improve the accuracy of high-resolution global annual burned area mapping. Furthermore, despite the high correlation between GABAM and Fire_cci, the area of detected BA was generally smaller in GABAM than in Fire_cci, since some pixels located within a burned area, but not showing a strong burned appearance, were not included in GABAM. This situation can be considered an underestimation of BA, or an omission error, if only the connectivity and completeness of burned patches are taken into account; on the other hand, the detailed perimeters of BA from GABAM can be useful for computing statistics on the area of biomes actually burned, and therefore for improving the simulation of carbon emissions from biomass burning. In its present form, however, GABAM suffers from limitations in the following aspects.
#### 3.4.1 BA in Agriculture land
It is difficult to detect BA in cropland with high confidence (low commission error and low omission error) from satellite images:
* Many croplands have spectral characteristics comparable to burned areas when harvested or ploughed.
* The temporal behaviour of harvest or burning of cropland is similar to that of grassland fire, e.g. sudden decline and gradual recovery of NDVI, and periodic variation of NBR values year after year.
* Different from the wildfires in rangeland and forest, most of the fires in croplands are human-intended stubble burning; they are commonly small and of short duration, making them difficult to capture by satellite sensors. In this sense, traditional burned area detection algorithms, which are frequently used to generate BA products from medium-resolution data sources (e.g. MODIS, AVHRR, MERIS), are likely to have high omission errors in croplands for small cropland fires.
Figure 10 shows an example of cropland in Mykolayiv, Ukraine, including the Landsat-8 time series (Figure 10a-10j) and the burned scars mapped by Fire_cci (Figure 10k) and GABAM (Figure 10l). Small fire spots, showing a light orange color, can be visually observed in Figure 10a, 10b and 10i, but the burned scars surrounding these fire spots were not included in the Fire_cci product. On the other hand, without fire evidence or field validation, it is also difficult to tell whether the burned-like surfaces detected by GABAM were false alarms.
Due to these difficulties, discriminating truly burned areas from croplands is not a trivial task, and cropland masks can be employed to remove potential confusions.
#### 3.4.2 Omission of observations
Using Landsat images as input data for GABAM, the number of valid observations is a limiting factor for detecting fires, since the active- or post-fire evidence may be omitted or weakened due to the temporal gaps caused by the temporal resolution as well as by cloud contamination. Especially in tropical regions, where vegetation recovery is quite quick after fire, temporal gaps usually result in high omission errors. Figure 11 shows an example of omission error in South America. From the CBERS-4 images (Figure 11p-11s), a new burned scar, which occurred during August 21 to October 12, can be identified at the center of the image patch. However, all the Landsat-8 images acquired within the date interval from September 21 to November 24 (Figure 11) were contaminated by cloud, thus the region covering this burned scar in these images was masked by the QA band during the BA detection process.
#### 3.4.3 Validation
For satellite data product validation, a commonly used method is to employ satellite data of higher spatial resolution. For example, in order to validate a MODIS-derived data product (1 km spatial resolution), Landsat satellite data are commonly used. In this study, however, Landsat images were used as the main reference source to validate a Landsat-derived burned area product. Although the validation process was conducted by independent experienced experts with great caution, relying on Landsat for both product generation and validation limits our ability to assess inaccuracies imposed by the satellite sensor itself, such as radiometric calibration accuracy, spectral band settings, geolocation and mixed pixels (Strahler et al., 2006). Accordingly, extensive validation of GABAM is expected to be further performed by professional users.
## 4 Conclusions
An automated pipeline for generating a 30m resolution global-scale annual burned area map utilizing Google Earth Engine was proposed in this study. Different from the previous coarse-resolution global burned area products, GABAM 2015, a novel 30-m resolution global annual burned area map of the year 2015, was derived from all available Landsat-8 images; its commission error and omission error are 13.17% and 30.13%, respectively, according to the global validation. A comparison with the Fire_cci product showed a similar spatial distribution and a strong correlation between the burned areas from the two products, particularly in coniferous forests. The automated pipeline makes it possible to efficiently generate GABAM from the huge catalog of Landsat images, and our future effort will be concentrated on producing long time-series 30m resolution GABAM.

Figure 10: Burned area map example of croplands in Mykolayiv, Ukraine. 10a-10j show the Landsat-8 images displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band); 10k and 10l show the BA from the Fire_cci product and GABAM 2015, respectively.
Figure 11: Example of omission error of GABAM 2015. 11a–11o are Landsat-8 image patches displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band), 11p–11s are CBERS-4 image patches displayed in false color composition (red: NIR band, green: RED band and blue: GREEN band), and 11t shows the detected BA.
## Acknowledgments
This research has been supported by The National Key Research and Development Program of China (2016YFA0600302 and 2016YFB0501502), and National Natural Science Foundation of China (61401461 and 61701495).
## Appendix A Examples of validation sites
Figures A.12–A.16 show some examples of site validation, and Table 7 summarizes the information of these validation sites, including the location, source of reference data, commission error, omission error and overall accuracy.
Figure A.12: Example of validation using GF-1 images. A.12a–A.12g show the GF-1 images used to generate reference map, displayed in false color composition (red: NIR band, green: RED band and blue: GREEN band), A.12h is reference BA map generated from GF-1 images, and A.12i is detected BA by proposed method.
Figure A.13: Example of validation using CBERS-4 images. A.13a–A.13g show the CBERS-4 images used to generate reference map, displayed in false color composition (red: NIR band, green: RED band and blue: GREEN band), A.13h is reference BA map generated from CBERS-4 images, and A.13i is detected BA by proposed method.
Figure A.14: Example of validation using Landsat-8 images (path/row:193/054) in Africa. A.14a–A.14j show the Landsat-8 images used to generate reference map, displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band), A.14k is reference BA map generated from Landsat-8 images, and A.14l is detected BA by proposed method.
Figure A.15: Example of validation using Landsat-8 images (path/row:104/074) in Australia. A.15a–A.15j show the Landsat-8 images used to generate the reference map, displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band), A.15k is the reference BA map generated from Landsat-8 images, and A.15l is the BA detected by the proposed method.
## References
* Alonso-Canas and Chuvieco (2015) Alonso-Canas, I., Chuvieco, E., 2015. Global burned area mapping from ENVISAT-MERIS and MODIS active fire data. Remote Sensing of Environment 163, 140-152. URL: [https://doi.org/10.1016%2Fj.rse.2015.03.011](https://doi.org/10.1016%2Fj.rse.2015.03.011), doi:10.1016/j.rse.2015.03.011.
* Bastarrika et al. (2014) Bastarrika, A., Alvarado, M., Artano, K., Martinez, M., Mesanza, A., Torre, L., Ramo, R., Chuvieco, E., 2014. BAMS: A tool for supervised burned area mapping using landsat data. Remote Sensing 6, 12360-12380. URL: [https://doi.org/10.3390%2Frs61212360](https://doi.org/10.3390%2Frs61212360), doi:10.3390/rs61212360.
* Bastarrika et al. (2011) Bastarrika, A., Chuvieco, E., Martin, M.P., 2011. Mapping burned areas from landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors. Remote Sensing of Environment 115, 1003-1012. URL: [https://doi.org/10.1016%2Fj.rse.2010.12.005](https://doi.org/10.1016%2Fj.rse.2010.12.005), doi:10.1016/j.rse.2010.12.005.
Figure A.16: Comparison between MTBS and detected BA. A.16a and A.16b are the Landsat-8 images (path/row:044/026) displayed in false color composition (red: SWIR2 band, green: NIR band and blue: GREEN band), A.16c is the MTBS perimeters of 2015, A.16d shows reference BA perimeters generated from Landsat-8 images and MTBS perimeters of 2015, and A.16e shows burned areas generated by proposed method.
Boschetti, L., Roy, D., Justice, C., 2009. International global burned area satellite product validation protocol (part i-production and standardization of validation reference data), in: CEOS-CalVal, (Ed.). USA: Committee on Earth Observation Satellites, pp. 1-11.
* Boschetti et al. (2015) Boschetti, L., Roy, D.P., Justice, C.O., Humber, M.L., 2015. MODIS-landsat fusion for large area 30m burned area mapping. Remote Sensing of Environment 161, 27-42. URL: [https://doi.org/10.1016%2Fj.rse.2015.01.022](https://doi.org/10.1016%2Fj.rse.2015.01.022), doi:10.1016/j.rse.2015.01.022.
* Boschetti et al. (2016) Boschetti, L., Stehman, S.V., Roy, D.P., 2016. A stratified random sampling design in space and time for regional to global scale burned area product validation. Remote Sensing of Environment 186, 465-478. URL: [https://doi.org/10.1016%2Fj.rse.2016.09.016](https://doi.org/10.1016%2Fj.rse.2016.09.016), doi:10.1016/j.rse.2016.09.016.
* Boschetti et al. (2010) Boschetti, M., Stroppiana, D., Brivio, P.A., 2010. Mapping burned areas in a mediterranean environment using soft integration of spectral indices from high-resolution satellite images. Earth Interactions 14, 1-20. URL: [https://doi.org/10.1175%2F2010e1349.1](https://doi.org/10.1175%2F2010e1349.1), doi:10.1175/2010e1349.1.
* Carmona-Moreno et al. (2005) Carmona-Moreno, C., Belward, A., Malingreau, J.P., Hartley, A., Garcia-Alegre, M., Antonovskiy, M., Buchshtaber, V., Pivovarov, V., 2005. Characterizing interannual variations in global fire calendar using data from earth observing satellites. Global Change Biology 11, 1537-1555. URL: [https://doi.org/10.1111%2Fj.1365-2486.2005.01003.x](https://doi.org/10.1111%2Fj.1365-2486.2005.01003.x), doi:10.1111/j.1365-2486.2005.01003.x.
* Chuvieco et al. (2011) Chuvieco, E., Padilla, M., Hanson, S., Theis, R., Snadow, C., 2011. Esa cci eccv fire disturbance-product validation plan (v3. 1). ESA Fire-CCI project ([http://www.esa-fire-cci](http://www.esa-fire-cci). org/).
* Chuvieco et al. (2016) Chuvieco, E., Yue, C., Heil, A., Mouillot, F., Alonso-Canas, I., Padilla, M., Pereira, J.M., Oom, D., Tansey, K., 2016. A new global burned area product for climate assessment of fire impacts. Global Ecology and Biogeography 25, 619-629. URL: [https://doi.org/10.1111%2Fgb.12440](https://doi.org/10.1111%2Fgb.12440), doi:10.1111/geb.12440.
* DAAC (2015) DAAC, N.L., 2015. Modis vegetation continuous fields (vcf) product. version 5.1. [https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table/mod44b](https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table/mod44b). doi:10.4225/13/511C71F8612C3. NASA EOSDIS Land Processes DAAC, USGS Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota ([https://lpdaac.usgs.gov](https://lpdaac.usgs.gov)).
* Eidenshink et al. (2007) Eidenshink, J., Schwind, B., Brewer, K., Zhu, Z.L., Quayle, B., Howard, S., 2007. A project for monitoring trends in burn severity. Fire Ecology 3, 3-21. URL: [https://doi.org/10.4996%2Ffireecology.0301003](https://doi.org/10.4996%2Ffireecology.0301003), doi:10.4996/fireecology.0301003.
* Friedl and Sulla-Menashe (2015) Friedl, M., Sulla-Menashe, D., 2015. Mcd12c1 modis/terra+aqua land cover type yearly l3 global 0.05deg cmg. [https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table/mcd12c1](https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table/mcd12c1). doi:10.5067/MODIS/MCD12c1.006. NASA EOSDIS Land Processes DAAC ([https://lpdaac.usgs.gov](https://lpdaac.usgs.gov)).
* Giglio et al. (2013) Giglio, L., Randerson, J.T., van der Werf, G.R., 2013. Analysis of daily, monthly, and annual burned area using the fourth-generation global fire emissions database (GFED4). Journal of Geophysical Research: Biogeosciences 118, 317-328. URL: [https://doi.org/10.1002%2Fjgrg.20042](https://doi.org/10.1002%2Fjgrg.20042), doi:10.1002/jgrg.20042.
* Giglio et al. (2010) Giglio, L., Randerson, J.T., van der Werf, G.R., Kasibhatla, P.S., Collatz, G.J., Morton, D.C., DeFries, R.S., 2010. Assessing variability and long-term trends in burned area by merging multiple satellite fire products. Biogeosciences 7, 1171-1186. URL: [https://doi.org/10.5194%2Fbg-7-1171-2010](https://doi.org/10.5194%2Fbg-7-1171-2010), doi:10.5194/bg-7-1171-2010.
* Giglio et al. (2016) Giglio, L., Schroeder, W., Justice, C.O., 2016. The collection 6 MODIS active fire detection algorithm and fire products. Remote Sensing of Environment 178, 31-41. URL: [https://doi.org/10.1016%2Fj.rse.2016.02.054](https://doi.org/10.1016%2Fj.rse.2016.02.054), doi:10.1016/j.rse.2016.02.054.
* Goodwin and Collett (2014) Goodwin, N.R., Collett, L.J., 2014. Development of an automated method for mapping fire history captured in landsat TM and ETM+ time series across queensland, australia. Remote Sensing of Environment 148, 206-221. URL: [https://doi.org/10.1016%2Fj.rse.2014.03.021](https://doi.org/10.1016%2Fj.rse.2014.03.021), doi:10.1016/j.rse.2014.03.021.
* Gorelick et al. (2017) Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., Moore, R., 2017. Google earth engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 202, 18-27. URL: [https://doi.org/10.1016%2Fj.rse.2017.06.031](https://doi.org/10.1016%2Fj.rse.2017.06.031), doi:10.1016/j.rse.2017.06.031.
* Hawbaker et al. (2017a) Hawbaker, T.J., Vanderhoof, M.K., Beal, Y.J., Takacs, J.D., Schmidt, G.L., Falgout, J.T., Williams, B., Fairaux, N.M., Caldwell, M.K., Picotte, J.J., Howard, S.M., Stitt, S., Dwyer, J.L., 2017a. The Landsat Burned Area Essential Climate Variable (BAECV) products for the conterminous United States (1984-2015). U.S. Geological Survey data release. URL: [https://doi.org/10.5066/F73B5X76](https://doi.org/10.5066/F73B5X76), doi:10.5066/F73B5X76.
* Hawbaker et al. (2017b) Hawbaker, T.J., Vanderhoof, M.K., Beal, Y.J., Takacs, J.D., Schmidt, G.L., Falgout, J.T., Williams, B., Fairaux, N.M., Caldwell, M.K., Picotte, J.J., Howard, S.M., Stitt, S., Dwyer, J.L., 2017b. Mapping burned areas using dense time-series of landsat data. Remote Sensing of Environment 198, 504-522. URL: [https://doi.org/10.1016%2Fj.rse.2017.06.027](https://doi.org/10.1016%2Fj.rse.2017.06.027), doi:10.1016/j.rse.2017.06.027.
* Huete (1988) Huete, A., 1988. A soil-adjusted vegetation index (SAVI). Remote Sensing of Environment 25, 295-309. URL: [https://doi.org/10.1016%2F0034-4257%2888%2990106-x](https://doi.org/10.1016%2F0034-4257%2888%2990106-x), doi:10.1016/0034-4257(88)90106-x.
* Key and Benson (1999) Key, C.H., Benson, N.C., 1999. The normalized burn ratio (nbr): A landsat tm radiometric measure of burn severity. United States Geological Survey, Northern Rocky Mountain Science Center.(Bozeman, MT).
* Koutsias and Karteris (2000) Koutsias, N., Karteris, M., 2000. Burned area mapping using logistic regression modeling of a single post-fire landsat-5 thematic mapper image. International Journal of Remote Sensing 21, 673-687. URL: [https://doi.org/10.1080%2F014311600210506](https://doi.org/10.1080%2F014311600210506), doi:10.1080/014311600210506.
* Laris (2005) Laris, P.S., 2005. Spatiotemporal problems with detecting and mapping mosaic fire regimes with coarse-resolution satellite data in savanna environments. Remote Sensing of Environment 99, 412-424. URL: [https://doi.org/10.1016%2Fj.rse.2005.09.012](https://doi.org/10.1016%2Fj.rse.2005.09.012), doi:10.1016/j.rse.2005.09.012.
* Lhermitte et al. (2011) Lhermitte, S., Verbesselt, J., Verstraeten, W., Veraverbeke, S., Coppin, P., 2011. Assessing intra-annual vegetation regrowth after fire using the pixel based regeneration index. ISPRS Journal of Photogrammetry and Remote Sensing 66, 17-27. URL: [https://doi.org/10.1016%2Fj.isprs.2010.08.004](https://doi.org/10.1016%2Fj.isprs.2010.08.004), doi:10.1016/j.isprsjprs.2010.08.004.
* Liu et al. (2018) Liu, J., Heiskanen, J., Maeda, E.E., Pellikka, P.K., 2018. Burned area detection based on landsat time series in savannas of southern burkina Faso. International Journal of Applied Earth Observation and Geoinformation 64, 210-220. URL: [https://doi.org/10.1016%2Fj.jag.2017.09.011](https://doi.org/10.1016%2Fj.jag.2017.09.011), doi:10.1016/j.jag.2017.09.011.
* Long et al. (2016) Long, T., Jiao, W., He, G., Zhang, Z., 2016. A fast and reliable matching method for automated georeferencing of remotely-sensed imagery. Remote Sensing 8, 56. URL: [https://doi.org/10.3390%2Frs8010056](https://doi.org/10.3390%2Frs8010056), doi:10.3390/rs8010056.
* Lutes et al. (2006) Lutes, D.C., Keane, R.E., Caratti, J.F., Key, C.H., Benson, N.C., Sutherland, S., Gangi, L.J., et al., 2006. Firemon: Fire effects monitoring and inventory system. Gen. Tech. Rep. RMRS-GTR-164-CD. Fort Collins, CO: US Department of Agriculture, Forest Service, Rocky Mountain Research Station 1.
* Martin (1998) Martin, M., 1998. Cartografia e inventario de incendios forestales en la peninsula iberica a partir de imagenes noaa-avhrr. Departmento de Geografia. Alcala de Henares, Universidad de Alcala.
* Miller and Thode (2007) Miller, J.D., Thode, A.E., 2007. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta normalized burn ratio (dNBR). Remote Sensing of Environment 109, 66-80. URL: [https://doi.org/10.1016%2Fj.rse.2006.12.006](https://doi.org/10.1016%2Fj.rse.2006.12.006), doi:10.1016/j.rse.2006.12.006.
* Padilla et al. (2017) Padilla, M., Olofsson, P., Stehman, S.V., Tansey, K., Chuvieco, E., 2017. Stratification and sample allocation for reference burned area data. Remote Sensing of Environment 203, 240-255. URL: [https://doi.org/10.1016%2Fj.rse.2017.06.041](https://doi.org/10.1016%2Fj.rse.2017.06.041), doi:10.1016/j.rse.2017.06.041.
* Padilla et al. (2014) Padilla, M., Stehman, S.V., Chuvieco, E., 2014. Validation of the 2008 MODIS-MCD45 global burned area product using stratified random sampling. Remote Sensing of Environment 144, 187-196. URL: [https://doi.org/10.1016%2Fj.rse.2014.01.008](https://doi.org/10.1016%2Fj.rse.2014.01.008), doi:10.1016/j.rse.2014.01.008.
* Padilla et al. (2015) Padilla, M., Stehman, S.V., Ramo, R., Corti, D., Hanson, S., Oliva, P., Alonso-Canas, I., Bradley, A.V., Tansey, K., Mota, B., Pereira, J.M., Chuvieco, E., 2015. Comparing the accuracies of remote sensing global burned area products using stratified random sampling and estimation. Remote Sensing of Environment 160, 114-121. URL: [https://doi.org/10.1016%2Fj.rse.2015.01.005](https://doi.org/10.1016%2Fj.rse.2015.01.005), doi:10.1016/j.rse.2015.01.005.
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E., 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825-2830.
* Pereira (1999) Pereira, J., 1999. A comparative evaluation of NOAA/AVHRR vegetation indexes for burned surface detection and mapping. IEEE Transactions on Geoscience and Remote Sensing 37, 217-226. URL: [https://doi.org/10.1109%2F36.739156](https://doi.org/10.1109%2F36.739156), doi:10.1109/36.739156.
* modis, version 1.0. ESA Fire-CCI project ([http://www.esa-firecci.org/documents](http://www.esa-firecci.org/documents)).
* Pinty and Verstraete (1992) Pinty, B., Verstraete, M.M., 1992. GEMI: a non-linear index to monitor global vegetation from satellites. Vegetatio 101, 15-20. URL: [https://doi.org/10.1007%2Fbf00031911](https://doi.org/10.1007%2Fbf00031911), doi:10.1007/bf00031911.
* Pleniou and Koutsias (2013) Pleniou, M., Koutsias, N., 2013. Sensitivity of spectral reflectance values to different burn and vegetation ratios: A multi-scale approach applied in a fire affected area. ISPRS Journal of Photogrammetry and Remote Sensing 79, 199-210. URL: [https://doi.org/10.1016%2Fj.isprs.2013.02.016](https://doi.org/10.1016%2Fj.isprs.2013.02.016), doi:10.1016/j.isprsjprs.2013.02.016.
* Plummer et al. (2006) Plummer, S., Arino, O., Simon, M., Steffen, W., 2006. Establishing a earth observation product service for the terrestrial carbon community: The globcarbon initiative. Mitigation and Adaptation Strategies for Global Change 11, 97-111. URL: [https://doi.org/10.1007%2Fsf11027-006-1012-8](https://doi.org/10.1007%2Fsf11027-006-1012-8), doi:10.1007/s11027-006-1012-8.
* Pontius and Millones (2011) Pontius, R.G., Millones, M., 2011. Death to kappa: birth of quantity disagreement and allocation disagreement for accuracy assessment. International Journal of Remote Sensing 32, 4407-4429. URL: [https://doi.org/10.1080%2F01431161.2011.552923](https://doi.org/10.1080%2F01431161.2011.552923), doi:10.1080/01431161.2011.552923.
* Roy et al. (2005) Roy, D., Jin, Y., Lewis, P., Justice, C., 2005. Prototyping a global algorithm for systematic fire-affected area mapping using MODIS time series data. Remote Sensing of Environment 97, 137-162. URL: [https://doi.org/10.1016%2Fj.rse.2005.04.007](https://doi.org/10.1016%2Fj.rse.2005.04.007), doi:10.1016/j.rse.2005.04.007.
* Trigg and Flasse (2001) Trigg, S., Flasse, S., 2001. An evaluation of different bi-spectral spaces for discriminating burned shrub-savannah. International Journal of Remote Sensing 22, 2641-2647. URL: [https://doi.org/10.1080%2F01431160119380](https://doi.org/10.1080%2F01431160119380), doi:10.1080/01431160119380.
* Simon (2004) Simon, M., 2004. Burnt area detection at global scale using ATSR-2: The GLOBSCAR products and their qualification. Journal of Geophysical Research 109. URL: [https://doi.org/10.1029%2F2003jd003622](https://doi.org/10.1029%2F2003jd003622), doi:10.1029/2003jd003622.
* Sobrino and Raissouni (2000) Sobrino, J.A., Raissouni, N., 2000. Toward remote sensing methods for land cover dynamic monitoring: Application to morocco. International Journal of Remote Sensing 21, 353-366. URL: [https://doi.org/10.1080%2F014311600210876](https://doi.org/10.1080%2F014311600210876), doi:10.1080/014311600210876.
* Strahler et al. (2006) Strahler, A.H., Boschetti, L., Foody, G.M., Friedl, M.A., Hansen, M.C., Herold, M., Mayaux, P., Morisette, J.T., Stehman, S.V., Woodcock, C.E., 2006. Global land cover validation: Recommendations for evaluation and accuracy assessment of global land cover maps. European Communities, Luxembourg 51.
* Stroppiana et al. (2012) Stroppiana, D., Bordogna, G., Carrara, P., Boschetti, M., Boschetti, L., Brivio, P., 2012. A method for extracting burned areas from landsat TM/ETM+ images by soft aggregation of multiple spectral indices and a region growing algorithm. ISPRS Journal of Photogrammetry and Remote Sensing 69, 88-102. URL: [https://doi.org/10.1016%2Fj.isprsjprs.2012.03.001](https://doi.org/10.1016%2Fj.isprsjprs.2012.03.001), doi:10.1016/j.isprsjprs.2012.03.001.
* Stroppiana et al. (2009) Stroppiana, D., Boschetti, M., Zaffaroni, P., Brivio, P., 2009. Analysis and interpretation of spectral indices for soft multicriteria burned-area mapping in mediterranean regions. IEEE Geoscience and Remote Sensing Letters 6, 499-503. URL: [https://doi.org/10.1109%2Flgrs.2009.2020067](https://doi.org/10.1109%2Flgrs.2009.2020067), doi:10.1109/lgrs.2009.2020067.
* Tansey (2004) Tansey, K., 2004. Vegetation burning in the year 2000: Global burned area estimates from SPOT VEGETATION data. Journal of Geophysical Research 109. URL: [https://doi.org/10.1029%2F2003jd003598](https://doi.org/10.1029%2F2003jd003598), doi:10.1029/2003jd003598.
* Tansey et al. (2008) Tansey, K., Gregoire, J.M., Defourny, P., Leigh, R., Pekel, J.F., van Bogaert, E., Bartholome, E., 2008. A new, global, multi-annual (2000-2007) burnt area product at 1 km resolution. Geophysical Research Letters 35. URL: [https://doi.org/10.1029%2F2007gl031567](https://doi.org/10.1029%2F2007gl031567), doi:10.1029/2007gl031567.
* Vanderhoof et al. (2017) Vanderhoof, M.K., Fairaux, N., Beal, Y.J.G., Hawbaker, T.J., 2017. Validation of the USGS landsat burned area essential climate variable (BAECV) across the conterminous united states. Remote Sensing of Environment 198, 393-406. URL: [https://doi.org/10.1016%2Fj.rse.2017.06.025](https://doi.org/10.1016%2Fj.rse.2017.06.025), doi:10.1016/j.rse.2017.06.025.
* Veraverbeke et al. (2012) Veraverbeke, S., Gitas, I., Katagis, T., Polychronaki, A., Somers, B., Goossens, R., 2012. Assessing post-fire vegetation recovery using rednear infrared vegetation indices: Accounting for background and vegetation variability. ISPRS Journal of Photogrammetry and Remote Sensing 68, 191. URL: [https://doi.org/10.1016%2Fj.isprsjprs.2012.03.003](https://doi.org/10.1016%2Fj.isprsjprs.2012.03.003), doi:10.1016/j.isprsjprs.2012.03.003.
* Vermote et al. (2016) Vermote, E., Justice, C., Claverie, M., Franch, B., 2016. Preliminary analysis of the performance of the landsat 8/OLI land surface reflectance product. Remote Sensing of Environment 185, 46-56. URL: [https://doi.org/10.1016%2Fj.rse.2016.04.008](https://doi.org/10.1016%2Fj.rse.2016.04.008), doi:10.1016/j.rse.2016.04.008.
* Wilson and Sader (2002) Wilson, E.H., Sader, S.A., 2002. Detection of forest harvest type using multiple dates of landsat TM imagery. Remote Sensing of Environment 80, 385-396. URL: [https://doi.org/10.1016%2Fs0034-4257%2801%2900318-2](https://doi.org/10.1016%2Fs0034-4257%2801%2900318-2), doi:10.1016/s0034-4257(01)00318-2.
* Zhu and Woodcock (2014) Zhu, Z., Woodcock, C.E., 2014. Automated cloud, cloud shadow, and snow detection in multitemporal landsat data: An algorithm designed specifically for monitoring land cover change. Remote Sensing of Environment 152, 217-234. URL: [https://doi.org/10.1016%2Fj.rse.2014.06.012](https://doi.org/10.1016%2Fj.rse.2014.06.012), doi:10.1016/j.rse.2014.06.012.

Heretofore, global burned area (BA) products have only been available at coarse spatial resolution, since most current global BA products are produced with the help of active fire detection or dense time-series change analysis, which requires very high temporal resolution. In this study, however, we focus on an automated global burned area mapping approach based on Landsat images. By utilizing the huge catalog of satellite imagery as well as the high-performance computing capacity of Google Earth Engine, we propose an automated pipeline for generating a 30-meter resolution global-scale annual burned area map from time series of Landsat images, and a novel 30-meter resolution global annual burned area map of 2015 (GABAM 2015) is released. GABAM 2015 consists of the spatial extent of fires that occurred during 2015 and not of fires that occurred in previous years. Cross-comparison with the recent Fire_cci version 5.0 BA product found a similar spatial distribution and a strong correlation (\\(R^{2}=0.74\\)) between the burned areas from the two products, although differences were found in specific land cover categories (particularly in agricultural land). Preliminary global validation showed that the commission and omission errors of GABAM 2015 are 13.17% and 30.13%, respectively.
keywords: global burned area, Landsat, Google Earth Engine, time-series, temporal filtering
Footnote †: journal: ISPRS Journal of Photogrammetry and Remote Sensing
# Retrieval of Coloured Dissolved Organic Matter with Machine Learning Methods
## 1 Introduction
In the temperate and cold regions of the boreal zone humic waters are abundant, especially in lakes. These waters typically have fairly low total suspended matter (TSM) and chlorophyll-a (Chl-a) concentrations, even though some cases of "black lakes" with high Chl-a and TSM values have been reported [1]. For instance, in Finland the humic matter concentration of lakes correlates with the share of peat land in the drainage area [2]. Humic lakes can also originate from peat dredging, e.g. in the Netherlands. CDOM absorption has an exponential shape and consequently decreases the water-leaving reflectance at short wavelengths.
Information on humic substances is utilized in the application of official directives, lake management and climate change studies. Within the Water Framework Directive (WFD), in Sweden and Finland, water colour values \\(>30\\,mg\\) Pt \\(l^{-1}\\) (corresponding approximately to \\(a_{CDOM}(400)=3.6\\,m^{-1}\\)) are defined as humic. Waters with water colour \\(>90\\,mg\\) Pt \\(l^{-1}\\) (\\(a_{CDOM}(400)=11\\,m^{-1}\\)) are classified as humus-rich lakes [3]. An accurate retrieval of the \\(a_{CDOM}\\) parameter from remote sensing is crucial in these types of water. However, CDOM is known to be the most critical and uncertain ocean colour (OC) product.
The impact of high CDOM on reflectance is demonstrated in Figure 1. In the present case, Lake Garda represents lake water with very low CDOM concentrations, while in Lake Pääjärvi the CDOM concentration is high. The level of Chl-a is approximately the same in both lakes, and total suspended matter (TSM) is slightly lower in Lake Garda. \\(R_{rs}\\) in the blue and green region of the spectrum is clearly higher in Lake Garda than in Lake Pääjärvi. \\(R_{rs}\\) at \\(<600nm\\) is significantly smaller in the CDOM-rich lake. At \\(>650nm\\), where the influence of \\(a_{CDOM}\\) is small, \\(R_{rs}\\) in Lake Garda is slightly lower due to the lower TSM concentration, and higher in Lake Pääjärvi due to the higher scattering of organic and inorganic particles.
Several band ratios have been proposed as predictive models for estimating CDOM from spectral data. These parametric approaches only take into account a few spectral bands and thus disregard the information contained in the other bands. Non-parametric regression algorithms can alternatively exploit the information contained in all spectral bands. In this paper, we assess the performance of several machine learning (ML) algorithms for CDOM estimation over typical boreal waters: multivariate regression, random forests, kernel ridge regression and Gaussian processes. We use a Hydrolight simulation dataset presented in the framework of the Case2eXtreme (C2X) project1 [5]. The basis for the tests is simulated hyperspectral \\(R_{rs}\\) data that include extremely absorbing waters. We work with samples of medium to high CDOM concentrations at 440nm (\\(1-20m^{-1}\\)) and medium to low Chl-a concentrations (\\(<30mg/m^{3}\\)).
The remainder of the paper is organized as follows. §2 describes the methods used in this work. §3 gives empirical evidence of the performance of the proposed methods in comparison to standard bio-optical models for the particular dataset used. We conclude in §4 with some remarks and an outline of future work.
## 2 Methods and Application
### Established approaches with band ratio algorithms
Reports on band ratio algorithms are mostly based on airborne and field measurements with negligible or only small atmospheric influence. Important wavelength regions for band ratio algorithms for CDOM retrieval are those between 400-600 nm, taking 660-720 nm as reference. Following [6][7], CDOM can be estimated by a ratio of reflectance at wavelengths \\(>600nm\\) to reflectance in the 400-550 nm range. This ratio is valid over a wide range of water constituent combinations. The authors of [4][6] used in situ data measured in Finland and derived algorithms that work well for \\(a_{CDOM}\\) and TSM. The latter is estimated using a single band at 709 nm and not a ratio.
The calibration of the CDOM algorithm with two band ratios is the main objective of Kallio's work [6]. He ran Hydrolight simulations based on concentration data from monitoring stations in Finland (5553 in total) and used them to calculate the coefficients. Wavebands at 490, 665 and 709 nm (based on MERIS channels) were compared to the in situ CDOM measured at 440 nm. The optimal regression had a polynomial form, with coefficients varying depending on the particular ratio \\(x\\) used:
\\[y=p_{1}x^{2}+p_{2}x+p_{3}. \\tag{1}\\]
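As an illustration, a minimal Python sketch of this calibration step is given below; it fits Eq. (1) by ordinary least squares with numpy. The array names and band indices are hypothetical placeholders, not part of Kallio's original calibration.

```python
import numpy as np

def fit_ratio_model(rrs_num, rrs_den, a_cdom):
    """Fit Eq. (1): y = p1*x^2 + p2*x + p3, where x is a band ratio."""
    x = rrs_num / rrs_den                 # e.g. Rrs(665) / Rrs(490)
    return np.polyfit(x, a_cdom, deg=2)   # -> coefficients [p1, p2, p3]

def predict_ratio_model(p, rrs_num, rrs_den):
    return np.polyval(p, rrs_num / rrs_den)

# Hypothetical usage with simulated spectra (i665/i490 are band indices):
# p = fit_ratio_model(rrs[:, i665], rrs[:, i490], a_cdom_440)
# y_hat = predict_ratio_model(p, rrs_test[:, i665], rrs_test[:, i490])
```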
The dataset used in this exercise comes from the simulated database of the C2X project and it is based on simulated remote sensing reflectance (\\(R_{rs}\\)), which is the ratio of water-leaving radiance to downwelling irradiance above the sea surface. Here, \\(R_{rs}\\) refers to clear atmosphere with Sun at zenith and viewing angle exactly perpendicular. Highly absorbing waters are characterized by very low water-leaving radiance (black waters). The maximum of \\(R_{rs}\\) is typically \\(<0.005sr^{-1}\\) and located between 550 and 605 nm for Case-2 absorbing (C2A) cases. For extremely Case-2 absorbing waters (C2AX), the maximum shifts towards the red spectral range \\(>600nm\\). The complete C2AX dataset used in the present work is illustrated in Figure 2.
This complete dataset was filtered before use here: samples with Chl-a values \\(>30mg/m^{3}\\) were removed, and only nadir viewing angles were kept. High Chl-a values caused the ratio to increase at low CDOM values and to decrease at high CDOM values; those simulations were removed from the dataset since such conditions are not frequent in nature.
Figure 1: Simulated \\(R_{rs}\\) in a typical humic lake in Finland (Pääjärvi) and in Lake Garda, Italy. \\(R_{rs}\\) was simulated with the Hydrolight model [4]
### Machine learning approaches
Four machine learning algorithms for linear and non-linear regression are tested and compared to the polynomial regression explained above: (multivariate) linear regression (RLR), random forest (RF) [8], kernel ridge regression (KRR) [9], and Gaussian process regression (GPR) [10].
In multivariate (or multiple) linear regression (LR) the output \\(y\\) (CDOM) is assumed to be a weighted sum of \\(B\\) input variables, \\(\\mathbf{x}:=\\left[x_{1},\\ldots,x_{B}\\right]^{\\top}\\), that is \\(\\hat{y}=\\mathbf{x}^{\\top}\\mathbf{w}\\). Maximizing the likelihood is equivalent to minimizing the sum of squared errors, and hence one can estimate the weights \\(\\mathbf{w}=[w_{1},\\ldots,w_{B}]^{\\top}\\) by least squares minimization. Very often one imposes smoothness constraints on the model and also minimizes the weights' energy, \\(\\|\\mathbf{w}\\|^{2}\\), thus leading to the regularized linear regression (RLR) method.
An alternative method is that of random forests (RFs) [8]. A RF model is an ensemble learning method for regression that operates by constructing a multitude of decision trees at training time and outputting the mean prediction of the individual trees. The training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners.
Kernel methods constitute a family of successful methods for regression [11]. We incorporate two instantiations: (1) KRR, considered the (non-linear) version of RLR [9]; and (2) GPR, a probabilistic approximation to non-parametric kernel-based regression, where both a predictive mean and a predictive variance can be derived [12]. Kernel methods offer the same explicit form of the predictive model, which establishes a relation between the input (e.g., spectral data) \\(\\mathbf{x}\\in\\mathbb{R}^{B}\\) and the output variable (CDOM), denoted as \\(y\\in\\mathbb{R}\\). The prediction for a new radiance vector \\(\\mathbf{x}_{*}\\) can be obtained as:
\\[\\hat{y}=f(\\mathbf{x})=\\sum_{i=1}^{N}\\alpha_{i}K_{\\theta}(\\mathbf{x}_{i}, \\mathbf{x}_{*})+\\alpha_{o}, \\tag{2}\\]
where \\(\\{\\mathbf{x}_{i}\\}_{i=1}^{N}\\) are the spectra used in the training phase, \\(\\alpha_{i}\\) is the weight assigned to each one of them, \\(\\alpha_{o}\\) is the bias in the regression function, and \\(K_{\\theta}\\) is a kernel or covariance function (parametrized by a set of hyperparameters \\(\\boldsymbol{\\theta}\\)) that evaluates the similarity between the test spectrum \\(\\mathbf{x}_{*}\\) and all \\(N\\) training spectra. We used the automatic relevance determination (ARD) kernel function,
\\[K(\\mathbf{x},\\mathbf{x}^{\\prime})=\\nu\\exp\\bigg{(}-\\sum_{b=1}^{B}(x_{b}-x_{b}^{\\prime})^{2}/(2\\sigma_{b}^{2})\\bigg{)}+\\sigma_{n}^{2}\\delta_{ij},\\]
and learned the hyperparameters \\(\\boldsymbol{\\theta}=[\\nu,\\sigma_{1},\\ldots,\\sigma_{B},\\sigma_{n}]\\) by marginal likelihood maximization. An operational MATLAB toolbox is available at [http://isp.uv.es/soft_regression.html](http://isp.uv.es/soft_regression.html).

Figure 2: The dataset includes the whole range of Chl-a (\\(0-200mg/m^{3}\\)), with low TSM (\\(1-10g/m^{3}\\)) and high CDOM (\\(1-20m^{-1}\\)). When removing the Chl-a \\(>30mg/m^{3}\\), the fluorescence peak of the dataset at \\(\\approx 700nm\\) decreases considerably.
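For readers working in Python rather than MATLAB, the following sketch shows one way to instantiate the four regressors with scikit-learn; it is not the toolbox above, and all hyperparameter values shown are assumptions. The per-band length-scales of the RBF kernel reproduce the ARD behaviour of the covariance defined earlier.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

def build_models(n_bands):
    # nu * exp(-sum_b (x_b - x'_b)^2 / (2 sigma_b^2)) + sigma_n^2:
    # one RBF length-scale per band gives the ARD behaviour.
    ard_kernel = (ConstantKernel(1.0)
                  * RBF(length_scale=np.ones(n_bands))
                  + WhiteKernel(noise_level=1e-2))
    return {
        "RLR": Ridge(alpha=1.0),
        "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
        "KRR": KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1),
        "GPR": GaussianProcessRegressor(kernel=ard_kernel, normalize_y=True),
    }

# models = build_models(X_train.shape[1])
# for name, model in models.items():
#     model.fit(X_train, y_train)
#     y_pred = model.predict(X_test)
```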
### An illustrative exercise
In this exercise comparisons are made using the ratios as inputs, as well as the full spectral range found in the simulation dataset. This spectral range consisted of six bands corresponding to some of the MERIS 2 sensor wavelengths: 442.5, 490, 510, 560, 665 and 708.75 nm, from blue to near-infrared (NIR). This is the common set found in many of the ocean colour sensors used for water quality retrievals. Several trials using different inputs are tested: ratio 1 and ratio 2 are considered separately in the first two tests, both ratios are used together in the third test, and all wavelengths are the multi-input variables in the fourth test. The statistics used to check the validity of the methods are: mean error (ME), root mean squared error (RMSE), normalized mean squared error (nMSE), mean absolute error (MAE) and the coefficient of correlation (R).
Footnote 2: [https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/envisat/instruments/meris](https://earth.esa.int/web/guest/missions/esa-operational-eo-missions/envisat/instruments/meris)
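A sketch of these scores in Python follows. ME, RMSE, MAE and R use their standard definitions; since the exact normalisation of nMSE is not specified in the text, the log-scale version below is an assumption.

```python
import numpy as np

def scores(y_true, y_pred):
    """Compute the five validity statistics for a set of predictions."""
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    return {
        "ME":   float(np.mean(err)),
        "RMSE": float(np.sqrt(mse)),
        "nMSE": float(np.log10(mse / np.var(y_true))),   # assumed definition
        "MAE":  float(np.mean(np.abs(err))),
        "R":    float(np.corrcoef(y_true, y_pred)[0, 1]),
    }
```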
## 3 Experimental Results
### Numerical comparison
Table 1 offers an overview of the metrics for all four methods tested with the different input variables. When using only one input (\\(x_{1}=R_{rs}(665)/R_{rs}(490)\\), \\(x_{2}=R_{rs}(709)/R_{rs}(490)\\)), the ML methods tested do not improve the results strikingly. The polynomial regression metrics are very similar to the others, and especially close to the KRR method with ratio 1 (\\(x_{1}\\)) or the RF with ratio 2 (\\(x_{2}\\)). Even if in this second case the reduction in ME and RMSE is evident, computation time then becomes the determining factor, and the polynomial regression requires less time while still being quite efficient.
However, when using more than one variable, the non-linear ML methods gain relevance, as can be seen in the "Both ratios" and "All bands" sections of Table 1. RF seems to be the best model when using the two ratios together, while GPR works best when using all six reflectance bands. In this last case, the "winner" is less clear-cut, since a simple RLR model already improves the results, although the errors of RLR are roughly double those of the other three ML methods.
Table 1: Results obtained with empirical fitting and several machine learning methods. The scores shown are: mean error (ME), root-mean-square error (RMSE), normalized mean squared error (nMSE), mean absolute error (MAE) and Pearson's correlation coefficient (R).

| Input / Method | ME | RMSE | nMSE | MAE | R |
| --- | --- | --- | --- | --- | --- |
| **Ratio 1:** \\(x_{1}=665/490\\) | | | | | |
| Polyfit | 0.758 | 3.832 | 0.319 | 2.940 | 0.570 |
| RLR | 0.624 | 4.073 | -0.042 | 3.174 | 0.438 |
| RF | 0.687 | 3.753 | -0.078 | 2.747 | 0.592 |
| KRR | 0.714 | 3.707 | -0.083 | 2.767 | 0.586 |
| GPR | 0.692 | 3.755 | -0.078 | 2.903 | 0.585 |
| **Ratio 2:** \\(x_{2}=709/490\\) | | | | | |
| Polyfit | 0.732 | 3.710 | 0.263 | 2.846 | 0.603 |
| RLR | 0.634 | 3.969 | -0.054 | 3.071 | 0.484 |
| RF | 0.387 | 3.320 | -0.131 | 2.336 | 0.687 |
| KRR | 0.648 | 3.411 | -0.120 | 2.378 | 0.665 |
| GPR | 0.808 | 3.604 | -0.096 | 2.589 | 0.638 |
| **Both ratios:** \\(\\mathbf{x}=[x_{1},x_{2}]\\) | | | | | |
| RLR | 0.703 | 3.900 | -0.061 | 2.956 | 0.520 |
| RF | 0.363 | 2.296 | -0.291 | 1.497 | 0.867 |
| KRR | 0.588 | 2.842 | -0.199 | 1.843 | 0.802 |
| GPR | 0.487 | 2.676 | -0.225 | 1.664 | 0.814 |
| **All bands:** \\(\\mathbf{x}\\in\\mathbb{R}^{B}\\) | | | | | |
| RLR | 0.259 | 1.843 | -0.387 | 1.189 | 0.913 |
| RF | 0.137 | 0.735 | -0.786 | 0.444 | 0.987 |
| KRR | 0.045 | 0.812 | -0.743 | 0.453 | 0.984 |
| GPR | 0.014 | 0.646 | -0.842 | 0.358 | 0.990 |
### Analysis of the models
Figure 3 shows the partial dependence between the function output (CDOM) and the target features (wavelengths) for the RLR model (red line) and the GPR model (black line). What can be inferred is that bands in the blue part of the spectrum (442.5, 490 nm) have a stronger influence on the CDOM derivation, while the green and red bands seem to be more neutral. The NIR band (709 nm) has a slight but determinant influence.
A second analysis of the models considered a permutation analysis, by which the impact of each input feature on the prediction error is evaluated in the context, or absence, of the other predictors. Figure 4 shows the influence of the wavelengths within the different models, which confirms the previous claim for all methods, with some variation in the KRR, where the weight of the green and red bands is higher.
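This kind of permutation analysis can be sketched with scikit-learn's `permutation_importance`: each band is shuffled in turn and the resulting increase in RMSE measures its influence. The `model`, `X_test` and `y_test` objects are assumed to be the fitted regressor and held-out data from the previous section.

```python
from sklearn.inspection import permutation_importance

result = permutation_importance(
    model, X_test, y_test,
    scoring="neg_root_mean_squared_error",  # importance = RMSE increase
    n_repeats=30, random_state=0)

for band, (mean, std) in enumerate(zip(result.importances_mean,
                                       result.importances_std)):
    print(f"band {band}: dRMSE = {mean:.3f} +/- {std:.3f}")
```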
### Validation of the models
One example of the application of the models to the test data and their validation is shown in Figure 5. Using all reflectance bands available, the plots show the summary of statistics of the residuals for the four methods (top) and the scatterplot of the linear regression between the observed and the predicted values for the method with the best RMSE result, in this case the GPR, which is also the best model in terms of ME and R (cf. Table 1).
Figure 5: Application of model on test data
Figure 3: Partial plots for the RLR (red) and the GPR (black) models.
A similar exercise has been carried out for the retrieval of Chl-a (results not shown here). The dataset used was C2A, which covers a larger range of Chl-a values (\\(1-200mg/m^{3}\\)) and still low TSM values (\\(0.2-10g/m^{3}\\)). The summary statistics showed that the best performing method is the KRR, followed closely by the RLR, which considerably improves on the simpler ratio method (709/665 nm), e.g. raising R from 0.789 to 0.902, with RMSE between 21.918 and 31.224 \\(mg/m^{3}\\).
## 4 Conclusions
Four ML methods were tested using as training set simulated C2AX data, that is, extremely absorbing waters with high CDOM concentration, Chl-a contents \\(<30mg/m^{3}\\) and low suspended matter. The traditional empirical algorithms are derived using band ratios and their correlation with measured or simulated \\(a_{CDOM}\\). The polynomial algorithms derived from this empirical relationship are compared with more sophisticated ML methods using several variables as input: similar two-band ratios or the six MERIS-like wavebands available. Results show that multivariate linear regression methods are already very efficient when using more than two bands or ratios, that is, more information coming from different parts of the spectrum. GPR gives very good results and is the best method in terms of error and correlation, but the computation time needed increases considerably.
## References
* [1] Tiit Kutser, Birgit Paavel, Charles Verpoorter, Martin Ligi, Tuuli Soomets, Kaire Toming, and Gema Casal, "Remote sensing of black lakes and using 810 nm reflectance peak for retrieving water quality parameters of optically complex waters," _Remote Sensing_, vol. 8, no. 6, pp. 497, 2016.
* [2] Pirkko Kortelainen, \"Content of total organic carbon in finnish lakes and its relationship to catchment characteristics,\" _Canadian Journal of Fisheries and Aquatic Sciences_, vol. 50, no. 7, pp. 1477-1483, 1993.
* [3] S. Koponen, K. Kallio, and Attila J., \"D5.5 boreal lakes case study results,\" Tech. Rep., GLaSS project, H2020, EU, 2015.
* [4] K. Alikas, S. Lautt, and A. Reinart, "D3.4 adapted water quality algorithms," Tech. Rep., GLaSS project, H2020, EU, 2014.
* [5] M. Hieronymi, H. Krasemann, D. Mueller, C. Brockmann, A. Ruescas, K. Stelzer, B. Nechad, K. Ruddick, S. Simis, G. Tilstone, F. Steinmetz, and P. Regner, "Ocean colour remote sensing of extreme case-2 waters," Living Planet Symposium, 2016.
Figure 4: RMSE permutation analysis of the models
* [6] Kari Kallio, \"Optical properties of finnish lakes estimated with simple bio-optical models and water quality monitoring data,\" _Hydrology Research_, vol. 37, no. 2, pp. 183-204, 2006.
* 215, 2015, Special Issue: Remote Sensing of Inland Waters.
* [8] L. Breiman and J. H. Friedman, "Estimating optimal transformations for multiple regression and correlation," _Journal of the American Statistical Association_, vol. 80, no. 391, pp. 580-598, 1985.
* [9] John Shawe-Taylor and Nello Cristianini, _Kernel Methods for Pattern Analysis_, Cambridge University Press, June 2004.
* [10] C. E. Rasmussen and C. K. I. Williams, _Gaussian Processes for Machine Learning_, The MIT Press, New York, 2006.
* [11] G. Camps-Valls and L. Bruzzone, Eds., _Kernel methods for Remote Sensing Data Analysis_, Wiley & Sons, UK, Dec. 2009.
* [12] G. Camps-Valls, J. Verrelst, J. Munoz Mari, V. Laparra, F. Mateo-Jimenez, and J. Gomez-Dans, "A survey on gaussian processes for earth observation data analysis," _IEEE Geoscience and Remote Sensing Magazine_, no. 6, June 2016.

The coloured dissolved organic matter (CDOM) concentration is the standard measure of humic substance in natural waters. CDOM measurements by remote sensing are calculated using the absorption coefficient (_a_) at a certain wavelength (e.g. \\(\\approx 440nm\\)). This paper presents a comparison of four machine learning methods for the retrieval of CDOM from remote sensing signals: regularized linear regression (RLR), random forest (RF), kernel ridge regression (KRR) and Gaussian process regression (GPR). Results are compared with the established polynomial regression algorithms. RLR is revealed as the simplest and most efficient method, followed closely by its nonlinear counterpart KRR.
Ana B. Ruescas\\({}^{1}\\), Martin Hieronymi\\({}^{2}\\), Sampsa Koponen\\({}^{3}\\), Kari Kallio\\({}^{3}\\) and Gustau Camps-Valls\\({}^{1}\\)†
\\({}^{1}\\)Image Processing Laboratory (IPL), Universitat de Valencia, Spain
\\({}^{2}\\)Institute of Coastal Research, Helmholtz-Zentrum Geesthacht, Germany
\\({}^{3}\\)Finnish Environment Institute SYKE, Finland

Keywords: Remote Sensing, CDOM concentrations, Absorbing Waters, Linear Regression, Machine Learning Methods
Footnote †: The research was funded by the European Research Council (ERC) under the ERC-CoG-2014 SEDAL project (grant agreement 647423), and the Spanish Ministry of Economy and Competitiveness (MINECO) through the project TIN2015-64210-R. Especial thanks to Carsten Brockmann and the Case2eXtreme project team (funded by ESA).
# Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning
Haokui Zhang\\({}^{1,2^{\\dagger}}\\), Yu Liu\\({}^{2^{\\dagger}}\\), Bei Fang\\({}^{1}\\), Ying Li\\({}^{1}\\), Lingqiao Liu\\({}^{2}\\) and Ian Reid\\({}^{2}\\)
\\({}^{1}\\)H. Zhang, B. Fang and Y. Li are with the School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, 710072, China. [email protected]
\\({}^{2}\\)Y. Liu, L. Liu and I. Reid are with the School of Computer Science, The University of Adelaide, North Terrace, SA 5005, Australia. [email protected]
## 1 Introduction
For a long time, extracting useful information from HSI data itself has been a very challenging task. At first, researchers mainly focused on extracting spectral signatures while totally ignoring the spatial contexts. Later on, two categories of methods emerged for extracting the spectral-spatial information from HSI data. The first category extracts the spectral signatures and the spatial contexts separately [14, 15]. The second category fuses the spectral signatures and the spatial contexts first, and then extracts the concatenated information [11]. Between the two, the second category has proven better at improving the performance of HSI classification.
Among traditional methods, handcrafted features are usually used, and they are expected to be discriminative and representative for capturing the characteristics of HSI datasets. In most cases, however, the extracted features are oriented by domain knowledge and thus may lose some valuable details. For feature classification, the support vector machine (SVM) [12] is often employed because it is robust to high-dimensional vectors, but its capacity for representation is still limited.
Since 2012, with the renaissance of deep learning, the performance of many vision tasks has been dramatically improved. The improvements can mainly be attributed to two aspects: deep neural network structures and huge training datasets. One of the most significant advantages of deep learning is the ability of CNNs to extract features at different granularities. The potential of CNNs has also been exploited in HSI classification, where they have proven to work better than traditional methods. Several network structures have been proposed to bring CNNs into HSI classification, including residual networks in 2D and 3D [20], the deep belief network (DBN) [3], and the stacked autoencoder (SAE) [3], _et al._ All of these networks outperform the traditional methods to some extent, which clearly indicates that deep learning is also the right direction for HSI classification.
Generally speaking, two constraints limit the direct usage of state-of-the-art deep CNNs, already employed in vision tasks, in HSI classification. The first one is the different data formats of RGB images and HSI. In particular, compared with an RGB image, which can be well represented by a 2D CNN, it is much more sensible to utilize a 3D CNN to preserve the abundant information contained in the spectral signatures and the spatial contexts of HSI. Moreover, all of the existing networks in HSI classification are quite shallow, mostly less than five layers deep, which is negligible compared with the one-thousand-layer ResNet [1] used in other vision tasks. The reason behind this is the very limited size of HSI datasets; both capture and annotation are cost-expensive and labor-intensive. Therefore, when deep CNNs are used in HSI classification, over-fitting is likely to occur.
In this paper, we propose AINet and use two strategies to improve HSI classification. Our contributions can be summarized as follows:
1. Firstly, a novel deep, light-weight 3D CNN with an asymmetric structure is presented to handle HSI classification, which makes it possible to train a very deep neural network with the existing small-volume HSI datasets and to fully exert the potential of CNNs.
2. Secondly, data fusion transfer learning is exploited to achieve a better model initialization. This compensates for the data limitation and noticeably boosts the training efficiency and classification performance.
## 2 Related Works
### HSI classification
Classifying each pixel into its corresponding category is a vital problem in hyperspectral image analysis, and the applications of HSI classification cover object recognition [17], mineral exploitation [13] and other relevant research fields.
**Conventional HSI Classification**
Generally speaking, conventional pixel-wise HSI classification mainly focuses on two aspects: feature extraction and feature classification. Since spectral signatures are usually composed of at least hundreds of bands and carry rich information about HSI, it is important to acquire features that are discriminative and representative of the HSI dataset. Regarding feature extraction, there are two kinds of approaches. The first kind uses linear algorithms to select representative spectral bands, including manifold ranking [20], multitask joint sparse representation [21], _et al._ Here, the features are usually spectral bands extracted from the raw HSI dataset, so their physical meaning is retained. The second kind employs non-linear algorithms to extract discriminative features, including principal component analysis (PCA), independent component analysis (ICA), _et al._ The step after feature extraction is feature classification, for which the support vector machine (SVM) is usually utilized because it is robust to high-dimensional vectors [17]. However, the limited capacity of SVM to model the distribution of spectral bands becomes a bottleneck for HSI classification.
Later on, with the development of remote sensing (RS) technologies, some works proposed to integrate the spatial context into HSI classification. Considering the order in which spatial contexts are extracted, there are two branches: post-processing and spatial extraction. As the name suggests, the post-processing approaches first extract the spectral and spatial information sequentially, then use the nearby spectral pixels as smoothness priors for the target pixel, and employ a graphical model to conduct the classification [16]. In contrast, the spatial extraction methods extract a 3D cube spanning both the spectral and spatial dimensions, and SVM is used to perform the classification. Compared with the post-processing approaches, the spatial extraction approaches achieve better performance, which may be because post-processing approaches often lose important information during feature extraction. In addition, both share two drawbacks: the engineered features preserve only part of the information and cannot fully represent the HSI dataset, and the traditional methods have limited capacity to fit the abundant information hidden in HSI data.
**DL Based HSI Classification**
Usually, a CNN model is composed of at least three convolutional layers for extracting both low-level and high-level features: the bottom layers extract textures and edge details, while the top layers extract abstract shape information. Traditional methods, in contrast, can only extract limited low-level features. The second advantage of CNN-based methods over traditional methods for HSI classification is that, instead of separating feature extraction and feature classification into two steps, the CNN structure integrates them into one framework trained through back-propagation; since the extracted features directly contribute to the final classification performance, deep learning methods achieve better performance than the traditional kernel-based methods.
From the network structure perspective, there are several representative works for HSI classification. The stacked autoencoder (SAE) stacks the extracted spectral and spatial features using layer-wise pretrained models [10]. The deep belief network (DBN) has also been explored for HSI classification [10]. However, both methods require compressing the spatial contexts into 1D vectors, which inevitably loses important information about HSI. 2D-CNN networks are directly transferred from those used in vision tasks to HSI classification, and some researchers [13, 16] propose to use two parallel 2D-CNNs to extract the spectral signatures and the spatial contexts; both the feature extraction for the spatial contexts and the separation of spectral and spatial channels cause information loss. Moreover, 3D-CNNs [10, 13] have been proposed to extract the spectral signatures and spatial contexts simultaneously from the HSI dataset and to conduct classification on the 3D cube, but 3D-CNN models suffer from over-fitting when the network becomes deeper, mainly due to the very limited training datasets of HSI.
### 3D CNN Architectures
The first 3D CNN network for HSI classification was proposed by Chen _et al._ in 2016 [10], using \\(L_{2}\\)-norm regularization and dropout. However, the network is very shallow, and it still encounters the over-fitting problem when the available annotated datasets are scarce. A similar work is the spectral-spatial residual network (SSRN) [17], which brings the widely used residual blocks from other vision tasks into HSI classification and combines them with batch normalization to achieve better performance. The disadvantage of SSRN is that it does not explicitly consider the contribution difference between spectral signatures and spatial contexts. Our proposed asymmetric residual network outperforms SSRN by a clear margin, which benefits not only from the much deeper and light-weight network design but also from the proposed AI unit, which is tailored to HSI data.
### Transfer Learning
Compared with the millions of annotated samples used in some vision tasks, the existing annotated HSI datasets are quite scarce. Moreover, the severe imbalance among HSI datasets, both within classes and across different sensors, makes it challenging to train neural networks for HSI classification. In the vision community, one common solution to this problem is transfer learning. The working principle behind transfer learning is that, in a deep neural network, the bottom-level and middle-level features take up the majority of the parameters stored in the CNN model and usually capture the textures and edges of the objects. These low-level features learned for a simple task like detection can be reused by more complex tasks such as segmentation and tracking. The most significant benefit of transfer learning is a better model initialization, which is very important when training a model with limited samples.
Our proposed network makes use of a data fusion transfer learning strategy. Specifically, the designed model is pre-trained on HSI datasets captured by different sensors with 3D pyramid pooling, and then fine-tuned on the target datasets to achieve better performance.
## 3 Methodology
Among the deep learning models employed in the HSI literature, 3D-CNNs are more suitable than 2D-CNNs for HSI classification due to the 3D format of HSI data. In fact, different objects in HSI generally have different spectral structures, so convolving along the spectral dimension is very important. In addition, some objects have similar spectral structures; for these objects, convolving along the spatial dimension to capture features is also beneficial.
For 2D-CNN based methods, on the one hand, without spectral dimension reduction the number of parameters of 2D-CNNs becomes extremely large because HSIs generally have hundreds of bands. On the other hand, if dimension reduction is conducted, it may destroy the spectral structure information that is important for discriminating different objects.
Generally speaking, 3D-CNN based approaches achieve better performance than 2D-CNN based approaches [3, 17]. However, despite being more accurate, the existing 3D-CNN based approaches still have two shortcomings: 1) compared with 2D convolutions, 3D convolutions have more parameters, and 3D CNN models are computation-intensive; 2) limited by the training samples in HSI datasets, 3D-CNN models employed in HSI classification mostly consist of fewer than five convolution layers, although a large number of experiments in computer vision have proved that network depth is very important for improving the performance of tasks related to image processing.
In this section, we start with a description of the proposed AINet and end with an introduction to the proposed data fusion transfer learning strategy.
### AINet for HSI Classification
**Network Structure** Figure 1 shows the overall framework of the proposed AINet for HSI classification. In order to fully utilize the spectral and spatial information contained in HSI, we extract \\(L\\times S\\times S\\)-sized cubes from the raw HSI as samples, where \\(L\\) and \\(S\\) denote the number of spectral bands and the spatial size, respectively (following [3], we set \\(S\\) to 27 in this paper). The samples are then fed to AINet to extract deep spectral-spatial features and to calculate classification scores. Inspired by the design of ResNet [1], AINet employs a similar basic structure and introduces some key modifications tailored to HSI data. AINet starts with a 3D convolution layer, then stacks six AI units with increasing widths, and ends with one 3D spatial pyramid pooling layer and one fully connected layer. Specifically, the channels of the six AI units are 32, 64, 64, 128, 128 and 256, respectively. In order to reduce the feature dimensions, four max pooling layers with kernel=[2, 2, 3] and stride=[2, 2, 2] are interleaved among the six AI units.
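The skeleton below sketches this architecture in PyTorch, under stated assumptions: the paper does not name a framework, the stem kernel size, pooling axis order and the exact space/spectrum unit arrangement are guesses, and `AIUnit` and `SpectralPyramidPool` are the modules sketched in the following paragraphs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AINet(nn.Module):
    """Sketch of AINet: 3D conv stem, six AI units, pyramid pooling, classifier."""

    def __init__(self, n_classes):
        super().__init__()
        self.stem = nn.Conv3d(1, 32, kernel_size=3, padding=1)  # assumed kernel
        widths = [32, 64, 64, 128, 128, 256]
        layers, c_in = [], 32
        for i, c_out in enumerate(widths):
            # one plausible space/spectrum arrangement (see the AI Unit section)
            layers.append(AIUnit(c_in, c_out, spatial=(i % 3 == 0)))
            if i < 4:  # four max-pooling layers; kernel/stride from the text
                layers.append(nn.MaxPool3d(kernel_size=(2, 2, 3),
                                           stride=(2, 2, 2), ceil_mode=True))
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.pool = SpectralPyramidPool(levels=(1, 2, 3))  # fixed 256*6 output
        self.fc = nn.Linear(256 * 6, n_classes)

    def forward(self, x):                # x: (batch, 1, L, S, S)
        x = self.features(self.stem(x))
        x = self.pool(x).flatten(1)      # (batch, 256 * 6)
        return F.log_softmax(self.fc(x), dim=1)
```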
**3D Pyramid Pooling** Before the fully connected layer, there is a 3D pyramid pooling layer for mapping features of different sizes to vectors with fixed dimensions. Different HSI datasets are usually captured by different sensors and have various numbers of spectral bands; for example, the Pavia University dataset has 103 bands and the Indian Pines dataset contains 200 bands. With the 3D pyramid pooling layer, the same network can be applied to different HSI datasets without any modification. In this paper, the 3D spatial pyramid pooling layer is composed of three pooling levels (\\(1\\times 1\\times 1\\), \\(2\\times 1\\times 1\\), \\(3\\times 1\\times 1\\)). As the last AI unit has 256 channels, the outputs of the 3D spatial pyramid pooling layer are \\(256\\times 6\\times 1\\times 1\\)-sized cubes.
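A possible implementation of this pooling layer is sketched below; whether max or average pooling is applied at each level is not stated in the text, so max pooling here is an assumption.

```python
import torch
import torch.nn as nn

class SpectralPyramidPool(nn.Module):
    """Pool the spectral axis to sizes 1, 2 and 3 (and space to 1x1), then concat."""

    def __init__(self, levels=(1, 2, 3)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.AdaptiveMaxPool3d((k, 1, 1)) for k in levels])

    def forward(self, x):                # x: (batch, 256, L', h, w), any L'
        return torch.cat([p(x) for p in self.pools], dim=2)

# Example: features with 11 remaining bands map to a fixed 256 x 6 x 1 x 1 cube:
# SpectralPyramidPool()(torch.randn(2, 256, 11, 3, 3)).shape
# -> torch.Size([2, 256, 6, 1, 1])
```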
**Training and Loss** We employ log_softmax [1] as the activation function in the fully connected layer. During training, we
Figure 1: Framework of AINet. On the left, the \\(L\\times S\\times S\\)-sized samples from the neighborhood window centered around each target pixel are extracted first, and then the samples are fed to AINet to extract deep spectral-spatial features. Finally, the classification scores are calculated by the classifier.
take the negative log likelihood as the loss function, and add an \(L_{2}\) regularization term with weight \(10^{-5}\) to the loss function to alleviate over-fitting. The optimizer is stochastic gradient descent (SGD) with momentum [15]. The same setting is adopted for all experiments: momentum, weight decay, batch size, epochs and learning rate are 0.9, 1e-5, 20, 60 and 0.01, respectively. During the last 12 epochs, the learning rate is decreased to 0.001.
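For reference, this configuration amounts to roughly the following sketch, where the model and data loader are hypothetical stand-ins and only the settings stated in the text are encoded:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(model, loader, device, epochs=60):
    # SGD with momentum 0.9 and weight decay 1e-5 (the L2 term); the batch
    # size of 20 is assumed to be set in the loader. lr = 0.01, dropped
    # to 0.001 for the final 12 epochs.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=1e-5)
    criterion = nn.NLLLoss()  # negative log likelihood, pairs with log_softmax
    for epoch in range(epochs):
        if epoch == epochs - 12:
            for group in optimizer.param_groups:
                group["lr"] = 0.001
        for cubes, labels in loader:  # cubes: (N, 1, L, S, S) HSI samples
            cubes, labels = cubes.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(F.log_softmax(model(cubes), dim=1), labels)
            loss.backward()
            optimizer.step()
```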
### AI Unit
Because 3D convolutions can learn spectral and spatial information from raw HSI data, 3D-CNN based methods obtain state-of-the-art performance for HSI classification. However, compared with 2D convolutions, 3D convolutions are prone to over-fitting and computation-intensive. To address these problems, we propose the asymmetric inception unit (AI unit), which consists of a space inception unit and a spectrum inception unit; its structure is illustrated in Figure 2. In the space inception unit there are three convolution paths: path one has only one point-wise convolution layer; path two consists of one point-wise convolution layer and one 2D convolution layer with \(1\times 3\times 3\)-sized kernels; and path three has one point-wise convolution layer and two 2D convolution layers. The outputs of the paths are concatenated along the channel dimension and added to the output of the shortcut connection. Inspired by the Inception networks [19], we set the three paths to different widths with a split ratio of 1:2:1. In the last two paths, the width of the point-wise convolution layer is half that of the other convolution layers. For instance, in the AI unit with 32 channels, the width of the first path is 8; for the second path, the widths of the point-wise convolution layer and the \(1\times 3\times 3\)-sized convolution layer are 8 and 16, respectively; and the widths of the three layers of the last path are 4, 8 and 8. The spectrum inception unit has the same overall structure as the space inception unit, except that the \(1\times 3\times 3\)-sized 2D convolution layers are replaced with \(3\times 1\times 1\)-sized 1D convolution layers.
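The space inception unit can be sketched as follows; batch normalization and ReLU after each convolution are our assumption, since the text does not specify them, and the spectrum variant is obtained by swapping the kernel shapes:

```python
import torch
import torch.nn as nn

def conv_bn(cin, cout, kernel):
    pad = tuple(k // 2 for k in kernel)
    return nn.Sequential(nn.Conv3d(cin, cout, kernel, padding=pad, bias=False),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class SpaceInception(nn.Module):
    """Space inception unit with a 1:2:1 channel split over three paths.

    For channels=32: path1 -> 8, path2 -> 8 then 16, path3 -> 4, 8, 8;
    the concatenated output (8 + 16 + 8 = 32) is added to the shortcut.
    The spectrum inception unit replaces the (1, 3, 3) kernels below
    with (3, 1, 1) kernels.
    """
    def __init__(self, cin, channels):
        super().__init__()
        w = channels // 4  # 1:2:1 split of the output width
        self.path1 = conv_bn(cin, w, (1, 1, 1))
        self.path2 = nn.Sequential(conv_bn(cin, w, (1, 1, 1)),
                                   conv_bn(w, 2 * w, (1, 3, 3)))
        self.path3 = nn.Sequential(conv_bn(cin, w // 2, (1, 1, 1)),
                                   conv_bn(w // 2, w, (1, 3, 3)),
                                   conv_bn(w, w, (1, 3, 3)))
        self.shortcut = (nn.Identity() if cin == channels
                         else conv_bn(cin, channels, (1, 1, 1)))

    def forward(self, x):
        paths = torch.cat([self.path1(x), self.path2(x), self.path3(x)], dim=1)
        return paths + self.shortcut(x)
```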
In HSI datasets, the spectral resolution is much higher than the spatial resolution and the spectral information is much richer. Therefore, during spectral-spatial feature extraction, we pay more attention to spectral features. The proposed AINet contains six AI units; the four units in the middle can be divided into two groups, each stacking two units of equal width. Here, instead of stacking two identical AI units in each group, we stack one space inception unit and two spectrum inception units. This differs from popular networks such as ResNet [14] and MobileNet [16], which build the whole model by stacking identical units. Figure 3 shows the difference between one AI unit and two AI units.
### Transfer Learning with Data Fusion
In RGB image classification, pretraining a large-scale network on the ImageNet dataset, which has over 14 million hand-annotated images in over 20 thousand categories, is common practice and is very useful for improving performance and overcoming the problem of limited training samples. In transfer learning, the diversity of the pretraining dataset is a key factor: for example, pretraining a model on a dataset of one million images with one thousand categories usually achieves better performance than pretraining on a dataset of 10 million images with only ten categories. We suspect that a model pretrained on more diverse samples acquires a better generalization ability.
In HSI classification, the labeled samples are quite limited, and each HSI dataset contains only a few categories. To further improve HSI classification performance, we propose a data fusion transfer learning strategy. As shown in Figure 4, the strategy is composed of pretraining and fine-tuning. During pretraining, the proposed network is trained on two source HSI datasets; here, the Pavia Center and Salinas datasets are used, as they have the largest numbers of labeled samples among the public HSI datasets. To be more specific, the model is initialized with a Gaussian distribution and pretrained on one source HSI dataset for \(N\) epochs, and then the feature extraction part is retained while the classi
Figure 3: Illustration of stacking two AI units. (a) AI unit \\(\\times 1\\); (b) AI unit \\(\\times 2\\). Instead of stacking two AI units with the same type, we stack one space inception unit and two spectrum inception units to form AI unit \\(\\times 2\\) as shown in (b).
Figure 2: Illustration of an AI unit. In the AI unit, the 3D convolution layer is replaced with two asymmetric inception units, namely the space inception unit and the spectrum inception unit. In the space inception unit, the input cube is fed to three different paths: path one has only a point-wise convolution layer, path two consists of one point-wise convolution layer and one 2D convolution layer, and path three has one point-wise convolution layer and two 2D convolution layers. The outputs of the paths are concatenated along the channel dimension and added to the output of the shortcut connection. The spectrum inception unit has the same structure as the space inception unit, except that the \(1\times 3\times 3\)-sized convolution layers are replaced with \(3\times 1\times 1\)-sized convolution layers.
fier is reinitialized with a Gaussian distribution. Later on, the feature extraction part and the classifier are pretrained on the other source HSI dataset for \(\frac{N}{2}\) epochs with different learning rates. In this paper, \(N\) is set to 10, and the learning rate used for the feature extraction part is one tenth of that used for the classifier on the second pretraining HSI dataset.
After the model is pretrained on the two source HSI datasets, we transfer the whole model except the classifier to the fine-tuning model built for the target HSI dataset as initialization, and then fine-tune the transferred part and the new classifier with the same learning rate as the one used for training on the second source HSI dataset.
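The two-stage pretraining can be summarized by the sketch below; `features`, `classifier` and `run_sgd` are illustrative stand-ins for the actual training code, not a published API:

```python
import torch.nn as nn

def gaussian_reinit(module, std=0.01):
    """Reinitialize all parameters of a module from a Gaussian distribution."""
    for p in module.parameters():
        nn.init.normal_(p, mean=0.0, std=std)

def data_fusion_pretrain(model, src1_loader, src2_loader, run_sgd, N=10, lr=0.01):
    # Stage 1: Gaussian init, then N epochs on the first source dataset.
    gaussian_reinit(model)
    run_sgd([{"params": model.parameters(), "lr": lr}], src1_loader, epochs=N)

    # Stage 2: keep the feature extractor, reinitialize the classifier, and
    # train N/2 epochs on the second source; the transferred feature part
    # uses one tenth of the learning rate used for the second dataset.
    gaussian_reinit(model.classifier)
    run_sgd([{"params": model.features.parameters(), "lr": lr / 10},
             {"params": model.classifier.parameters(), "lr": lr}],
            src2_loader, epochs=N // 2)
    # Fine-tuning then transfers everything except the classifier to the
    # model built for the target dataset, using the same lr as stage 2.
    return model
```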
## 3 Experiments
### Datasets and Experiment Settings
In this paper, we compare the proposed AINet with four other CNN-based approaches for HSI classification on three public HSI datasets: Pavia University, Indian Pines and KSC. For the transfer learning experiments, the Pavia Center and Salinas datasets are employed as source datasets. The false-color composite and ground truth of each dataset are shown in Figure 5. A brief introduction to each dataset is given below, and more information can be found on the website2.
Footnote 2: [http://www.ehu.eus/ccwintco/index.php](http://www.ehu.eus/ccwintco/index.php)

For each method, the number of training samples, the number of parameters used in the convolution layers, the depth of the CNN models, overall accuracy (OA), average accuracy (AA) and the kappa coefficient (\(K\)) are reported. OA is the ratio between the number of correctly classified test samples and the total number of test samples. AA is the mean of the per-class accuracies. \(K\) is a coefficient which measures inter-rater agreement for qualitative items [19].
From Tables 1-3, we can see that the proposed AINet achieves the best classification performance on all datasets. For instance, on the Indian Pines dataset, the OA of AINet is 99.14%, which is 9.15% better than 2D-CNN, 1.58% better than 3D-CNN and 0.74% better than SSRN. The experiments also show that all 3D-CNN based HSI classification methods outperform 2D-CNN. From 3D-CNN and SSRN to AINet, the model depth increases (4, 12 and 32 layers, respectively) and the classification accuracy goes up. Although AINet is much deeper than SSRN, it has only slightly more parameters than SSRN and far fewer than 3D-CNN.
### Results of Transfer Learning
In this part, we combine the proposed AINet with data fusion transfer learning to further improve classification performance. For each dataset, we choose \(\{15,30\}\) samples from each class as training samples to test the effect of the number of training samples. The transfer learning results are shown in Figure 6 and Figure 7, where AINet denotes our model trained only on the target dataset; AINet+T1 denotes our model pretrained on Pavia Center, which is captured by the same sensor as the Pavia University dataset, and fine-tuned on the target dataset; AINet+T2 denotes our model pretrained on the Salinas dataset, which is captured by the same sensor as the Indian Pines dataset, and fine-tuned on the target dataset; AINet+T3 and AINet+T4 are our models pretrained on both the Pavia Center and Salinas datasets, in the two different orders, and fine-tuned on the target dataset.
From the overall trends in Figure 6 and Figure 7, we can draw two conclusions. First, AINet benefits from transfer learning, and the performance gains are large especially when few training samples are available. Second, pretraining on two different datasets achieves larger gains than pretraining on a single dataset; we infer that the benefits mainly come from the greater variety of classes in the pretraining data.
Two further points can be concluded from Figure 6. First, from the performance of AINet, AINet+T1 and AINet+T2 in (a), (b) and (c), pretraining AINet on another HSI dataset improves performance on the target HSI dataset, regardless of whether the source and target datasets are captured by the same sensor. Second, from (a) and (b), when pretraining on another HSI dataset with small training samples, the gains brought by a same-sensor source are much larger than those brought by a different-sensor source.
From Figure 7, when more training samples become available, HSI classification still benefits from transfer learning. However, note the pink bars in Figure 7, which show a decrease in performance; the reason behind this is still unclear and needs further investigation. One possible explanation is that a different learning rate is used on the pretraining dataset than for fine-tuning on the target dataset, which may prevent the training processes on the two datasets from converging in exactly the same direction.
## 4 Conclusions
This paper proposes AINet, a 3D asymmetric inception network for hyperspectral image classification. Firstly, AINet uses a 3D CNN that is lightweight yet very deep,
| Models | 1D-CNN | 2D-CNN | 3D-CNN | SSRN | AINet |
|---|---|---|---|---|---|
| # train | 1765 | 1765 | 1765 | 1765 | 1765 |
| # param. | **25920** | 0.183M | 44.893M | 0.453M | 0.487M |
| Depth | 6 | 4 | 4 | 12 | **32** |
| OA | 87.81 | 89.99 | 97.56 | 98.40 | **99.14** |
| AA | 93.12 | 97.19 | 99.23 | 98.52 | **99.47** |
| \(K\) | 85.30 | 87.95 | 97.02 | 98.14 | **99.00** |

Table 2: Classification results for the Indian Pines dataset.
| Models | 1D-CNN | 2D-CNN | 3D-CNN | SSRN | AINet |
|---|---|---|---|---|---|
| # train | 459 | 459 | 459 | 459 | 459 |
| # param. | **14904** | 0.183M | 5.849M | 0.453M | 0.487M |
| Depth | 5 | 4 | 4 | 12 | **32** |
| OA | 89.23 | 94.11 | 96.31 | 98.65 | **99.01** |
| AA | 83.32 | 91.98 | 94.68 | 97.78 | **98.65** |
| \(K\) | 86.91 | 93.44 | 95.90 | 98.54 | **98.90** |

Table 3: Classification results for the KSC dataset.
Figure 6: Transfer learning experiments with 15 training samples per class. (a) Pavia University; (b) Indian Pines; (c) Kennedy Space Center.
Figure 7: Transfer learning experiments with 30 training samples per class. (a) Pavia University; (b) Indian Pines; (c) Kennedy Space Center.
which can exploit the potential of deep learning for extracting representative features while alleviating the problems brought by limited annotated datasets. Secondly, considering the properties of hyperspectral images, spectral signatures are emphasized over spatial contexts. Moreover, a data fusion transfer learning strategy is utilized for better model initialization and shorter training time. In the future, there are two topics we are keen to pursue: the first is to investigate the reduction in training time brought by transfer learning, and the second is to adopt policies to overcome the data imbalance in HSI classification.
## References
* [Benediktsson _et al._2005] Jon Atli Benediktsson, Jon Aevar Palmason, and Johannes R Sveinsson. Classification of hyperspectral data from urban areas based on extended morphological profiles. _IEEE Transactions on Geoscience and Remote Sensing_, 43(3):480-491, 2005.
* [Chen _et al._2014] Yushi Chen, Zhouhan Lin, Xing Zhao, Gang Wang, and Yanfeng Gu. Deep learning-based classification of hyperspectral data. _IEEE Journal of Selected topics in applied earth observations and remote sensing_, 7(6):2094-2107, 2014.
* [Chen _et al._2015] Yushi Chen, Xing Zhao, and Xiuping Jia. Spectral-spatial classification of hyperspectral data based on deep belief network. _IEEE Journal of Selected topics in applied earth observations and remote sensing_, 8(6):2381-2392, Jun. 2015.
* [Chen _et al._2016] Yushi Chen, Hanlu Jiang, Chunyang Li, Xiuping Jia, and Pedram Ghamisi. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. _IEEE Transactions on Geoscience and Remote Sensing_, 54(10):6232-6251, 2016.
* [de Brebisson and Vincent2015] Alexandre de Brebisson and Pascal Vincent. An exploration of softmax alternatives belonging to the spherical loss family. _arXiv preprint arXiv:1511.05042_, 2015.
* [He _et al._2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, Jun. 2016.
* [Howard _et al._2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_, 2017.
* [Jia _et al._2015] Sen Jia, Xiujun Zhang, and Qingquan Li. Spectral-spatial hyperspectral image classification using \\(l_{1/2}\\) regularized low-rank representation and sparse representation-based graph cuts. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 8(6):2473-2484, 2015.
* [Krizhevsky _et al._2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _Advances in neural information processing systems_, pages 1097-1105, 2012.
* [Li _et al._2017a] Wei Li, Guodong Wu, Fan Zhang, and Qian Du. Hyperspectral image classification using deep pixel-pair features. _IEEE Transactions on Geoscience and Remote Sensing_, 55(2):844-853, 2017.
* [Li _et al._2017b] Ying Li, Haokui Zhang, and Qiang Shen. Spectral-spatial classification of hyperspectral imagery with 3d convolutional neural network. _Remote Sensing_, 9(1):67, 2017.
* [Melgani and Bruzzone2004] Farid Melgani and Lorenzo Bruzzone. Classification of hyperspectral remote sensing images with support vector machines. _IEEE Transactions on geoscience and remote sensing_, 42(8):1778-1790, 2004.
* [Qian _et al._2013] Yuntao Qian, Minchao Ye, and Jun Zhou. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. _IEEE Transactions on Geoscience and Remote Sensing_, 51(4):2276-2291, 2013.
* [Szegedy _et al._2017] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In _Proc. AAAI Conf. Intell._, volume 4, page 12, 2017.
* [Tarabalka _et al._2009] Yuliya Tarabalka, Jon Atli Benediktsson, and Jocelyn Chanussot. Spectral-spatial classification of hyperspectral imagery based on partitional clustering techniques. _IEEE Transactions on Geoscience and Remote Sensing_, 47(8):2973-2987, 2009.
* [Tarabalka _et al._2010] Yuliya Tarabalka, Mathieu Fauvel, Jocelyn Chanussot, and Jon Atli Benediktsson. Svm and mrf-based method for accurate classification of hyperspectral images. _IEEE Geoscience and Remote Sensing Letters_, 7(4):736-740, 2010.
* [Thompson and Walter1988] W Douglas Thompson and Stephen D Walter. A reappraisal of the kappa coefficient. _J. Clin. Epidemiol._, 41(10):949-958, Jan. 1988.
* [Wang _et al._2016] Qi Wang, Jianzhe Lin, and Yuan Yuan. Salient band selection for hyperspectral image classification via manifold ranking. _IEEE transactions on neural networks and learning systems_, 27(6):1279-1289, 2016.
* [Yokoya _et al._2016] Naoto Yokoya, Jonathan Cheung-Wai Chan, and Karl Segl. Potential of resolution-enhanced hyperspectral data for mineral mapping using simulated enmap and sentinel-2 images. _Remote Sensing_, 8(3):172, 2016.
* [Yuan _et al._2016] Yuan Yuan, Jianzhe Lin, and Qi Wang. Hyperspectral image classification via multitask joint sparse representation and stepwise mrf optimization. _IEEE transactions on cybernetics_, 46(12):2966-2977, 2016.
* [Zhang _et al._2012] Lefei Zhang, Liangpei Zhang, Dacheng Tao, and Xin Huang. On combining multiple features for hyperspectral remote sensing image classification. _IEEE Transactions on Geoscience and Remote Sensing_, 50(3):879-893, 2012.
* [Zhang _et al._2016] Liangpei Zhang, Lefei Zhang, and Bo Du. Deep learning for remote sensing data: A technical tutorial on the state of the art. _IEEE Geoscience and Remote Sensing Magazine_, 4(2):22-40, 2016.
* [Zhong _et al._2018] Zilong Zhong, Jonathan Li, Zhiming Luo, and Michael Chapman. Spectral-spatial residual network for hyperspectral image classification: A 3-d deep learning framework. _IEEE Transactions on Geoscience and Remote Sensing_, 56(2):847-858, Feb. 2018.
# A Fine-Grained Object Detection Model for Aerial Images Based on YOLOv5 Deep Neural Network
ZHANG Rui
XIE Cong
DENG Liwei
Manuscript Received Mar. 15, 2022; Accepted June 21, 2022. This work was supported by the National Science Foundation of Heilongjiang Province (LH2019F024) and the Key R&D Program Guidance Projects of Heilongjiang Province (GZ20210065). (Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China)
**Keywords:** Fine-grained object detection, high-resolution aerial images, oriented object detection, YOLOv5.
To detect and classify rotating targets in remote sensing images, most studies have used angular representations of rotating bounding boxes based on five parameters, including the rotation angle of an instance, its center position, width, and height. However, most of these methods suffer from boundary discontinuity, i.e., loss discontinuity caused by angle periodicity and regression inconsistency, which destabilizes the training process and may further make the orientation prediction inaccurate. For the detection of rotating objects, the accuracy of angle prediction is very important.
In addition, deficiencies in the quantity and quality of existing remote sensing datasets, compared with datasets of natural scenes, have hindered the development of fine-grained object recognition in aerial remote sensing. In particular, most advanced detectors use datasets that contain only coarse-grained or very little fine-grained information, making it difficult for otherwise well-performing detectors to meet practical applications. Compared with ordinary object detection, fine-grained object detection can distinguish similar targets well and generalizes better in real scenarios.
Most rotation detectors are derived from general object detectors; unlike anchor-based horizontal detectors, they mainly use rotated anchors for regression. Rotation-based anchor regression methods are mainly divided into 90-degree regression based on five parameters (target center, width, height, and angle), 180-degree regression based on five parameters, and quadrilateral regression based on eight parameters.
Parametric regression-based rotation detection methods have achieved competitive performance in different vision tasks and have been the cornerstone of many excellent detection algorithms. However, they share a general boundary discontinuity problem, usually caused by angle periodicity and angle ordering, and no fundamental solution exists regardless of which parameter representation of the bounding box is used. Several targeted works have been proposed to address the boundary problem and the inconsistency between angle regression and classification. For example, SCRDet [14] proposes an IoU-Smooth L1 loss to suppress the abrupt loss change caused by angular periodicity, thus reducing the model learning difficulty. The RoI transformer [15] extracts rotation-invariant features to keep the subsequent classification and regression consistent in accuracy. S\({}^{2}\)A-Net [16] uses a feature alignment module and an orientation detection module to achieve a better balance of speed and accuracy, alleviating the common inconsistency between classification scores and regression accuracy. R\({}^{3}\)Det [17] proposes a one-stage rotation detector with end-to-end refinement for accurate and fast recognition, addressing the feature misalignment problem. Unfortunately, these methods are essentially still regression-based and do not address the root cause of the problem.
As a fast and efficient one-stage algorithm, YOLOv5 shows great potential for capturing small targets in natural images, although it cannot be used directly to accurately estimate the orientation and position of rotated objects. However, YOLOv5's positive and negative sample assignment differs from traditional anchor-based IoU matching: by directly using width and height ratios as the criterion, training can produce optimal shape matching. As for angle regression, it can be handled by adding an extra classification task; as long as the angle is handled well, the horizontal anchor can be made to envelop the ground truth (gt) rotated bounding box. Moreover, among the parameters of the long-side definition based on 180-degree regression, only the angle \(\theta\) has a boundary problem, and CSL [18] is exactly able to handle it.
In this paper, we propose an improved YOLOv5-based method, named YOLOv5_csl, for fine-grained object detection in aerial and remote sensing images. As shown in Fig.1, we use the YOLOv5 algorithm as the baseline and capture the angles of arbitrarily oriented, cluttered targets in aerial images with the circular smooth label method. Adding a self-attention mechanism increases the receptive field of the model, which facilitates fitting the targets in the dataset and improves the detection accuracy of subtle targets. We have conducted extensive experiments on two widely used public evaluation datasets, DOTA [19] and FAIR1M [20], and the results show that the proposed YOLOv5_csl is a very effective method for aerial object detection.
The main contributions of this work are as follows:
1) Using YOLOv5 as a baseline, an angle classification module based on CSL is added, so that the improved algorithm can accurately estimate the angles of rotating targets.
2) An attention mechanism module is added so that the model can better utilize global context information to select important regions in the image, improving the recognition accuracy of fine-grained targets.
3) To reduce the number of parameters of the model and improve the practicability of the improved algorithm, we use the latest activation function Acon to replace the activation functions of the backbone and neck parts in the original YOLOv5 network.
The rest of this article is organized as follows: Section 2 reviews related work, Section 3 details the proposed method, Section 4 presents the experiments and results, and Section 5 provides a discussion and a summary of the whole paper.
## 2 Related Work
In this section, we first review methods related to rotating object detectors for aerial and remote sensing images and summarize solutions to the boundary problem among them. Then, we investigate the role of the attention mechanism module in object detection. Meanwhile, to reduce the number of parameters and improve accuracy, we investigate the effect of different activation functions on object detection accuracy.
**1. Rotating object detectors in aerial images**
Rotating object detectors are among the most effective tools for the semantic understanding of aerial images [21]. Since aerial images are captured from overhead at a certain altitude, they usually contain targets that are arbitrarily oriented, densely distributed, and of anomalous aspect ratios. Therefore, improving YOLOv5 into a rotating object detector requires an additional angle parameter. Mainstream rotation detectors are obtained by introducing different oriented bounding box (OBB) representations into generic detectors, among which angle-based regression methods dominate.
This kind of method directly regresses five shape parameters, including center position, width, height, and rotation angle, to obtain the bounding box of a rotated object, and is the prevailing approach for aerial object detection. Anchor boxes in general object detectors, or oriented anchor boxes in aerial object detectors, are considered vital to achieving good performance. Several methods [22], [23] set different angles, aspect ratios, and scales for the oriented anchor boxes to better estimate the angle-based OBB. The region proposal network (RPN) [24] of Yang _et al._ first regresses a horizontal box and then predicts the angle-based OBB, using the IoU-Smooth L1 loss function to cope with drastic changes in the loss value. To obtain rotated feature representations, Ding _et al._ proposed the RoI transformer to convert horizontal boxes into rotated boxes. Han _et al._ used a single-stage network to align object-oriented and axis-aligned features in a fully convolutional manner. These methods suffer from loss discontinuity caused by the periodicity of angles, which compromises the robustness of the detector and may lead to inaccurate prediction of oriented objects.
Since the learned object parameters are periodic, crossing the boundary of the periodic range causes a sudden change in the loss value and increases the learning difficulty of the model; the boundary problem is illustrated in Fig.2. Three main approaches are used to solve it.
First, one can find a new definition of the rotated object that contains no periodically varying parameters yet can still represent a periodically rotating target, which eliminates the boundary problem at the root. Examples follow the anchor-free/mask idea: P-RSDet [25]
Figure 1: Architecture of the proposed detector.
Figure 2: Boundary problems with rotation detection.
represents an arbitrary object in a polar coordinate system, BBA-Vectors [26] represents an oriented rectangle with vectors, and Oriented RepPoints [27] represents an arbitrarily shaped object with point sets. Although this approach effectively avoids the boundary problem, it introduces additional parameters for computing positive and negative samples, which increases the computational cost. Second, one can start from the loss function. For example, when Smooth L1 considers each parameter separately, the loss function can be given the same periodicity as the angle, so that the difference of \(\theta\) at the boundary can be large while the loss change remains very small; alternatively, all regression parameters can be considered jointly via a rotated IoU loss, which circumvents the boundary problem, although the rotated IoU is not differentiable (approximately differentiable alternatives include KLD [28] and GWD [29]). Finally, the complex regression problem can be replaced by a simple classification, which is the approach chosen in this paper.
To turn YOLOv5 into a rotation detector, angle prediction must be added. Following Yang _et al._, we treat the angle prediction as a classification problem to better constrain the range of the predicted results. The circular smooth label is specifically used to handle the periodicity of angles and to increase the error tolerance between adjacent angles. Practical calculation shows that the accuracy loss caused by this fine-grained angle discretization is negligible. That is, an additional angle classification task is added to YOLOv5 to recognize rotated targets.
**2. Attention mechanism module**
In the field of fine-grained visual recognition (FGVR), learning fine representations of objects plays a crucial role. Compared with generic image recognition, fine-grained visual recognition is more challenging and practical. FGVR methods fall into three main categories: feature encoding methods, localization-based methods, and attention-based methods. Attention-based methods avoid manual region annotation by using the attention property of CNN feature maps to select important regions in an image. In recent years, self-attention based modules have achieved comparable or even better performance than their CNN counterparts on many vision tasks.
Since CNN is a strong local model, it usually uses a \\(3\\times 3\\) convolution to scan the whole image and extract information layer by layer. In contrast, the self-attention module employs a weighted value averaging operation based on the input feature context to dynamically compute attention weights through the similarity function between relevant pixels. This flexibility allows the attention module to focus on different regions adaptively and capture more features. Thus, the attention module can compensate for the problem that CNN is too local and not global enough to capture global contextual information, i.e., the attention module can allow the model to see more broadly. Earlier studies, such as SENet [30], CBAM [31], showed that self-attention can be used as an enhancement of the convolution module. Recently, self-attention has been proposed as an independent block to replace traditional convolution in CNN modules such as SAN [32], BoTNet [33]. Some work aims to design a more flexible feature extractor by aggregating information from a larger range of pixels.
**3. The role of the activation function**
An important factor in designing a high-performance object detection model is the selection of an appropriate activation function. Activation functions can be characterized by properties such as their derivatives, monotonicity, and upper and lower bounds. In this regard, ReLU [34], Leaky-ReLU [35], and Mish [36] are widely used in dense object detection models. Through continuous iteration of the technique, the swish activation found by neural architecture search (NAS) has achieved the highest accuracy on the challenging ImageNet benchmark [37]. Much practice has shown that it can simplify optimization and obtain better performance [38].
It has been pointed out experimentally [39] that swish can be expressed as a smooth approximation of ReLU, and this approximation is summarized as ACON. The resulting functions (ACON-A, ACON-B, ACON-C) are smoothly differentiable, with swish being just one case (ACON-A). The family is conceptually simple and adds little computational overhead, yet significantly improves accuracy. Therefore, to reduce the number of model parameters while improving performance, we replace the activation functions of the backbone and neck of YOLOv5 with ACON.
## 3 Proposed Method

**1. Overview**
We outline our approach in Fig.1 above. The proposed method is a one-stage rotation detector modified from YOLOv5. The figure shows the core improvement of this work: a rotation detection branch, based on CSL prediction, is added to the multitask pipeline of the prediction head. It is worth noting that the attention module and the new activation function added to the backbone are not shown in the network structure diagram. In this paper, we adopt a very simple way to apply YOLOv5 to the aerial image domain and improve the detection performance of the model. Our focus is not on designing a completely new rotation detection network; rather, we hope the analysis in this paper promotes a better understanding of the step from horizontal to rotation detection.
**2. Why choose YOLOv5 as the baseline**
A deep learning-based object detector is usually composed of a data loader (image preprocessing), a backbone (extracting target features), a neck (aggregating target features), a head (the prediction part), and loss functions. In the network architecture of YOLOv5, the superior learning ability of the cross stage partial network (CSPNet) is combined into CSPDarkNet53 to further improve network performance. Compared with a ResNet backbone, this significantly reduces the network parameters while enriching the residual feature information and improving the feature learning capability.
Neck, as one of the core components of the detection network, is mainly used to fuse information from different features, which makes the model with richer features and stronger representation. Neck determines the number of head, and targets at different scales are assigned to different heads for learning. YOLOv5's neck still uses a multi-scale feature fusion structure: SPP+PANet. SPP uses \\(1\\times 1,\\ 5\\times 5,\\ 9\\times 9,\\ 13\\times 13\\) maximum pooling for multi-scale fusion, which increases the acceptance range of backbone features more effectively than using only one pooling operation and significantly separates the most important features of the target's surroundings. To better improve the feature extraction capability, YOLOv5's FPN layer is followed by a feature pyramid containing two PAN structures. The FPN layer conveys high-level semantic information top-down to facilitate image classification, while the feature pyramid conveys bottom-up localization information to facilitate differentiation of location and scale variations. Therefore, YOLOv5 has great potential in small target detection accuracy, and it is very beneficial to use YOLOv5 as the baseline algorithm for rotation detector improvement.
**3. Rotation detector design**
We aim to design a rotation detector for fine-grained objects in high-resolution remote sensing images based on YOLOv5, which is centered on the use of a simple and effective method. It is well known that a rotating bounding box in any direction can be represented by a rectangular box with an angle parameter. This can be divided into angle regression in the range of 90 degrees and angle regression in the range of 180 degrees because of the different angles of the objects. As shown in Fig.3, \\(\\theta\\) in the 90-degree range represents the acute angle to the \\(x\\)-axis while \\(\\theta\\) in the 180-degree range is from the long side of the rectangle to the \\(x\\)-axis.
As analyzed above, the main reason for the boundary problem in regression-based methods is that the desired prediction may fall outside the actually defined bounded range. The CSL method therefore avoids such out-of-range predictions by means of an angle classification task whose prediction range can be controlled. The simplest way to realize this task is to treat each angle of the targets as a unique category, with the number of angle categories depending on the maximum interval of all target angles. However, converting regression to classification may cause some loss of accuracy. Take the five-parameter method with a 90-degree range as an example: we first calculate the maximum and expected accuracy losses (assuming the angle error is uniformly distributed) to determine the impact of this loss on the final result. The formulas are as follows:
\\[\\text{Max}(loss)=\\gamma/2 \\tag{1}\\]
\\[E(loss)=\\int_{a}^{b}x\\times\\frac{1}{b{-}a}\\text{d}x=\\int_{0}^{\\gamma/2}x \\times\\frac{1}{\\gamma/2{-}0}\\text{d}x=\\frac{\\gamma}{4} \\tag{2}\\]
where \\(\\gamma\\) represents the entire angle division range, taking 1 degree per class as an example, the maximum accuracy and desired loss are 0.5 and 0.25, respectively. This has little impact on subsequent evaluations, and not all objects have such dramatic aspect ratios. Therefore, it is feasible to transform the angle prediction method into a classification problem. However, just treating in the simplest possible way would result in angle classification loss that cannot measure the angular distance between the prediction and the label. For example, if gt is 0 degrees, the loss value is the same when we predict 1 degree and \\(-90\\) degrees, but predict
Figure 3: Two definitions of the bounding box.
ing 1 degree is clearly acceptable.
The discretization and prediction equations for the angle are as follows:
\\[\\theta_{\\rm{distance}}=\\frac{\\theta_{\\rm{continuous}}}{\\gamma} \\tag{3}\\]
\\[\\theta_{\\rm{output}}=-\\gamma\\left(0.5+\\arg\\max\\left(sigmoid\\left(\\theta_{\\rm{ predict}}\\right)\\right)\\right) \\tag{4}\\]
Given the above problem, the specific expression of the circular smooth label is as follows:
\\[CSL(x)=\\left\\{\\begin{array}{ll}f(x),&\\theta-r<x<\\theta+r\\\\ 0,&\\mbox{otherwise}\\end{array}\\right. \\tag{5}\\]
where \\(f(x)\\) is the window function, and the window radius is controlled by \\(r\\). The window function has four main properties: periodicity, symmetry, maximum, and monotonicity. Since the introduction of the window function, the model can measure the angular distance between the predicted label and the ground truth bounding box. That is, within a certain range, the closer the predicted value is to the real value, the smaller the loss of the predicted value. For those categories with obvious angle information, angle classification is easier. Yang _et al_. demonstrate that Gaussian-based functions work best, and 90-degree CSL-based methods are generally inferior to 180-degree CSL-based methods.
The CSL-based method has clear advantages for categories with distinct angle features, and it can be easily combined with the five-parameter rotating object detection method. Although Yang _et al._ achieved great success with this approach, their verification is based on RetinaNet. Meanwhile, with continuous improvement, YOLOv5's accuracy on small targets has increased further, but its detection formulation prevents it from being applied directly to aerial images. Based on the above analysis, we add an angle classification loss to the YOLOv5 algorithm so that it can detect objects in arbitrary directions in aerial images. To train the model, we use the minimum enclosing rectangle function in OpenCV to convert the data labels into the normalized center, width, and height form required for YOLOv5 training, while generating an angle in the 180-degree range for each object. It is worth noting that the window radius should be moderate: if it is too small, the angle information cannot be learned; if it is too large, the deviation increases.
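The label conversion can be sketched as follows; note that OpenCV's minAreaRect angle convention changed between versions, so the correction term below may need adjusting, and normalization and class fields are omitted:

```python
import cv2
import numpy as np

def poly_to_longside(poly):
    """Convert an 8-value polygon label to the 180-degree long-side form.

    Returns (cx, cy, w, h, theta) with w the long side and theta in
    [0, 180), measured between the long side and the x-axis.
    """
    pts = np.array(poly, dtype=np.float32).reshape(4, 2)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)  # 90-degree convention
    if w < h:  # force the first dimension to be the long side
        w, h = h, w
        angle += 90.0
    return cx, cy, w, h, angle % 180.0
```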
**4. Add attention mechanism MHSA**
Our ultimate goal is fine-grained rotating target detection, and the above improvements only solve the rotation detection problem. We therefore also need a solution for fine-grained target classification, whose main difficulty lies in the great similarity between categories. We find that enhancing the expressiveness of the model helps it distinguish these similarities. In recent years, attention mechanism modules have significantly improved detection baselines by fusing features from different convolution kernels while reducing the number of parameters and computation time. To this end, we add a multi-head self-attention (MHSA) layer to the last three bottleneck blocks of the residual structure of YOLOv5's backbone. This approach is not our own invention; the BoTNet network structure has been successfully applied to object detection in this way. The hybrid structure aggregates the respective advantages of CNNs and self-attention, using convolution for spatial downsampling and applying self-attention at lower resolutions, which allows large images to be processed efficiently. The design of MHSA is simple, as shown in Fig.4.
Since our ultimate goal is to improve YOLOv5 into a high-performance rotation detector and the dataset contains high-resolution images, we only apply self-attention to the lowest-resolution feature maps to limit the added complexity. Moreover, BoTNet trained at resolution 1024 significantly outperforms ResNet at 1280. This is very beneficial for training on large images with a single GPU, since the 1024 size can be used instead of 1280 to reduce memory usage.
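A simplified sketch of such a block is given below; for brevity it omits the relative position encodings that BoTNet adds to the attention logits:

```python
import torch
import torch.nn as nn

class MHSA2d(nn.Module):
    """Multi-head self-attention over a low-resolution feature map."""
    def __init__(self, dim, heads=4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, 1, bias=False)  # q, k, v in one go
        self.proj = nn.Conv2d(dim, dim, 1, bias=False)

    def forward(self, x):
        n, c, h, w = x.shape
        q, k, v = (self.qkv(x)
                   .reshape(n, 3, self.heads, c // self.heads, h * w)
                   .unbind(dim=1))                     # each: (n, heads, d, hw)
        attn = (q.transpose(-2, -1) @ k * self.scale).softmax(dim=-1)
        out = (v @ attn.transpose(-2, -1)).reshape(n, c, h, w)
        return self.proj(out)
```

Placed only in the last bottlenecks, the attention operates on a 25x25 grid for an 800x800 input at stride 32, which keeps its quadratic cost over positions manageable.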
**5. Replace activation function with Acon**
The smooth activation function allows better information to penetrate deeper into the neural network, resulting in better accuracy and generalization. The activation function used by YOLOv5 is ReLU, which is essentially a max function with the following formula:
\\[ReLU(x)=\\max(0,x) \\tag{6}\\]
The smooth, differentiable variant of the max function, which we call smooth maximum, has the following equation:
Figure 4: MHSA structure principle.
\\[S_{\\beta}(x_{1},\\dots,x_{n})=\\frac{\\sum\
olimits_{i=1}^{n}x_{i}e^{\\beta\\times x _{i}}}{\\sum\
olimits_{i=1}^{n}e^{\\beta\\times x_{i}}} \\tag{7}\\]
where \\(\\beta\\) is a smoothing factor, when approaches infinity, smooth maximum becomes a standard max function, and when \\(\\beta\\) is 0, it is an arithmetic average operation. Considering the ReLU in smooth form, substituting \\(S_{\\beta}(0,x)\\) into the formula, we get \\(S_{\\beta}(0,x)\\!=\\!x\\!\\times\\!\\sigma(\\beta\\!\\times\\!x)\\), \\(\\sigma\\) represents the sigmoid function, and this result is the swish activation function. So the swish activation function is a smooth approximation of the ReLU function. Of course, the above smooth form can also be extended to the Leaky-ReLU. The variant form of Leaky-ReLU is:
\\[ACON\\!-\\!B(x)\\!=\\!S_{\\beta}(x,px)\\!=\\!(1\\!-\\!p)x\\sigma(\\beta(1-p)x)+px \\tag{8}\\]
Since ACON-C is the most widely used form, its expression is as follows:
\\[ACON-C(x) =S_{\\beta}(p_{1}x,p_{2}x)\\] \\[=(p_{1}-p_{2})x\\sigma[\\beta(p_{1}-p_{2})x]+p_{2}x \\tag{9}\\]
Fig.5 shows the curves of the three activation functions above. We use the ACON-C form as the replacement in the backbone of YOLOv5. In the code implementation, \(p_{1}\) and \(p_{2}\) are two learnable parameters that are adjusted adaptively.
Since the ACON family controls whether to activate a neuron through the value of \(\beta\) (the neuron is not activated when \(\beta\) is 0), ACON requires an adaptive function to compute \(\beta\). First, the \(H\) and \(W\) dimensions are averaged, and the result is passed through two convolutional layers, so that all pixels in each channel share one weight. The formula is as follows:
\\[\\beta_{c}=\\sigma W_{1}W_{2}\\sum_{h=1}^{H}\\sum_{w=1}^{W}x_{c,h,w} \\tag{10}\\]
At the same time, to save parameters, a scaling ratio \(r\) is applied between \(W_{1}(C,C/r)\) and \(W_{2}(C/r,C)\) and is set to 16. Although this adaptive design adds a certain number of parameters, it brings definite improvements for both large and small networks.
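Putting Eqs.(9) and (10) together, an ACON-C module with the adaptive \(\beta\) can be sketched as follows; this paraphrases the public ACON design and is not necessarily the exact patch applied to YOLOv5 here:

```python
import torch
import torch.nn as nn

class MetaAconC(nn.Module):
    """ACON-C: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x.

    beta is computed per channel from globally pooled features through two
    1x1 convolutions with channel reduction r = 16, following Eq.(10).
    """
    def __init__(self, c, r=16):
        super().__init__()
        cr = max(r, c // r)  # hidden width of the beta branch
        self.fc1 = nn.Conv2d(c, cr, 1)
        self.fc2 = nn.Conv2d(cr, c, 1)
        self.p1 = nn.Parameter(torch.randn(1, c, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, c, 1, 1))

    def forward(self, x):
        # Eq.(10): average over H and W, two convs, then a sigmoid gate.
        beta = torch.sigmoid(self.fc2(self.fc1(x.mean((2, 3), keepdim=True))))
        dpx = (self.p1 - self.p2) * x
        return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
```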
**6. Other tricks to improve performance**
The small scale of remote sensing image data makes it difficult for datasets to support deep learning-based methods in terms of sample size and real-world complexity. Therefore, we enlarge the scale and generality of the dataset mainly through data augmentation. YOLOv5's mosaic augmentation greatly enriches the training data by randomly selecting four images, scaling them randomly, and stitching them together in a random layout. Since remote sensing images need to be cropped, the uneven distribution of objects is aggravated: the proportion of small objects is very low, so small objects are absent most of the time during training and the gradient direction is dominated by large objects. The stitching of mosaic can greatly alleviate the uneven object distribution in the FAIR1M dataset, and owing to its randomness, the improvement becomes more obvious as training proceeds. In addition, mosaic augmentation processes the data of four images at once during training, achieving better performance with a single GPU, which eases the pain point of high GPU resource demand.
There is a serious imbalance in the number of samples per category in the FAIR1M dataset used in our experiments, as shown in Fig.6. Therefore, to make the trained model more generalizable, we need some means to alleviate this imbalance. YOLOv5 alleviates inter-class imbalance with an adaptive image sampling strategy: a sampling weight is computed for each image according to the number of object categories and the frequency of each category in the image, and indices are then drawn according to the list of sampling weights. However, this approach is not very efficient, so we instead use category filtering to keep the number of samples per category roughly balanced. It should be noted that, since our improved YOLOv5 rotation detector treats the angle as a classification task, this scheme circumvents the boundary problem of regression but introduces an inter-class imbalance problem into the classification task. Although CSL can learn angular distance information, its mitigation effect is very limited, and the setting of the window radius has a large impact.
## 4 Experiments
Dataset: To demonstrate the effectiveness of our
Figure 5: Curves of the ACON and Leaky-ReLU functions.
improved YOLOv5 network for fine-grained object detection in aerial remote sensing images, we conducted extensive experiments on FAIR1M. It is a large-scale fine-grained object recognition dataset for evaluating the detection of oriented fine-grained objects in aerial imagery, containing over 15,000 images and 1 million instances with image sizes ranging from \(600\times 800\) to \(10\,000\times 10\,000\). All images of FAIR1M are annotated with oriented bounding boxes given by four vertices \(\{(x_{i},y_{i}),i=1,2,3,4\}\), covering 37 subcategories of 5 categories with different scales, shapes, and orientations. The coarse-grained airplane category consists of 11 specific types collected from 34 airports worldwide, including COMAC ARJ21, COMAC C919, Airbus A320, Airbus A220, Airbus A330, Airbus A350, Boeing 737, Boeing 747, Boeing 777, Boeing 787, and other-airplane. In addition, four broad categories (vehicles, ships, courts and roads) are included, all of which are divided into fine-grained types based on their specific functions. The remaining categories can be viewed in Fig.6 and are not described in detail here. The whole dataset is divided into test, validation and training sets with proportions of 1/3, 1/6 and 1/2, respectively.
Evaluation is based on a quantitative comparison between the predictions and the ground truth of the dataset. In this paper, we use the mean average precision (mAP) as the accuracy evaluation metric to facilitate a fair comparison with the original YOLOv5 network; the recognition accuracy of three other categories is not included in the total score. For a given ground-truth box and a generated prediction box, TP, FP, and FN are determined at an intersection over union (IoU) threshold of 0.5, and precision and recall are then obtained from the following formulas. The per-category AP is computed following the Pascal VOC 2012 procedure, and the final mAP is their mean. Note that the IoU threshold can be raised to 0.9 or 0.95 for comparison, to highlight our advantage in high-accuracy localization.
\\[\\text{Precision}=\\frac{TP}{TP+FP},\\quad\\text{Recall}=\\frac{TP}{TP+FN} \\tag{11}\\]
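For reference, the Pascal VOC 2012 all-point interpolated AP used for each category can be computed as in the sketch below, given the cumulative precision and recall curves obtained after sorting detections by confidence and matching them at IoU >= 0.5:

```python
import numpy as np

def voc2012_ap(recall, precision):
    """All-point interpolated AP as defined by Pascal VOC 2012."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make the precision envelope monotonically decreasing, right to left.
    p = np.maximum.accumulate(p[::-1])[::-1]
    idx = np.where(r[1:] != r[:-1])[0]  # recall change points
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```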
Throughout the experiments, we use the full training and validation sets to train the improved detector, while the unannotated test set is used for evaluation to keep the experiments fair. Some necessary preprocessing is performed before training. First, some images in the FAIR1M dataset are too large to be fed directly into the detection network for training and inference. Second, since a large image contains many small objects, directly compressing it by default would further shrink the small objects, whose features might even vanish in deep layers. We therefore crop the images into \(800\times 800\) patches with a stride of 256 so that the objects are not distorted. Then, to keep the categories as balanced as possible, we use a self-designed adaptive function to discard some samples of overrepresented categories. Finally, we convert the 8-point labels into the YOLOv5 training label format based on the 180-degree long-side definition to train our model.
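The overlapping crop positions along one image axis can be generated as in the following sketch; images smaller than the crop size would additionally need padding, which is omitted here:

```python
def tile_positions(size, crop=800, stride=256):
    """Top-left offsets of 800x800 crops taken with a stride of 256.

    Adjacent tiles overlap by crop - stride = 544 px, so an object cut at
    one tile boundary stays intact in a neighboring tile; the final tile
    is shifted back so it stays inside the image.
    """
    xs = list(range(0, max(size - crop, 0) + 1, stride))
    if xs[-1] + crop < size:
        xs.append(size - crop)
    return xs

print(tile_positions(10000)[:4])  # [0, 256, 512, 768]
print(tile_positions(10000)[-1])  # 9200
```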
Figure 6: FAIR1M dataset instance distribution.
In the training phase, we use yolov5m.yaml as the backbone feature extraction network with an input image size of \(800\times 800\); the total number of training epochs is 350 and the batch size is 16. The initial learning rate of the SGD optimizer is 0.01, the momentum and weight decay are 0.937 and 0.0005, respectively, and the other hyperparameters keep their default settings. Furthermore, due to the added angle loss function, we introduce two hyperparameters, the angle loss gain (angle) and the angle cross-entropy loss weight (angle_pw), set to 0.8 and 1.0, respectively. To avoid overfitting during training, we mainly use two data augmentation methods: horizontal flipping and mosaic. By default, all experiments use a single 2080Ti GPU for training and inference, with NMS included in post-processing. It is worth noting that the ablation studies simply compare the accuracy of the models on the test set.
**1. Exploring the window radius**
In this paper, our core improvement is to use CSL to learn the angle information of fine-grained objects in remote sensing images. In practice, the radius of the window function in the CSL formulation greatly influences the learning of angle information. To obtain the most suitable window radius, we conducted experiments on the improved YOLOv5 network with radii of 0, 2, 4, 6, and 8. Table 1 shows the performance of the models trained with different radii on the test set. In addition, to ensure that other tricks do not affect the results, we pretrained the model with yolov5m.pt and turned off mosaic data augmentation and anchor adaptation in the hyperparameters. Fig.7 compares visualizations for different window function radii. When the window radius is 2, the detector learns arbitrary orientations and scales well; as the radius increases further, the learning effect of the detector gradually diminishes.
**2. Mosaic data augmentation and resolution**
To verify the effectiveness of mosaic data augmentation for small target detection, we trained with and without mosaic at the Gaussian radius that gave the best detection results. As shown in Table 2, the detection accuracy with mosaic data augmentation is one to two percentage points higher than without it, although training convergence takes slightly longer. As is well known, the higher the image resolution, the richer the information the model can learn and the higher the detection accuracy. To verify the effect of image size, we set the training input size to 640 and 800 and removed some images of overrepresented categories to keep the classes balanced; the experimental results are shown in Table 2. The detection result with a training size of 800 is better than with 640. In fact, setting the training size to 1280 would improve the detection effect further; due to the limitations of our experimental equipment, this size was not evaluated in this paper.
**3. The performance of the attention mechanism and activation function**
To verify our attention mechanism and activation function improvements to yolov5_csl, we conducted corresponding experiments. First, we compared the performance of the model with and without the MHSA at
| Based method | \(r=0\) | \(r=2\) | \(r=4\) | \(r=6\) | \(r=8\) |
|---|---|---|---|---|---|
| YOLOv5_v6.0 | 32.76 | 33.26 | 32.28 | 32.09 | 31.60 |

Table 1: Comparison of different window function radii (mAP, %).
| Tricks | Setting 1 | Setting 2 | Setting 3 |
|---|---|---|---|
| Mosaic | – | ✓ | ✓ |
| Input_size: 640 | ✓ | ✓ | – |
| Input_size: 800 | – | – | ✓ |
| mAP (%) | 36.8 | 37.3 | 37.9 |

Table 2: Effects of mosaic augmentation and different input sizes.
Figure 7: Visualization of detection results under different window function radii.
tention mechanism on FAIR1M. The experimental results are shown in Table 3: mAP is nearly one percentage point higher after adding the MHSA attention mechanism. Then, we compared the effect of the ReLU and ACON activation functions on detection accuracy, as shown in Table 4. It is worth noting that with the ACON activation function, the improvement in detection accuracy is not obvious, while the training convergence time increases.
To find the most suitable hyperparameters, we also carried out corresponding experiments. The first verifies the influence of batch size and learning rate on the training results. With the other parameter settings unchanged, we increased the batch size to 32 and doubled the learning rate. The test results of the trained model show that the increased batch size and learning rate improve the training speed slightly, but the effect is almost negligible. In addition, we evaluated the two network structures officially released by yolov5 one by one; the experimental results are shown in Table 5. They show that the test results improve steadily as the depth of the network increases. It is worth noting that although the detection accuracy of a larger model would further increase, it also requires sufficient memory to support the computation, and the training convergence time is further extended. Finally, we verified the effectiveness of focal loss for mitigating the fine-grained class imbalance, with the results also shown in Table 5. We found that adding focal loss actually reduces the detection accuracy.
## 4 Comparison with the state-of-the-art methods

Because our improvements are built on the yolov5_v6.0 version, we first compared the performance of the algorithm before and after the improvement on the FAIR1M dataset. The annotation for each instance of the FAIR1M dataset can be represented as \(\{(x_{i},y_{i}),\)\(i=1,2,3,4\}\), where \((x_{i},y_{i})\) are the vertex coordinates of the oriented rectangular bounding box. We therefore need to convert this label format to the traditional horizontal-box labeling method to train the original yolov5; a conversion sketch is given below. The experimental comparison is shown in Table 6, and the visual comparison results are shown in Fig.8.
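The conversion itself is straightforward; the sketch below maps one FAIR1M instance to a YOLO-style normalized horizontal box, where the output format and the function name are assumptions for illustration.

```python
import numpy as np

def obb_to_yolo_hbb(vertices, img_w, img_h):
    """Convert a FAIR1M oriented box {(x_i, y_i), i=1..4} to a YOLO-style
    horizontal box (x_center, y_center, w, h), normalized to [0, 1]."""
    pts = np.asarray(vertices, dtype=np.float32)        # shape (4, 2)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return ((x_min + x_max) / 2 / img_w, (y_min + y_max) / 2 / img_h,
            (x_max - x_min) / img_w, (y_max - y_min) / img_h)

# e.g. one instance annotated on a 1000 x 1000 image
print(obb_to_yolo_hbb([(100, 200), (180, 210), (170, 260), (90, 250)], 1000, 1000))
```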
\\begin{table}
\\begin{tabular}{|c|c|c|} \\hline Method & Backbone & mAP (\\%) \\\\ \\hline Yolov5\\_v6.0 & Yolov5m & 34.7 \\\\ \\hline Yolov5\\_csl(ours) & Yolov5m & 37.8 \\\\ \\hline \\end{tabular}
\\end{table}
Table 6: Comparison of before and after improvement
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Method & Backbone & MHSA & mAP (\\%) \\\\ \\hline Yolov5\\_csl & Yolov5m & – & 37.8 \\\\ \\hline Yolov5\\_csl & Yolov5m & ✓ & 38.5 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Attention module performance
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline Method & Backbone & Batchsize & lr & Focal loss & mAP (\\%) \\\\ \\hline – & Yolov5m & 16 & 0.01 & – & 37.6 \\\\ \\hline – & – & 32 & 0.02 & – & 37.8 \\\\ \\hline Yolov5\\_csl & – & 16 & 0.01 & ✓ & 36.4 \\\\ \\hline – & Yolov5l & 16 & 0.01 & – & 39.7 \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: Network structure and loss function performance
Figure 8: Comparison of detection effect of two algorithms before and after improvement.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Method & Backbone & MHSA & mAP (\\%) \\\\ \\hline Yolov5\\_csl & Yolov5m & – & 37.8 \\\\ \\hline Yolov5\\_csl & Yolov5m & ✓ & 38.5 \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Activation function detection results

To further investigate the performance of our improved detector, we compare the improved algorithm with other state-of-the-art (SOTA) methods on the FAIR1M and DOTA datasets. Extensive experiments demonstrate that although our yolov5_csl method is only a classification-based improvement, it exhibits surprising performance. As shown in Table 7, the yolov5_csl-based method achieves competitive performance of 39.20% and 72.68% mAP on the FAIR1M and DOTA datasets, respectively. Our method thus successfully carries the yolov5 framework over to remote sensing images, and its detection accuracy and speed are also competitive in this field. Fig.9 shows the visual results.
## 5 Discussion
In this paper, we use the simplest rotation detection method combined with the industry-popular YOLOv5 algorithm to detect fine-grained objects in high-resolution aerial images. This simple and effective approach enables YOLOv5 to be applied not only to natural images but also to aerial remote sensing images, which greatly expands the application range of the YOLOv5 algorithm. Although our method is very simple and hardly increases the number of parameters of the model, it has some limitations. For example, there is still a gap between our improved YOLOv5 algorithm and the SOTA rotation detection algorithms, caused in part by the equipment and data conditions available to us. Compared with other methods, however, our improved algorithm has lower hardware requirements, a faster convergence speed, and a favorable cost-effectiveness. Through the analysis of some key parameters, we also verified the optimal range of parameter combinations, which helps others reproduce our results faster and make better improvements.
## 6 Conclusions
As a heuristic work on fine-grained target detection in aerial images, this paper provides a useful reference. The angle-classification modification enables fine-grained recognition of remote sensing objects in dense and arbitrary orientations without changing the structure of the YOLOv5 network. Although the detection performance is not yet high, the approach is informative for future work. The proposed yolov5_csl algorithm can be effectively applied to remote sensing images to meet the needs of practical tasks.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Method & Backbone & Dataset & mAP (\\%) \\\\ \\hline Yolov5\\_cl & Yolov5m & FAIR1M & 39.20 \\\\ \\hline Yolov5\\_cl & Yolov5m & DOTA & 72.68 \\\\ \\hline S\\({}^{2}\\)anet & R50\\_FPN & DOTA & 74.12 \\\\ \\hline ReDet & R50\\_FPN & DOTA & 76.25 \\\\ \\hline \\end{tabular}
\\end{table}
Table 7: Comparison with other methods
Figure 9: Sample object detection results of our improved yolov5_csl algorithm on FAIR1M.
Moreover, this simple improvement can be applied to many advanced detection algorithms, extending their application range with good accuracy. Our model achieved mAP values of 39.2% and 72.68% on the FAIR1M and DOTA datasets, respectively. At present, the main problems of this detection method are low classification confidence and the false and missed detection of small objects, which are the main factors limiting the accuracy of the yolov5_csl algorithm on the FAIR1M dataset. In the future, we may consider using the latest transformer as our backbone and solving the class imbalance in the dataset to improve the competitiveness of our algorithm.
ZHANG Rui was born in Harbin, China, in 1970. In 2006, she graduated from Harbin University of Science and Technology, majoring in measurement technology and instruments, and received a Ph.D. degree. In 2011, she completed postdoctoral research at Harbin Institute of Technology. She has long been engaged in research on power quality monitoring, signal processing, target detection, and machine learning. (Email: [email protected])

XIE Cong was born in Sichuan Province, China, in 1996. He is an M.S. degree candidate in electronic information engineering at Harbin University of Science and Technology. His research interests include digital image processing and deep learning. (Email: [email protected])

DENG Liwei (corresponding author) was born in 1983. He received the M.S. degree from Harbin University of Science and Technology, Harbin, China, in 2010, and the Ph.D. degree from Harbin Institute of Technology, Harbin, China, in 2014. He is currently an Associate Professor with Harbin University of Science and Technology, Harbin, China. His research interests include control science and engineering, fractional-order systems, digital image processing, and deep learning algorithms. (Email: [email protected])
# A Cloud Detection Algorithm for Remote Sensing Images Using Fully Convolutional Neural Networks
Sorour Mohajerani, Thomas A. Krammer, Parvaneh Saeedi
School of Engineering Science
Simon Fraser University, Burnaby, BC, Canada
Email: {smohajer,tkrammer,psaeedi}@sfu.ca
## I Introduction
Creating an accurate measure of cloud cover is a crucial step in the collection of satellite imagery. The presence of cloud and its coverage level in an image could affect the integrity and the value of that image in most remote sensing applications that rely on optical satellite imagery. Moreover, transmission and storage of images with high cloud coverage seem to be unnecessary and perhaps even wasteful. Therefore, accurate identification of cloud regions in satellite images is an active subject of research. Since clouds share similar reflection characteristics with some other ground objects/surfaces such as snow, ice, and white man-made objects, identification of the cloud and its separation from non-cloud regions is a challenging task. The existence of additional data such as multispectral bands could assist a more accurate cloud identification process by utilizing temperature and water content information that are provided through additional bands. The difficulty in automation of cloud segmentation becomes more significant when access to spectral bands is limited to Red, Green, Blue, and Near-infrared (Nir) only. Such limitation exists in the data of many satellites such as HJ-1 and GF-2, as they do not provide more spectral band data.
In recent years, many cloud detection algorithms have been developed. These methods can be divided into three main categories: threshold-based approaches [1, 2], handcrafted approaches [3, 4], and deep-learning-based approaches [5].
Function of Mask (FMask) [1] and Automated Cloud-Cover Assessment (ACCA) [2] are among the most widely known and reliable threshold-based algorithms for cloud identification. They use a decision tree to label each pixel as cloud or non-cloud. In each branch of the tree, the decision is made based on the result of a thresholding function that utilizes one or more spectral bands. Haze Optimized Transformation (HOT) [3], from the group of handcrafted methods, isolates haze and thick clouds from other pixels using the relationship between the spectral responses of two visible bands. As another handcrafted approach, [4] incorporates an object-based Support Vector Machine (SVM) classifier to separate clouds from non-cloud regions using local cloud patterns. With the recent advances in deep-learning algorithms for image segmentation, several methods have been developed for cloud detection using deep learning. Xie et al. [5] trained a convolutional neural network (CNN) on multiple small patches. This network classified each image patch into three classes of thin cloud, thick cloud, and non-cloud,
Fig. 1: An example of errors in the default ground truths of Landsat 8 images: (a) true-color image, (b) default ground truth for clouds, (c) icy/snowy regions, which are erroneously labeled as cloud, highlighted in red, (d) corrected ground truth using the snow/ice removal framework.
and, as the output, produced a probability map for each class. A major problem for deep-learning-based cloud detection is the lack of accurately annotated ground truth. Most default ground truths, obtained through automatic/semi-automatic approaches, are not accurate enough; for instance, they label icy or snowy areas as clouds. Such erroneous ground truth limits their use for training new deep-learning-based systems. Fig. 1 illustrates an example of these errors in a default ground truth.
Although the above-mentioned methods have shown reasonably good results for scenes containing thick clouds, they cannot deliver robust and accurate results in scenes where snow is present alongside the cloud.
Here, we propose a new method based on both thresholding and deep learning to identify cloud regions and separate them from icy/snowy ones in multi-spectral Landsat 8 images. Our threshold-based method utilizes Band 2 of Landsat 8 and the image gradient to detect regions of snow. We augment the existing Landsat 8 ground truth images by first identifying the icy/snowy regions and then removing them from the ground truth data used for training our deep-learning system. Our proposed deep-learning system is a Fully Convolutional Neural Network (FCN) that is trained using cropped patches of the training set images. The weights of the trained network are used to detect cloud pixels in an end-to-end manner. Unlike FMask and ACCA, this approach is not blind to the existing global and local cloud contexts in the image. In addition, since only four spectral bands--Red, Green, Blue, and Nir (RGBNir, Bands 2 to 5)--are required for training and prediction, this architecture can easily be utilized for the detection of clouds in images obtained from many other satellites as well as airborne systems.
## II Proposed Method
### _Landsat 8 Images_
Landsat 8 multi-spectral data consist of nine spectral bands collected by the Operational Land Imager (OLI) sensor and two thermal bands obtained by the Thermal Infrared Sensor (TIRS), each measuring a different range of wavelengths. Table I summarizes the specification of these bands. In this paper, we only use four spectral bands, Band 2 to Band 5. There is also a Quality Assessment (QA) band, which is generated by the Landsat 8 Cloud Cover Assessment (CCA) system and the FMask algorithm [6]. The default cloud/snow ground truths of an image can be extracted from the QA band.
\\begin{table}
\\begin{tabular}{|c|c|} \\hline
**Spectral Bands** & **Wavelength (um)** \\\\ \\hline Band 1 - Ultra Blue & 0.435 - 0.451 \\\\ \\hline Band 2 - Blue & 0.452 - 0.512 \\\\ \\hline Band 3 - Green & 0.533 - 0.590 \\\\ \\hline Band 4 - Red & 0.636 - 0.673 \\\\ \\hline Band 5 - Near Infrared (Nir) & 0.851 - 0.879 \\\\ \\hline Band 6 - Shortwave Infrared 1 & 1.566 - 1.651 \\\\ \\hline Band 7 - Shortwave Infrared 2 & 2.107 - 2.294 \\\\ \\hline Band 8 - Panchromatic & 0.503 - 0.676 \\\\ \\hline Band 9 - Cirrus & 1.363 - 1.384 \\\\ \\hline Band 10 - Thermal Infrared (TIRS) 1 & 10.60 - 11.19 \\\\ \\hline Band 11 - Thermal Infrared (TIRS) 2 & 11.50 - 12.51 \\\\ \\hline \\end{tabular}
\\end{table} TABLE I: Landsat 8 Spectral Bands.
Fig. 2: The proposed network for detecting clouds. The depth of the feature map in the encoding path is increased from 4 (the input channels are RGBNir) to 1024. This depth is then decreased from 1024 to 1 (a gray-scale probability map) in the decoding path. Meanwhile, the spatial size of the feature map is reduced from \(192\times 192\) to \(6\times 6\) in the encoding path and then increased back to \(192\times 192\) in the decoding path. The copy layer between the Encode \(i\) block and the Decode \(j\) block concatenates the output of the second convolution layer in the Encode \(i\) block to the output of the transposed convolution layer in the Decode \(j\) block.
### _Snow/Ice Removal Framework_
To augment/correct the cloud ground truths of the Landsat 8 training data, we first apply a snow/ice removal approach. Each Landsat 8 spectral band image is first divided into three distinct regions (snow, cloud, and clear) using the information provided with the Landsat 8 QA band. The gradient magnitude of each pixel is then obtained. Once calculated, the average image gradient magnitude for each of the snow, cloud, and clear regions is determined. Comparing these averages across the four spectral bands reveals a considerable difference between the snow region and the rest of the image. Since Band 2 exhibited the greatest proportional difference between the average gradient magnitude of the snow region and the rest of the image, we utilize this band in the snow/ice removal framework. After computing the image gradient of Band 2, a global threshold is applied to isolate pixels with greater gradient values and produce a binary snow mask. By removing the detected snow regions from the default cloud ground truth extracted from the Landsat 8 QA band, a corrected and more accurate binary cloud mask is obtained. Fig. 1(d) illustrates a ground truth image corrected with this snow/ice removal framework.
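A minimal sketch of this correction step is shown below; the finite-difference gradient and the concrete threshold value are illustrative assumptions, since only a global threshold on the Band 2 gradient magnitude is prescribed above.

```python
import numpy as np

def correct_cloud_mask(band2, default_cloud, grad_threshold=0.05):
    """Remove snow/ice pixels from the default QA cloud mask using the
    gradient magnitude of Band 2 (reflectance assumed scaled to [0, 1])."""
    gy, gx = np.gradient(band2.astype(np.float32))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    snow_mask = grad_mag > grad_threshold       # snow shows larger gradients
    return np.logical_and(default_cloud.astype(bool), ~snow_mask)
```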
### _Cloud Detection Framework_
Once the ground truths are corrected, we utilize them in a deep-learning framework to identify cloud pixels in an image. In FCNs, the spatial size of the output image is the same as that of the input image. This characteristic allows this type of CNN to be used in pixel-wise labeling tasks such as image segmentation. The proposed CNN in this paper has an FCN architecture inspired by U-Net [7]. U-Net was introduced to segment specific regions in Electron Microscopy (EM) stack images and is widely used in many other computer vision applications [8, 9]. It consists of a fully convolutional encoder (contracting) path connected to a fully convolutional decoder (expanding) path, with skip connections attaching the encoding blocks in the contracting path to the analogous decoding blocks in the expanding path.
The block diagram of the proposed network is shown in Fig. 2. It has six encoding and five decoding blocks. Each of these blocks contains two convolution layers to extract the semantic features of the image. In a convolution layer, a \(3\times 3\) kernel is convolved with the input of the layer, and a Rectified Linear Unit (ReLU) [10] is then applied to generate the output. In the encoding path, the output of a convolution layer is followed by a max-pooling layer to reduce the spatial size of the feature map. In the decoding path, the spatial size of the feature map is gradually increased back to the original input size of the network using transposed convolution layers in the decoding blocks. Image features obtained from an encoding block are utilized in the analogous decoding block through a copy layer. By applying repetitive encoding and decoding blocks, low-level features of the image at the very first layers of the network evolve into high-level semantic contexts in the output probability map of the network.
The spatial dimension of the input images to the proposed network is \(192\times 192\times 4\) pixels. Since each spectral band of Landsat 8 is very large--on the order of \(8000\times 8000\) pixels--we cut them into smaller image patches. Each spectral band image is therefore cropped into \(384\times 384\) non-overlapping patches, which are resized to \(192\times 192\) before training. Then the four patches corresponding to the Red, Green, Blue, and Nir bands are stacked on top of each other to create a four-channel input that is fed to the network. To reduce the vulnerability of the approach to misleading patterns resembling clouds, we augment the input patches with geometric transformations such as horizontal flipping, rotation, and zooming. In the very last convolution layer of the network, a \(sigmoid\) activation function is utilized to produce the output probability map. The following soft _Jaccard_ loss function [11, 12] is implemented to optimize the network through the Adam gradient descent [13] approach:
\\[L(h,y)\\!=\\!-\\frac{\\sum\\limits_{i=1}^{n}h_{i}y_{i}+\\epsilon}{\\sum\\limits_{i=1}^{ n}h_{i}+\\sum\\limits_{i=1}^{n}y_{i}-\\sum\\limits_{i=1}^{n}h_{i}y_{i}+\\epsilon}, \\tag{1}\\]
Here, \\(h\\) is the ground truth and \\(y\\) is the probability map that is obtained from output of the _sigmoid_ function in the last layer of the network. \\(n\\) is the total number of pixels in the ground truth. \\(y_{i}\\) and \\(h_{i}\\) are the \\(i\\)th pixel of \\(y\\) and \\(h\\). \\(\\epsilon\\) is a small real number to avoid division by zero. The learning process is started from the weights that are constructed from a uniform random distribution between \\([-1,1]\\). We set the initial learning rate of the training as \\(10^{-4}\\).
The training process is run for 600 epochs, after which the network has converged to an appropriate local minimum. The obtained weights are then utilized for prediction. Before prediction, non-overlapping \(384\times 384\) patches are extracted from each of the four spectral bands of the given test image. These patches are resized to \(192\times 192\) and stacked together. Once the cloud features corresponding to each patch are obtained, the output cloud probability map is resized back to \(384\times 384\) pixels. The resized patches are then stitched together to create a cloud probability map for the entire image, and a simple threshold yields the binary cloud mask of the input image.
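The patch preparation shared by training and prediction can be sketched as follows; the nearest-neighbour decimation standing in for the \(384\rightarrow 192\) resize and the function name are illustrative simplifications.

```python
import numpy as np

def make_input_patches(bands, patch=384, out=192):
    """Cut the four spectral bands (R, G, B, Nir) into non-overlapping
    patch x patch tiles, downscale each tile to out x out, and stack the
    bands into inputs of shape (N, out, out, 4)."""
    stacked = np.stack(bands, axis=-1)                 # (H, W, 4)
    h, w, _ = stacked.shape
    step = patch // out                                # 384 -> 192 decimation
    tiles = [stacked[i:i + patch, j:j + patch][::step, ::step]
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    return np.asarray(tiles)

# e.g. four synthetic 768 x 768 band images -> 4 patches of 192 x 192 x 4
bands = [np.random.rand(768, 768).astype(np.float32) for _ in range(4)]
print(make_input_patches(bands).shape)                 # (4, 192, 192, 4)
```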
## III Experimental Results
### _Dataset_
We have created a new dataset for cloud detection purposes. This dataset includes a training set and a test set. The training set contains 4600 patches cut from 18 Landsat 8 images, each with 4 bands. It also includes ground truth patches (extracted from the Landsat 8 QA band), which we refer to as the default ground truth. On the test side, the test set holds 5100 patches (obtained from 20 images different from those in the training set), also with 4 bands.
Along with these image patches, we provide manually created cloud ground truths.
Before training our system, we applied the proposed threshold-based snow/ice removal method (Section II-B) to automatically identify the snowy/icy regions and remove them from the default ground truths. We then trained our system twice, once using the corrected ground truths and once using the default ground truths. In both cases we ran the system on the test set and compared the outputs with the manually created ground truths to highlight the improvement. It is important to mention that both the training and test images are selected to cover many scene elements, such as vegetation, bare soil, buildings, urban areas, water, snow, ice, haze, and different types of cloud patterns, and the average percentage of cloud coverage in both sets is around 50%. This dataset is publicly available to the community by request.
### _Evaluation Metrics_
The performance of the proposed algorithm is determined by evaluating the overall accuracy, recall, precision, and _Jaccard_ index for the masks it produces. These measures are defined as follows:
\\[\\begin{split}\\text{Jaccard Index}=\\frac{TP}{TP+FN+FP},\\\\ \\text{Precision}=\\frac{TP}{TP+FP},\\\\ \\text{Recall}=\\frac{TP}{TP+FN},\\\\ \\text{Overall Accuracy}=\\frac{TP+TN}{TP+TN+FP+FN},\\end{split} \\tag{2}\\]
where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. The _Jaccard_ index relates both recall and precision and is a measure of the similarity between two sets.
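For completeness, a direct implementation of the measures in (2) is sketched below; it assumes non-degenerate masks so that no denominator is zero.

```python
import numpy as np

def cloud_metrics(pred, gt):
    """Evaluate a predicted binary cloud mask against the ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {"jaccard":   tp / (tp + fn + fp),
            "precision": tp / (tp + fp),
            "recall":    tp / (tp + fn),
            "accuracy":  (tp + tn) / (tp + tn + fp + fn)}
```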
### _Numerical and Visual Results_
Table II presents the experimental results of the proposed method on our test set. As shown, the _Jaccard_ index of the cloud masks obtained with the augmented ground truths is improved by 4.36%, which highlights the effectiveness of the snow/ice removal framework in our proposed method. The recall measure is also increased by 3.62%. This measure indicates that the number of cloud pixels labeled
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline
**Method** & **Jaccard** & **Precision** & **Recall** & **Overall Accuracy** \\\\ \\hline \\hline FCN without snow/ice correction & 62.63 & 72.59 & 79.39 & 87.81 \\\\ \\hline FCN with snow/ice correction & **65.36** & **73.54** & **82.26** & **88.30** \\\\ \\hline Improvement Percentage & 4.36 & 1.30 & 3.62 & 0.56 \\\\ \\hline \\end{tabular}
\\end{table} TABLE II: System performance measures (in %).
Fig. 3: Examples of the cloud masks obtained by the proposed method: (a),(e) True-color input images, (b), (f) Manual ground truths, (c), (g) Predicted cloud mask without snow/ice correction, (d), (h) Predicted cloud masks with snow/ice correction.
correctly as cloud is increased. Some visual examples of the predicted cloud masks from sample images of our test set are displayed in Fig. 3.
## IV Conclusion
In this paper, a deep-learning-based approach is proposed to detect cloud pixels in Landsat 8 images using only the four RGBNir spectral bands. Our pixel-level segmentation framework extracts the semantic local and global features of the clouds in an image with high accuracy. This framework can be utilized for other segmentation tasks on remote sensing images from satellites or airborne sensors. We also introduce a novel cloud detection dataset with accurately annotated cloud pixels. In future work, we will focus on enlarging the network's field of view to capture more cloud context from the images.
## References
* [1] Z. Zhu, S. Wang, and C. E. Woodcock, "Improvement and expansion of the Fmask algorithm: cloud, cloud shadow, and snow detection for Landsats 4-7, 8, and Sentinel 2 images," _Remote Sensing of Environment_, vol. 159, pp. 269-277, 2015.
* [2] R. R. Irish, J. L. Barker, S. N. Goward, and T. Arvidson, "Characterization of the Landsat-7 ETM+ automated cloud-cover assessment (ACCA) algorithm," _Photogrammetric Engineering & Remote Sensing_, vol. 72, no. 10, pp. 1179-1188, Oct. 2006.
* [3] Y. Zhang, B. Guindon, and J. Cihlar, "An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images," _Remote Sensing of Environment_, vol. 82, no. 2-3, pp. 173-187, 2002.
* [4] Y. Yuan and X. Hu, \"Bag-of-words and object-based classification for cloud extraction from satellite imagery,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 8, no. 8, pp. 4197-4205, Aug 2015.
* [5] F. Xie, M. Shi, Z. Shi, J. Yin, and D. Zhao, \"Multilevel cloud detection in remote sensing images based on deep learning,\" _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, vol. 10, no. 8, pp. 3631-3640, Aug 2017.
* [6] Department of the Interior, U.S. Geological Survey, "Landsat 8 (L8) data users handbook." [Online]. Available: https://landsat.usgs.gov/documents/LandsatDataUsersHandbook.pdf
* [7] O. Ronneberger, P. Fischer, and T. Brox, \"U-net: Convolutional networks for biomedical image segmentation,\" _CoRR_, vol. abs/1505.04597, 2015.
* [8] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning dense volumetric segmentation from sparse annotation," in _Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016_, S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells, Eds. Cham: Springer International Publishing, 2016, pp. 424-432.
* [9] H. Wang, X. Yang, L. Ma, and R. Liang, "Fingerprint pore extraction using U-Net based fully convolutional network," in _Biometric Recognition_, J. Zhou, Y. Wang, Z. Sun, Y. Xu, L. Shen, J. Feng, S. Shan, Y. Qiao, Z. Guo, and S. Yu, Eds. Cham: Springer International Publishing, 2017, pp. 279-287.
* [10] V. Nair and G. E. Hinton, \"Rectified linear units improve restricted boltzmann machines,\" in _Proceedings of the 27th international conference on machine learning (ICML-10)_, 2010, pp. 807-814.
* [11] W. Waegeman, K. Dembczyński, A. Jachnik, W. Cheng, and E. Hüllermeier, "On the Bayes-optimality of F-measure maximizers," _J. Mach. Learn. Res._, vol. 15, no. 1, pp. 3333-3388, Jan. 2014.
* [12] Y. Yuan, M. Chao, and Y. C. Lo, \"Automatic skin lesion segmentation using deep fully convolutional networks with jaccard distance,\" _IEEE Transactions on Medical Imaging_, vol. 36, no. 9, pp. 1876-1886, Sept 2017.
* [13] D. P. Kingma and J. Ba, \"Adam: A method for stochastic optimization,\" _CoRR_, vol. abs/1412.6980, 2014. | This paper presents a deep-learning based framework for addressing the problem of accurate cloud detection in remote sensing images. This framework benefits from a Fully Convolutional Neural Network (FCN), which is capable of pixel-level labeling of cloud regions in a Landsat 8 image. Also, a gradient-based identification approach is proposed to identify and exclude regions of snow/ice in the ground truths of the training set. We show that using the hybrid of the two methods (threshold-based and deep-learning) improves the performance of the cloud identification process without the need to manually correct automatically generated ground truths. In average the _Jaccard_ index and recall measure are improved by 4.36% and 3.62%, respectively.
Cloud detection, remote sensing, Landsat 8, image segmentation, deep-learning, CNN, FCN, U-Net. | Condense the content of the following passage. | 172 |
# GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search
Wenbo Yu\\({}^{1}\\), Hao Fang\\({}^{2}\\), Bin Chen\\({}^{1}\\), Xiaohang Sui\\({}^{1}\\), Chuan Chen\\({}^{3}\\),
**Hao Wu\\({}^{2}\\), Shu-Tao Xia\\({}^{2}\\), Ke Xu\\({}^{4}\\)**
\\({}^{1}\\)School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen
\\({}^{2}\\)Tsinghua Shenzhen International Graduate School, Tsinghua University
\\({}^{3}\\)School of Computer Science and Engineering, Sun Yat-Sen University
\\({}^{4}\\)Department of Computer Science and Technology, Tsinghua University
{yuwenbo, suixiaohang}@stu.hit.edu.cn; {fang-h23, wu-h22}@mails.tsinghua.edu.cn;
[email protected]; [email protected]; [email protected];
[email protected]
The first two authors contributed equally to this work. Wenbo Yu performed this work while pre-admitted to Tsinghua Shenzhen International Graduate School. Corresponding author.
## 1 Introduction
Federated Learning (FL) [1; 2; 3] serves as an efficient collaborative learning framework where multiple participants cooperatively train a global model and only the computed gradients are exchanged. By adopting this distributed paradigm, FL systems fully leverage the huge amounts of data partitioned across various clients for enhanced model efficacy and tackle the separateness of data silos [4]. Moreover, since merely the gradients instead of the intimate data are uploaded to the server, the user privacy seems to be safely guaranteed as the private data is only available at the client side.
However, FL systems are actually not as reliable as expected. Extensive studies have discovered that even the conveyed gradients can disclose the sensitive information of users. Early works [5; 6; 7; 8] involve inferring the existence of certain samples in the dataset (i.e., Membership Inference Attacks [9]) or further revealing some properties of the private training set (i.e., Property Inference Attacks [10]) from the uploaded gradients. But unlike the above inference attacks that only partially reveal limited information of the private data, Gradient Inversion Attacks stand out as a more threatening privacy risk, as they can completely reconstruct the sensitive data by inverting the gradients.
Zhu _et al._[11] first formulate gradient inversion as an optimization problem and recover the intimate training images by minimizing the gradient matching loss (i.e., the distance between the dummy gradients and the real gradients) with image pixels regarded as trainable parameters. Ensuing works improve the attack performance on the basis of Zhu _et al._[11] by designing label extraction techniques [12], switching the distance metric and introducing regularization [13], or considering larger batch size settings [14], but still restrict their optimizations to the pixel space. To fill this gap, recent studies [15; 16; 17] propose to explore various search algorithms within Generative Adversarial Networks (GAN) [18; 19] to leverage the rich prior knowledge encoded in the pre-trained generative models. While incorporating these explicit priors indeed improves the attack performance, it is usually tough to pre-prepare such prerequisites in realistic FL scenarios where the data distribution at the client side is likely to be complex or unknown to the attackers. Therefore, Zhang _et al._[20] propose to employ an over-parameterized convolutional network for gradient inversion and directly optimize the excessive network parameters on a fixed input, which does not require any explicit prior information but still outperforms GAN-based attacks. The reason behind this is that the structure of a convolutional network naturally captures image statistics prior to any learning [21], and Zhang _et al._[20] leverage this characteristic as implicit prior knowledge. However, Zhang _et al._[20] only utilize a fixed network for all the attack settings, regardless of the specific batch to recover. An intuitive question naturally arises: _can we adaptively search the architecture of the over-parameterized network to better capture the implicit prior knowledge for each reconstructed batch?_
In Figure 1, we randomly select \\(5\\) over-parameterized networks with different architectures to attack \\(3\\) different batches. All the networks are optimized for the same number of iterations on ImageNet [22]. The results indicate that the PSNR performance varies significantly when changing the architectures. For the same batch, various architectures can hold remarkably different implicit priors on it. Besides, since the optimal models for Batch \\(1\\), Batch \\(2\\), and Batch \\(3\\) are respectively Model \\(3\\), Model \\(4\\), and Model \\(1\\), there exists no network that can consistently perform the best on all the given batches. Thus, the changeless adoption of architecture by Zhang _et al._[20] lacks optimality under dynamic scenarios and it is of great significance to adaptively select the most suitable architecture for each batch.
Inspired by the above phenomena, we propose a novel gradient inversion method, named **G**radient **I**nversion via **N**eural **A**rchitecture **S**earch (GI-NAS), to better match each batch with the optimal model architecture. Specifically, we first enlarge the potential search space for the over-parameterized network by designing different upsampling modules and skip connection patterns. To reduce the computational overhead, we utilize a training-free search strategy that compares the initial gradient matching loss over all the candidates for a given batch and selects the best of them for the final optimization. We further provide experimental evidence that such a metric highly correlates with the real performance. We also consider more rigorous and realistic scenarios where the victims may hold high-resolution images and large-sized batches for training, and evaluate advanced defense strategies.
Figure 1: Results when attacking \\(3\\) different batches by \\(5\\) different models on ImageNet. (a) shows the quantitative comparison. (b) presents the qualitative comparison, where the images of the first row, the second row, and the third row are respectively from Batch \\(1\\), Batch \\(2\\), and Batch \\(3\\).
Extensive experiments validate that GI-NAS can achieve state-of-the-art performance compared to existing gradient inversion methods. Our main contributions can be summarized as follows:
* We systematically analyze existing methods, emphasize the necessity of equipping image batch recovery with the optimal model structure, and propose GI-NAS to boost gradient inversion through neural architectural search.
* We expand the model search space by considering different upsampling units and skip connection modes, and utilize a training-free search that regards the initial gradient matching loss as the search metric. We also provide experimental evidence that such a metric highly correlates with the real performance.
* Numerous experimental results demonstrate that GI-NAS outperforms state-of-the-art gradient inversion methods, even under extreme settings with high-resolution images, large-sized batches, and advanced defense strategies. The ablation study further demonstrates the significance of our optimal architecture search.
## 2 Related Work
**Gradient Inversion Attacks and Defenses.** Zhu _et al._[11] first propose to restore the private samples via iterative optimization for pixel-level reconstruction, yet limited to low-resolution, single images. Geiping _et al._[13] empirically decompose the gradient vector into its norm magnitude and updating direction, and succeed on high-resolution ImageNet [22] through an angle-based loss design. Furthermore, Yin _et al._[14] extend the attacks to batch-level inversion through a group consistency regularization and an improved batch label inference algorithm [12]. With strong handcrafted explicit priors (e.g., fidelity regularization, BN statistics), they accurately realize batch-level reconstruction with detailed semantic features allocated to all the individual images in a batch. Subsequent studies [15; 16; 17] leverage pre-trained GAN models as generative priors [23; 24; 25] to enhance the attacks. Besides, Hatamizadeh _et al._[26] apply gradient inversion to ViT [27] and uncover its vulnerability through component-wise analysis. Current defenses focus on gradient perturbation to alleviate the impact of gradient inversion with degraded gradients [28; 29; 30]. Gaussian Noise [31] and Gradient Clipping [32] are common techniques in Differential Privacy (DP) that effectively constrain the attackers from learning through the released gradients. Gradient Sparsification [33; 34] prunes the gradients through a given threshold, and Soteria [35] edits the gradients from the perspective of learned representations. These defenses significantly reduce the information carried in the gradients and can pose a great challenge to gradient inversion attacks.
**Neural Architecture Search (NAS).** By automatically searching the optimal model architecture, NAS algorithms prove to be significantly effective in multiple visual tasks such as image restoration [36; 37], semantic segmentation [38; 39; 40], and image classification [41; 42; 43; 44]. In image classification tasks, Zoph _et al._[42; 44] regularize the search space to a convolutional cell and construct a better architecture stacked by these cells using an RNN controller. For semantic segmentation tasks, Liu _et al._[39] search for the optimal network on the level of hierarchical network architecture and extend NAS to dense image prediction. In terms of image restoration tasks, the HiNAS proposed by Zhang _et al._[45] firstly employs the gradient-based NAS on the denoising task. Suganuma _et al._[36] exploit a better Convolutional Autoencoders (CAE) with standard network components through the evolutionary search, while Chu _et al._[37] discover a competitive lightweight model for image super-resolution via both micro and macro architecture search.
Existing NAS algorithms adopt different search strategies to exploit the architecture search space, including evolutionary methods [46; 47; 48; 49; 50; 51; 52], Bayesian optimization [53; 54; 55], Reinforcement Learning (RL) [56; 57; 58; 42; 44], and gradient-based search [59; 60; 61; 62; 63]. Different RL-based methods vary in the way to represent the agent's policy and the optimization process. Zoph _et al._[42] utilize the RNN network to sequentially encode the neural architecture and train the network with the REINFORCE policy gradient algorithm [64]. Baker _et al._[56] adopt Q-learning [65; 66] to train the policy network and realize competitive model design. Current evolutionary approaches [48; 36] explore neural structures through mutations on layers and hyper-parameters and differ in their evolution strategies. Notably, recent works [67; 68; 69] propose training-free NAS methods to mitigate the issue of huge computational expense. Instead of training from scratch, they evaluate the searched networks by some empirically designed metrics that can reflect model effectiveness. In this paper, we adopt the initial gradient matching loss as the training free metric for our network search and provide experimental evidence in the later section that such a search metric highly correlates with the actual performance.
## 3 Problem Formulation
**Basics of Gradient Inversion.** We consider the training process of a classification model \\(f_{\\theta}\\) parameterized by \\(\\theta\\) in FL scenarios. The real gradients \\(\\mathbf{g}\\) are calculated from a private batch (with real images \\(\\mathbf{x}\\) and real labels \\(\\mathbf{y}\\)) at the client side. The universal goal of gradient inversion attacks is to search for some fake images \\(\\mathbf{\\hat{x}}\\in\\mathbb{R}^{B\\times H\\times W\\times C}\\) with labels \\(\\mathbf{\\hat{y}}\\in\\{0,1\\}^{B\\times L}\\) so that (\\(\\mathbf{\\hat{x}}\\), \\(\\mathbf{\\hat{y}}\\)) can be close to (\\(\\mathbf{x}\\), \\(\\mathbf{y}\\)) as much as possible, where \\(B\\), \\(H\\), \\(W\\), \\(C\\), and \\(L\\) are respectively batch size, image height, image width, number of channels, and number of classes. This can be realized by minimizing the gradient matching loss [11]:
\\[\\mathbf{\\hat{x}}^{*},\\mathbf{\\hat{y}}^{*}=\\operatorname*{arg\\,min}_{\\mathbf{ \\hat{x}},\\mathbf{\\hat{y}}}\\mathcal{D}(\
abla_{\\theta}\\mathcal{L}(f_{\\theta}( \\mathbf{\\hat{x}}),\\mathbf{\\hat{y}}),\\mathbf{g}), \\tag{1}\\]
where \\(\\mathcal{D}(\\cdot,\\cdot)\\) is the distance metric (e.g., \\(l_{2}\\)-norm loss, cosine-similarity loss) for the gradient matching loss and \\(\\mathcal{L}(\\cdot,\\cdot)\\) is the loss function of the global model \\(f_{\\theta}\\).
Previous works [12; 14] in this field have revealed that the ground truth labels \\(\\mathbf{y}\\) can be directly inferred from the uploaded gradients \\(\\mathbf{g}\\). Therefore, the formulation in (1) can be simplified as:
\\[\\mathbf{\\hat{x}}^{*}=\\operatorname*{arg\\,min}_{\\mathbf{\\hat{x}}}\\mathcal{D}(F (\\mathbf{\\hat{x}}),\\mathbf{g}), \\tag{2}\\]
where \\(F(\\mathbf{\\hat{x}})=\
abla_{\\theta}\\mathcal{L}(f_{\\theta}(\\mathbf{\\hat{x}}), \\mathbf{\\hat{y}})\\) calculates the gradients of \\(f_{\\theta}\\) provided with \\(\\mathbf{\\hat{x}}\\).
The key challenge of (2) is that gradients only provide limited information of private data and there can even exist a pair of different data having the same gradients [70]. To mitigate this issue, subsequent works incorporate various regularization terms (e.g., total variation loss [13], group consistency loss [14]) as prior knowledge. Therefore, the overall optimization becomes:
\\[\\mathbf{\\hat{x}}^{*}=\\operatorname*{arg\\,min}_{\\mathbf{\\hat{x}}}\\mathcal{D}(F (\\mathbf{\\hat{x}}),\\mathbf{g})+\\lambda\\mathcal{R}_{prior}(\\mathbf{\\hat{x}}), \\tag{3}\\]
where \\(\\mathcal{R}_{prior}(\\cdot)\\) is the introduced regularization that can establish some image priors for the attacks, and \\(\\lambda\\) is the weight factor.
**GAN-based Gradient Inversion.** Nevertheless, the optimization of (3) is still limited to the pixel space. Given a well pre-trained GAN, an intuitive idea is to shift the optimization from the pixel space to the GAN latent space:
\\[\\mathbf{z}^{*}=\\operatorname*{arg\\,min}_{\\mathbf{z}}\\mathcal{D}(F(G_{\\omega}( \\mathbf{z})),\\mathbf{g})+\\lambda\\mathcal{R}_{prior}(G_{\\omega}(\\mathbf{z})), \\tag{4}\\]
where \\(G_{\\omega}\\) and \\(\\mathbf{z}\\in\\mathbb{R}^{B\\times l}\\) are respectively the generator and the latent vector of the pre-trained GAN. By reducing the optimization space from \\(\\mathbb{R}^{B\\times H\\times W\\times C}\\) to \\(\\mathbb{R}^{B\\times l}\\), (4) overcomes the uncertainty of directly optimizing the extensive pixels and exploits the abundant prior knowledge encoded in the pre-trained GAN. Based on this, recently emerged GAN-based attacks [15; 16] explore various search strategies within the pre-trained GAN to utilize its expression ability.
**Gradient Inversion via Over-parameterized Networks.** But as previously mentioned, incorporating such GAN priors is often impractical in realistic scenarios where the distribution of \\(\\mathbf{x}\\) is likely to be mismatched with the training data of the pre-trained GAN. Furthermore, it has already been discovered in [20] that explicitly introducing regularization in (3) or (4) may not necessarily result in convergence towards \\(\\mathbf{x}\\), as even ground truth images can not guarantee minimal loss when \\(\\mathcal{R}_{prior}(\\cdot)\\) is added. To mitigate these issues, Zhang _et al._[20] propose to leverage an over-parameterized network as implicit prior knowledge:
\\[\\phi^{*}=\\operatorname*{arg\\,min}_{\\phi}\\mathcal{D}(F(G_{over}(\\mathbf{z_{0}} ;\\phi)),\\mathbf{g}), \\tag{5}\\]where \\(G_{over}\\) is the over-parameterized convolutional network with excessive parameters \\(\\phi\\), and \\(\\mathbf{z_{0}}\\) is the randomly generated but fixed latent code. Note that the regularization term is omitted in (5). This is because the architecture of \\(G_{over}\\) itself can serve as implicit regularization, for convolutional networks have been proven to possess implicit priors that prioritize clean images rather than noise as shown in [21]. Thus, the generated images that highly resemble the ground truth images can be obtained through \\(\\mathbf{\\hat{x}}^{*}=G_{over}(\\mathbf{z_{0}};\\phi^{*})\\). However, only a changeless over-parameterized network is employed for all the attack settings in [20]. As previously shown in Figure 1, although the network is over-parameterized, the attack performance exhibits significant differences when adopting different architectures. Therefore, we propose to further exploit such implicit priors by searching the optimal over-parameterized network \\(G_{opt}\\) for each batch. We will discuss how to realize this in Section 4.
## 4 Method
Our proposed GI-NAS attack is carried out in two stages. In the first stage, we conduct our architecture search to decide the optimal model \\(G_{opt}\\). Given that the attackers may only hold limited resources, we utilize the initial gradient matching loss as the training-free search metric to reduce the computational overhead. In the second stage, we iteratively optimize the parameters of the selected model \\(G_{opt}\\) to recover the sensitive data. Figure 2 illustrates the overview of our method.
### Training-free Optimal Model Search
#### 4.1.1 Model Search Space Design
One crucial factor for NAS is that the potential search space is large and diverse enough to cover the optimal design. Therefore, we adopt U-Net [71], a typical convolutional neural architecture as the fundamental of our model search, since the skip connection patterns between its encoders and decoders can provide adequate alternatives of model structure. Besides, the configurations of the upsampling modules (e.g., kernel size, activation function) can also enable numerous possibilities when combined. Following previous methods [72; 73], we enlarge the search space of our model from two aspects, namely _Upsampling Modules_ and _Skip Connection Patterns_.
**Search Space for Upsampling Modules.** We decompose the upsampling operations into five key components: feature upsampling, feature transformation, activation function, kernel size, and dilation rate. Then, we allocate a series of possible options to each of these components. When deciding on feature upsampling, we choose from commonly used interpolation techniques, such as bilinear interpolation, bicubic interpolation, and nearest-neighbour interpolation. As for feature transformation, we choose from classical convolution techniques, such as 2D convolution, separable convolution, and depth-wise convolution. As regards activation function, we select from ReLU, LeakyReLU, PReLU, etc. Furthermore, we supply kernel size and dilation rate with more choices,
Figure 2: Overview of the proposed GI-NAS attack. We leverage a two-stage strategy for private batch recovery. In the first stage, we traverse the model search space and calculate the initial gradient matching loss (i.e., our training-free metric) of each model based on the fixed input \(\mathbf{z_0}\). We regard the model that achieves the minimal initial loss as our best model, as its performance at the start stands out among numerous candidates. In the second stage, we adopt the architecture of the previously found best model and optimize its excessive parameters to reconstruct the private data.
such as \\(1\\times 1\\), \\(3\\times 3\\), or \\(5\\times 5\\) for kernel size and \\(1\\), \\(3\\), or \\(5\\) for dilation rate. The combination of these flexible components can contribute to the diversity of upsampling modules.
**Search Space for Skip Connection Patterns.** We assume that there are \\(t\\) levels of encoders and decoders in total, and denote them as \\(e_{1},e_{2}, ,e_{t}\\) and \\(d_{1},d_{2}, ,d_{t}\\). As shown in Figure 3, We consider different skip connection patterns between encoders and decoders. To represent each of these patterns, we define a skip connection matrix \\(\\mathbf{A}\\in\\{0,1\\}^{t\\times t}\\) that serves as a mask to determine whether there will be new residual connections [74] between pairs of encoders and decoders. More concretely, \\(\\mathbf{A}_{ij}=1\\) indicates that there exists a skip connection from \\(e_{i}\\) to \\(d_{j}\\) and \\(\\mathbf{A}_{ij}=0\\) means that there isn't such a skip connection. As the shapes of feature maps across different network levels can vary significantly (e.g., \\(64\\times 64\\) for the output of \\(e_{i}\\) and \\(256\\times 256\\) for the input of \\(d_{j}\\)), we introduce connection scale factors to tackle this inconsistency and decompose all the possible scale factors into a series of \\(2\\times\\) upsampling operations or downsampling operations with shared weights. By allocating \\(0\\) or \\(1\\) to each of the \\(t^{2}\\) bits in \\(\\mathbf{A}\\), we broaden the search space of skip connection patterns as there are \\(2^{t^{2}}\\) possibilities altogether and we only need to sample a portion of them.
#### 4.1.2 Optimal Model Selection
We build up our model search space \\(\\mathcal{M}\\) by combining the possibilities of the aforementioned upsampling modules and skip connection patterns. We assume that the size of our model search space is \\(n\\) and the candidates inside it are denoted as \\(\\mathcal{M}=\\{G_{1},G_{2}, ,G_{n}\\}\\). We first sample the latent code \\(\\mathbf{z_{0}}\\) from the Gaussian distribution and freeze its values on all the models for fair comparison. We then traverse the model search space and calculate the initial gradient matching loss of each individual \\(G_{r}\\left(1\\leq r\\leq n\\right)\\):
\\[\\mathcal{L}_{grad}(G_{r})=\\mathcal{D}(\\mathcal{T}(F(G_{r}(\\mathbf{z_{0}};\\phi_ {r}))),\\mathbf{g}), \\tag{6}\\]
where \\(\\mathcal{L}_{grad}(\\cdot)\\) is the gradient matching loss, \\(\\mathcal{T}(\\cdot)\\) is the estimated gradient transformation [15], and \\(\\phi_{r}\\) are the parameters of \\(G_{r}\\). Here we introduce \\(\\mathcal{T}(\\cdot)\\) to estimate the gradient transformation following the previous defense auditing work [15], since the victims may apply defense strategies to \\(\\mathbf{g}\\) (e.g., Gradient Clipping [31]) and only release disrupted forms of gradients. Empirically, the model that can perform the best at the start and stand out from numerous candidates is likely to have better implicit architectural priors with respect to the private batch. Therefore, we regard the model that achieves the minimal initial loss as our best model \\(G_{opt}\\) and update our selection during the traversal. Since only the initial loss is calculated and no back-propagation is involved, this search process is training-free and hence computationally efficient.
### Private Batch Recovery via the Optimal Model
After deciding on the optimal model \\(G_{opt}\\), we iteratively optimize its parameters to minimize the gradient matching loss:
\\[\\gamma_{k+1}=\\gamma_{k}-\\eta\
abla_{\\gamma_{k}}\\mathcal{L}_{grad}(G_{opt}), \\tag{7}\\]
Figure 3: The design of search space for skip connection patterns. Different skip connection patterns are determined by the skip connection matrix \\(\\mathbf{A}\\in\\{0,1\\}^{t\\times t}\\). \\(\\mathbf{A}_{ij}=1\\) indicates that there exists a skip connection from \\(e_{i}\\) to \\(d_{j}\\) and \\(\\mathbf{A}_{ij}=0\\) means that there isn’t such a skip connection.
where \\(\\gamma\\) are the parameters of \\(G_{opt}\\), \\(\\eta\\) is the learning rate, and \\(k\\) is the number of iterations. Once the above process converges and we obtain \\(\\gamma^{*}\\) that satisfy the minimum loss, the private batch can be reconstructed by \\(\\mathbf{\\hat{x}}^{*}=G_{opt}(\\mathbf{z_{0}};\\gamma^{*})\\).
## 5 Experiments
### Setup
**Experimental Details.** We evaluate our method on CIFAR-10 [75] and ImageNet [22] with the resolution of \\(32\\times 32\\) and \\(256\\times 256\\). Unlike many previous methods [16; 17] that scale down the ImageNet images to \\(64\\times 64\\), here we emphasize that we adopt the high-resolution version of ImageNet. Thus, our setting is more rigorous and realistic. Following previous gradient inversion works [11; 20], we adopt ResNet-\\(18\\)[74] as our global model and utilize the same preprocessing procedures. We set the search space size as \\(n=5000\\) and randomly generate the alternative models by arbitrarily changing the options of upsampling modules and skip connection patterns.
**State-of-the-art Baselines for Comparison.** We implement the following gradient inversion methods: (1) _IG (Inverting Gradients)_[13]: pixel-level reconstruction with angle-based loss function; (2) _GI (GradInversion)_[14]: realizing batch-level restoration via multiple regularization priors; (3) _GGL (Generative Gradient Leakage)_[15]: employing strong GAN priors to produce high-fidelity images under severe defense strategies; (4) _GIAS (Gradient Inversion in Alternative Space)_[16]: searching the optimal latent code while optimizing in the generator parameter space; (5) _GION (Gradient Inversion via Over-parameterized Networks)_[20]: designing an over-parameterized convolutional network with excessive parameters and employing a fixed network architecture as implicit regularization. Note that when implementing GAN-based methods such as GGL and GIAS, we adopt BigGAN [76] pre-trained on ImageNet, which may result in mismatched priors when the target data is from CIFAR-10. However, such mismatch can be very common in realistic FL scenarios.
**Evaluation Metrics.** We utilize four metrics to measure the reconstruction results: (1) _PSNR \\(\\uparrow\\) (Peak Signal-to-Noise Ratio)_; (2) _SSIM \\(\\uparrow\\) (Structural Similarity Index Measure)_; (3) _FSIM \\(\\uparrow\\) (Feature Similarity Index Measure)_; (4) _LPIPS \\(\\downarrow\\) (Learned Perceptual Image Patch Similarity)_[77]. Note that \"\\(\\downarrow\\)\" indicates that the lower the metric, the better the attack performance while \"\\(\\uparrow\\)\" indicates that the higher the metric, the better the attack performance.
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline \\multirow{2}{*}{Metric} & \\multicolumn{5}{c}{CIFAR-10} & \\multicolumn{5}{c}{ImageNet} \\\\ \\cline{2-13} & IG & GI & GGL & GIAS & GION & **GI-NAS** & IG & GI & GGL & GIAS & GION & **GI-NAS** \\\\ \\hline PSNR \\(\\uparrow\\) & 16.3188 & 15.4613 & 12.4938 & 17.3687 & 30.8652 & **35.9883** & 7.9419 & 8.5070 & 11.6255 & 10.0602 & 21.9942 & **23.2578** \\\\ SSIM \\(\\uparrow\\) & 0.5710 & 0.5127 & 0.3256 & 0.6239 & 0.9918 & **0.9983** & 0.0815 & 0.1157 & 0.2586 & 0.2408 & 0.6188 & **0.6848** \\\\ FSIM \\(\\uparrow\\) & 0.7564 & 0.7311 & 0.6029 & 0.7800 & 0.9960 & **0.9991** & 0.5269 & 0.5299 & 0.5924 & 0.5719 & 0.8198 & **0.8513** \\\\ LPIPS \\(\\downarrow\\) & 0.4410 & 0.4878 & 0.5992 & 0.4056 & 0.0035 & **0.0009** & 0.7194 & 0.7168 & 0.6152 & 0.6563 & 0.4605 & **0.3952** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Quantitative comparison of GI-NAS to state-of-the-art gradient inversion methods on CIFAR-10 (\\(32\\times 32\\)) and ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\).
Figure 4: Qualitative comparison of GI-NAS to state-of-the-art gradient inversion methods on CIFAR-10 (\\(32\\times 32\\)) with the default batch size \\(B=4\\).
### Comparison with State-of-the-art Methods
Firstly, we compare GI-NAS with state-of-the-art gradient inversion methods on CIFAR-10. From Table 1, we conclude that GI-NAS achieves the best results with significant performance improvement. For instance, we realize a \\(5.12\\) dB PSNR increase than GION, and our LPIPS value is \\(74.3\\%\\) smaller than that of GION. These prove that in contrast to GION that optimizes on a fixed network, our NAS strategy indeed comes into effect and better leverages the implicit architectural priors.
We also discover that the GAN-based method GGL underperforms the previous GAN-free methods (i.e., IG and GI). This is because GGL utilizes BigGAN [76] pre-trained on ImageNet for generative priors in our settings, which has an inherent distribution bias with the target CIFAR-10 data domain. Besides, GGL only optimizes the latent vectors and cannot dynamically handle the mismatch between the training data of GAN and the target data. In contrast, GIAS optimizes both the latent vectors and the GAN parameters, which can reduce the distribution divergence and alleviate such mismatch to some extent. Therefore, although also GAN-based, GIAS exhibits much better performance than GGL on CIFAR-10. Figure 4 shows the qualitative comparison of different methods on CIFAR-10. We notice that GI-NAS outperforms all the compared methods in terms of pixel-level recovery. Although GION also achieves results relatively close to the ground truth images, it still suffers from a few flaws and artifacts in some areas compared to GI-NAS. This again verifies the necessity of searching the optimal architecture. We also note that there are huge differences between the images generated by GGL and the original images due to the mismatched GAN priors that we have previously discussed.
**High-Resolution Images Recovery.** We then consider a more challenging situation and compare various methods on ImageNet with the resolution of \\(256\\times 256\\). As shown in Table 1, most methods encounter significant performance decline when attacking high-resolution images. The amplification of image pixels greatly increases the complexity of reconstruction tasks and thus obstructs the optimization process for optimal images. But GI-NAS still achieves the best attack results, with a PSNR increase of \\(1.26\\) dB than GION. From Figure 5, we notice that GI-NAS can recover the rich semantic features in the original images, with fewer speckles or smudges than GION. This is because GI-NAS has discovered the architecture that better suits each batch and thus handles the details better.
**Extension to Larger Batch Sizes.** As shown in Table 2, we extend GI-NAS to larger batch sizes on ImageNet, which is more in line with the actual training process of FL systems. We observe that the performance of most methods degrades as the batch size increases, while GI-NAS is insusceptible
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline Metric & Batch Size & IG & GI & GGL & GIAS & GION & **GI-NAS** & Batch Size & IG & GI & GGL & GIAS & GION & **GI-NAS** \\\\ \\hline PSNR \\(\\uparrow\\) & & 7.6011 & 8.3419 & 11.5454 & 9.6216 & 20.4841 & **20.8451** & & 7.1352 & 7.9079 & 10.9910 & 9.6601 & 20.6845 & **21.7933** \\\\ SSIM \\(\\uparrow\\) & 8 & 0.0762 & 0.1113 & 0.2571 & 0.2218 & 0.5681 & **0.6076** & & 0.0549 & 0.0892 & 0.2583 & 0.2270 & 0.5868 & **0.6236** \\\\ FSIM \\(\\uparrow\\) & & 0.5103 & 0.5365 & 0.5942 & 0.5765 & 0.7908 & **0.8105** & & 0.4794 & 0.4943 & 0.5811 & 0.5618 & 0.7978 & **0.8163** \\\\ LPIPS \\(\\downarrow\\) & & 0.7374 & 0.7251 & 0.6152 & 0.6621 & 0.5162 & **0.4613** & & 0.7540 & 0.7475 & 0.6261 & 0.6667 & 0.5219 & **0.4651** \\\\ \\hline PSNR \\(\\uparrow\\) & & 7.7991 & 8.0233 & 11.4766 & 9.2563 & 20.3078 & **21.4267** & & 7.1085 & 7.8752 & 10.9897 & 10.0563 & 16.1275 & **20.0011** \\\\ SSIM \\(\\uparrow\\) & & 0.0704 & 0.0952 & 0.2561 & 0.2139 & 0.5507 & **0.6414** & & 0.0554 & 0.06763 & 0.2558 & 0.2361 & 0.3415 & **0.5485** \\\\ FSIM \\(\\uparrow\\) & & 0.5077 & 0.5165 & 0.5872 & 0.5832 & 0.7840 & **0.8157** & & 0.4940 & 0.4951 & 0.5889 & 0.5762 & 0.6823 & **0.7819** \\\\ LPIPS \\(\\downarrow\\) & & 0.7424 & 0.7362 & 0.6165 & 0.6656 & 0.5313 & **0.4613** & & 0.7493 & 0.7427 & 0.6296 & 0.6674 & 0.6291 & **0.5265** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Quantitative comparison of GI-NAS to state-of-the-art gradient inversion methods on ImageNet (\\(256\\times 256\\)) with larger batch sizes when \\(B>4\\).
Figure 5: Qualitative comparison of GI-NAS to state-of-the-art gradient inversion methods on ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\).
to batch sizes and continuously generates high-quality images even at \\(B=32\\). Besides, GI-NAS is able to acquire consistent performance gains on the basis of GION at all the given batch sizes. This further provides evidence for the necessity and effectiveness of our batch-level architecture search.
**Attacks under Defense Strategies.** Next, we evaluate how these attacks perform when defenses are applied on ImageNet. Following previous works [15; 11], we consider four strict defense strategies: (1) _Gaussian Noise_[31] with a standard deviation of \\(0.1\\); (2) _Gradient Clipping_[31] with a clipping bound of \\(4\\); (3) _Gradient Sparsification_[33] with a pruning rate of \\(90\\%\\); (4) _Representative Perturbation (Soteria)_[35] with a pruning rate of \\(80\\%\\). For fair comparison, we apply the estimated gradient transformation \\(\\mathcal{T}(\\cdot)\\) described in (6) to all the attack methods. From Table 3, we discover that although the gradients have been disrupted by the imposed defense strategies, GI-NAS still realizes the best reconstruction effects in almost all the tested cases. The only exception is that GGL outperforms GI-NAS in terms of SSIM and LPIPS when the defense strategy is Gaussian Noise [31]. This is because the gaussian noise with a standard deviation of \\(0.1\\) can severely corrupt the gradients and the information carried inside the gradients is no longer enough for batch recovery. However, GGL only optimizes the latent vectors and can still generate natural images that possess some semantic features by the pre-trained GAN. Thus, GGL can still obtain not bad performance even though the generated images are quite dissimilar to the original ones.
### Further Analysis
**Effectiveness of the Training-free Metric.** In Figure 6, we analyze how the initial gradient matching loss correlates with the real performance when attacking the same batch by different models on ImageNet. The Kendall's \\(\\tau\\)[78] between these two indicators is -\\(0.491\\), which means that they are highly relevant and a smaller initial loss is more likely to result in better PSNR performance.
**Ablation Study.** In Table 4, We report the PSNR results of GION and different variants of GI-NAS. GI-NAS\\({}^{*}\\) optimizes on a fixed over-parameterized network, which means that it is essentially the same as GION. GI-NAS\\({}^{\\dagger}\\) only searches the skip connection patterns while GI-NAS\\({}^{\\ddagger}\\) only searches the upsampling modules. We observe that the performance of GI-NAS\\({}^{*}\\) is very close to that of GION, as both of them adopt a changeless architecture. GI-NAS\\({}^{\\dagger}\\) and GI-NAS\\({}^{\\ddagger}\\) indeed improve the attacks on the basis of GION or GI-NAS\\({}^{*}\\), which validates the contributions of individual search types. GI-NAS combines the above two search types and thus performs the best among all the variants.
\\begin{table}
\\begin{tabular}{l c c c} \\hline \\hline Method & Upsampling Search & Connection Search & CIFAR-10 & ImageNet \\\\ \\hline GION & ✗ & ✗ & 30.8652 & 21.9942 \\\\ GI-NAS\\({}^{*}\\) & ✗ & ✗ & 30.6514 & 22.0126 \\\\ GI-NAS\\({}^{\\dagger}\\) & ✗ & ✓ & 35.4336 & 22.4597 \\\\ GI-NAS\\({}^{\\ddagger}\\) & ✓ & ✗ & 35.3542 & 22.3992 \\\\
**GI-NAS** & ✓ & ✓ & **35.9883** & **23.2578** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 4: Ablation Study. We report the PSNR results of GION and different variants of GI-NAS on CIFAR-10 (\\(32\\times 32\\)) and ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\).
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c c} \\hline \\hline Metric & Defense & IG & GI & GGL & GIAS & GION & **GL-NAS** & Defense & IG & GI & GGL & GIAS & GION & **GI-NAS** \\\\ \\hline PSNR \\(\\uparrow\\) & 7.7020 & 7.4553 & 9.7381 & 6.8520 & 9.2449 & **11.2567** & & & 8.6540 & 8.6519 & 9.8019 & 9.7894 & 13.7358 & **15.4331** \\\\ SSIM \\(\\uparrow\\) & Gaussian & 0.0311 & 0.0197 & **0.2319** & 0.1045 & 0.0277 & 0.0416 & Gradient & 0.0669 & 0.0627 & 0.2315 & 0.2515 & 0.2387 & **0.3154** \\\\ FSIM \\(\\uparrow\\) & Noise & 0.4518 & 0.3804 & 0.5695 & 0.4652 & 0.5714 & **0.5845** & Sparsification & 0.5073 & 0.4696 & 0.5624 & 0.5664 & 0.6365 & **0.6786** \\\\ LPIPS \\(\\uparrow\\) & 0.7775 & 0.8220 & **0.6639** & 0.7282 & 0.8033 & 0.7805 & & & 0.7480 & 0.7681 & 0.6589 & 0.6604 & 0.6909 & **0.6535** \\\\ \\hline PSNR \\(\\uparrow\\) & 8.0690 & 7.2342 & 11.2315 & 9.7556 & 22.2016 & **23.2074** & & & 6.5675 & 6.6693 & 11.5690 & 0.7588 & 22.9617 & **23.7917** \\\\ SSIM \\(\\uparrow\\) & Gradient & 0.0881 & 0.0659 & 0.2531 & 0.2547 & 0.6266 & **0.7023** & & & 0.0323 & 0.0382 & 0.2623 & 0.2425 & 0.6698 & **0.7143** \\\\ FSIM \\(\\uparrow\\) & Clipping & 0.5362 & 0.5434 & 0.5900 & 0.5794 & 0.8229 & **0.8635** & & & 0.4929 & 0.4454 & 0.5964 & 0.5697 & 0.8430 & **0.8658** \\\\ LPIPS \\(\\downarrow\\) & 0.7227 & 0.7230 & 0.6117 & 0.6485 & 0.4603 & **0.3743** & & & 0.7506 & 0.7703 & 0.6031 & 0.6631 & 0.4253 & **0.3655** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Quantitative comparison of GI-NAS to state-of-the-art gradient inversion methods on ImageNet (\\(256\\times 256\\)) under various defense strategies with the default batch size \\(B=4\\).
Figure 6: Correlation between the search metric and the actual performance.
Conclusion
In this paper, we propose GI-NAS, a novel gradient inversion method that makes deeper use of the implicit architectural priors for gradient inversion. We first systematically analyze existing gradient inversion methods and emphasize the necessity of adaptive architecture search. We then build up our model search space by designing different upsampling modules and skip connection patterns. To reduce the computational overhead, we leverage the initial gradient matching loss as the training-free search metric to select the optimal model architecture and provide experimental evidence that such a metric highly correlates with the real performance. Extensive experiments prove that GI-NAS can achieve state-of-the-art performance compared to existing methods, even under more practical FL scenarios with high-resolution images, large-sized batches, and advanced defense strategies. The ablation study further demonstrates the significance of our optimal architecture search.
## References
* [1]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2017) A deep learning approach to gradient inversion. In Proceedings of the 2017 IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1-10. Cited by: SS2.
* [2]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-633. Cited by: SS2.
* [3]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-633. Cited by: SS2.
* [4]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-633. Cited by: SS2.
* [5]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [6]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [7]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [8]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [9]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [10]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [11]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [12]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [13]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [14]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [15]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [16]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [17]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-638. Cited by: SS2.
* [18]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [19]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [20]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [21]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [22]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [23]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [24]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [25]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [26]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [27]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [28]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [29]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [30]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [31]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [32]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [33]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [34]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [35]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [36]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [37]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [38]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [39]M. Abadi, A. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [40]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [41]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [42]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [43]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [44]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [45]A. Agarwal, S. Agarwal, S. Agarwal, and A. Courville (2018) Deep learning approach to gradient inversion. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, pp. 619-639. Cited by: SS2.
* [46]A. Agarwal* [15] Li, Zhuohang and Zhang, Jiaxin and Liu, Luyang and Liu, Jian. Auditing privacy defenses in federated learning via generative gradient leakage. _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10132-10142, 2022.
* [16] Jeon, Jinwoo and Lee, Kangwook and Oh, Sewoong and Ok, Jungseul and others. Gradient inversion with generative image prior. _Advances in neural information processing systems_, pages 29898-29908, 2021.
* [17] Fang, Hao and Chen, Bin and Wang, Xuan and Wang, Zhi and Xia, Shu-Tao. GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization. _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4967-4976, 2023.
* [18] Goodfellow, Ian and Pouget-Abadie, Jean and Mirza, Mehdi and Xu, Bing and Warde-Farley, David and Ozair, Sherjil and Courville, Aaron and Bengio, Yoshua. Generative adversarial networks. _Communications of the ACM_, pages 139-144, 2020.
* [19] Creswell, Antonia and White, Tom and Dumoulin, Vincent and Arulkumaran, Kai and Sengupta, Biswa and Bharath, Anil A. Generative adversarial networks: An overview. _IEEE signal processing magazine_, pages 53-65, 2018.
* [20] Zhang, Chi and Xiaoman, Zhang and Sothiwat, Ekanut and Xu, Yanyu and Liu, Ping and Zhen, Liangli and Liu, Yong. Generative Gradient Inversion via Over-Parameterized Networks in Federated Learning. _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5126-5135, 2023.
* [21] Ulyanov, Dmitry and Vedaldi, Andrea and Lempitsky, Victor. Deep image prior. _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 9446-9454, 2018.
* [22] Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255, 2009.
* [23] Yu, Wenbo and Chen, Bin and Zhang, Qinshan and Xia, Shu-Tao. Editable-DeepSC: Cross-Modal Editable Semantic Communication Systems. _arXiv preprint arXiv:2310.10347_, 2023.
* [24] Fang, Hao and Qiu, Yixiang and Yu, Hongyao and Yu, Wenbo and Kong, Jiawei and Chong, Baoli and Chen, Bin and Wang, Xuan and Xia, Shu-Tao. Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses. _arXiv preprint arXiv:2402.04013_, 2024.
* [25] Tan, Yuqi and Peng, Yuang and Fang, Hao and Chen, Bin and Xia, Shu-Tao. WaterDiff: Perceptual Image Watermarks Via Diffusion Model. _ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 3250-3254, 2024.
* [26] Hatamizadeh, Ali and Yin, Hongxu and Roth, Holger R. and Li, Wenqi and Kautz, Jan and Xu, Daguang and Molchanov, Pavlo. GradViT: Gradient Inversion of Vision Transformers. _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10021-10030, 2022.
* [27] Han, Kai and Wang, Yunhe and Chen, Hanting and Chen, Xinghao and Guo, Jianyuan and Liu, Zhenhua and Tang, Yehui and Xiao, An and Xu, Chunjing and Xu, Yixing and others. A survey on vision transformer. _IEEE transactions on pattern analysis and machine intelligence_, pages 87-110, 2022.
* [28] Huang, Yangsibo and Gupta, Samyak and Song, Zhao and Li, Kai and Arora, Sanjeev. Evaluating gradient inversion attacks and defenses in federated learning. _Advances in Neural Information Processing Systems_, pages 7232-7241, 2021.
* [29] Zhang, Rui and Guo, Song and Wang, Junxiao and Xie, Xin and Tao, Dacheng. A survey on gradient inversion: Attacks, defenses and future directions. _arXiv preprint arXiv:2206.07284_, 2022.
* [30] Dong, Tian and Zhao, Bo and Lyu, Lingjuan. Privacy for free: How does dataset condensation help privacy? _International Conference on Machine Learning_, pages 5378-5396, 2022.
* [31] Geyer, Robin C and Klein, Tassilo and Nabi, Moin. Differentially private federated learning: A client level perspective. _arXiv preprint arXiv:1712.07557_, 2017.
* [32] Wei, Wenqi and Liu, Ling and Wu, Yanzhao and Su, Gong and Iyengar, Arun. Gradient-leakage resilient federated learning. _2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)_, pages 797-807, 2021.
* [33] Aji, Alham Fikri and Heafield, Kenneth. Sparse communication for distributed gradient descent. _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 440-445, 2017.
* [34] Lin, Yujun and Han, Song and Mao, Huizi and Wang, Yu and Dally, Bill. Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training. _International Conference on Learning Representations_, 2018.
* [35] Sun, Jingwei and Li, Ang and Wang, Binghui and Yang, Huanrui and Li, Hai and Chen, Yiran. Soteria: Provable defense against privacy leakage in federated learning from representation perspective. _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 9311-9319, 2021.
* [36] Suganuma, Masanori and Ozay, Mete and Okatani, Takayuki. Exploiting the potential of standard convolutional autoencoders for image restoration by evolutionary search. _International Conference on Machine Learning_, pages 4771-4780, 2018.
* [37] Chu, Xiangxiang and Zhang, Bo and Ma, Hailong and Xu, Ruijun and Li, Qingyuan. Fast, accurate and lightweight super-resolution with neural architecture search. _2020 25th International conference on pattern recognition (ICPR)_, pages 59-64, 2021.
* [38] Nekrasov, Vladimir and Chen, Hao and Shen, Chunhua and Reid, Ian. Fast neural architecture search of compact semantic segmentation models via auxiliary cells. _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 9126-9135, 2019.
* [39] Liu, Chenxi and Chen, Liang-Chieh and Schroff, Florian and Adam, Hartwig and Hua, Wei and Yuille, Alan L and Fei-Fei, Li. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 82-92, 2019.
* [40] Chen, Liang-Chieh and Collins, Maxwell and Zhu, Yukun and Papandreou, George and Zoph, Barret and Schroff, Florian and Adam, Hartwig and Shlens, Jon. Searching for efficient multi-scale architectures for dense image prediction. _Advances in neural information processing systems_, 2018.
* [41] Real, Esteban and Aggarwal, Alok and Huang, Yanping and Le, Quoc V. Regularized evolution for image classifier architecture search. _Proceedings of the aaai conference on artificial intelligence_, pages 4780-4789, 2019.
* [42] Zoph, Barret and Le, Quoc V. Neural architecture search with reinforcement learning. _arXiv preprint arXiv:1611.01578_, 2016.
* [43] Real, Esteban and Moore, Sherry and Selle, Andrew and Saxena, Saurabh and Suematsu, Yutaka Leon and Tan, Jie and Le, Quoc V and Kurakin, Alexey. Large-Scale Evolution of Image Classifiers. _International conference on machine learning_, pages 2902-2911, 2017.
* [44] Zoph, Barret and Vasudevan, Vijay and Shlens, Jonathon and Le, Quoc V. Learning transferable architectures for scalable image recognition. _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 8697-8710, 2018.
* [45] Zhang, Haokui and Li, Ying and Chen, Hao and Shen, Chunhua. Memory-efficient hierarchical neural architecture search for image denoising. _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 3657-3666, 2020.
* [46] Angeline, Peter J and Saunders, Gregory M and Pollack, Jordan B. An evolutionary algorithm that constructs recurrent neural networks. _IEEE transactions on Neural Networks_, pages 54-65, 1994.
* [47] Liu, Hanxiao and Simonyan, Karen and Vinyals, Oriol and Fernando, Chrisantha and Kavukcuoglu, Koray. Hierarchical representations for efficient architecture search. _arXiv preprint arXiv:1711.00436_, 2017.
* [48] Miikkulainen, Risto and Liang, Jason and Meyerson, Elliot and Rawal, Aditya and Fink, Dan and Francon, Olivier and Raju, Bala and Shahrzad, Hormoz and Navruzyan, Arshak and Duffy, Nigel and others. Evolving deep neural networks. _Artificial intelligence in the age of neural networks and brain computing_, pages 269-287, 2024.
* [49] Floreano, Dario and Durr, Peter and Mattiussi, Claudio. Neuroevolution: from architectures to learning. _Evolutionary intelligence_, pages 47-62, 2008.
* [50] Stanley, Kenneth O and Miikkulainen, Risto. Evolving neural networks through augmenting topologies. _Evolutionary computation_, pages 99-127, 2002.
* [51] Stanley, Kenneth O and D'Ambrosio, David B and Gauci, Jason. A hypercube-based encoding for evolving large-scale neural networks. _Artificial life_, pages 185-212, 2009.
* [52] Jozefowicz, Rafal and Zaremba, Wojciech and Sutskever, Ilya. An empirical exploration of recurrent network architectures. _International conference on machine learning_, pages 2342-2350, 2015.
* [53] Bergstra, James and Yamins, Daniel and Cox, David. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. _International conference on machine learning_, pages 115-123, 2013.
* [54] Mendoza, Hector and Klein, Aaron and Feurer, Matthias and Springenberg, Jost Tobias and Hutter, Frank. Towards automatically-tuned neural networks. _Workshop on automatic machine learning_, pages 58-65, 2016.
* [55] Domhan, Tobias and Springenberg, Jost Tobias and Hutter, Frank. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. _Twenty-fourth international joint conference on artificial intelligence_, 2015.
* [56] Baker, Bowen and Gupta, Otkrist and Naik, Nikhil and Raskar, Ramesh. Designing neural network architectures using reinforcement learning. _International Conference on Learning Representations_, 2016.
* [57] Cai, Han and Chen, Tianyao and Zhang, Weinan and Yu, Yong and Wang, Jun. Efficient architecture search by network transformation. _Proceedings of the AAAI conference on artificial intelligence_, 2018.
* [58] Zhong, Zhao and Yan, Junjie and Wu, Wei and Shao, Jing and Liu, Cheng-Lin. Practical block-wise neural network architecture generation. _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2423-2432, 2018.
* [59] Liu, Hanxiao and Simonyan, Karen and Yang, Yiming. Darts: Differentiable architecture search. _arXiv preprint arXiv:1806.09055_, 2018.
* [60] Shin, Richard and Packer, Charles and Song, Dawn. Differentiable neural network architecture search. _International Conference on Learning Representations Workshop_, 2018.
* [61] Xie, Sirui and Zheng, Hehui and Liu, Chunxiao and Lin, Liang. SNAS: stochastic neural architecture search. _arXiv preprint arXiv:1812.09926_, 2018.
* [62] Cai, Han and Zhu, Ligeng and Han, Song. Proxylessnas: Direct neural architecture search on target task and hardware. _arXiv preprint arXiv:1812.00332_, 2018.
* [63] Ahmed, Karim and Torresani, Lorenzo. Maskconnect: Connectivity learning by gradient descent. _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 349-365, 2018.
* [64] Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine learning_, pages 229-256, 1992.
* [65] Watkins, Christopher JCH and Dayan, Peter. Q-learning. _Machine learning_, pages 279-292, 1992.
* [66] Clifton, Jesse and Laber, Eric. Q-learning: Theory and applications. _Annual Review of Statistics and Its Application_, pages 279-301, 2020.
* [67] Mellor, Joe and Turner, Jack and Storkey, Amos and Crowley, Elliot J. Neural architecture search without training. _International conference on machine learning_, pages 7588-7598, 2021.
* [68] Zhang, Miao and Su, Steven and Pan, Shirui and Chang, Xiaojun and Huang, Wei and Haffari, Gholamreza. Differentiable architecture search without training nor labels: A pruning perspective. _arXiv preprint arXiv:2106.11542_, 2021.
* [69] Chen, Wuyang and Gong, Xinyu and Wang, Zhangyang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. _arXiv preprint arXiv:2102.11535_, 2021.
* [70] Zhu, Junyi and Blaschko, Matthew B. R-gap: Recursive gradient attack on privacy. _International Conference on Learning Representations_, 2020.
* [71] Ronneberger, Olaf and Fischer, Philipp and Brox, Thomas. U-net: Convolutional networks for biomedical image segmentation. _Medical image computing and computer-assisted intervention-MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18_, pages 234-241, 2015.
* [72] Chen, Yun-Chun and Gao, Chen and Robb, Esther and Huang, Jia-Bin. Nas-dip: Learning deep image prior with neural architecture search. _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVIII 16_, pages 442-459, 2020.
* [73] Arican, Metin Ersin and Kara, Ozgur and Bredell, Gustav and Konukoglu, Ender. Isnas-dip: Image-specific neural architecture search for deep image prior. _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1960-1968, 2022.
* [74] He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian. Deep residual learning for image recognition. _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [75] Krizhevsky, Alex and Hinton, Geoffrey and others. Learning multiple layers of features from tiny images. 2009.
* [76] Brock, Andrew and Donahue, Jeff and Simonyan, Karen. Large Scale GAN Training for High Fidelity Natural Image Synthesis. _International Conference on Learning Representations_, 2018.
* [77] Zhang, Richard and Isola, Phillip and Efros, Alexei A and Shechtman, Eli and Wang, Oliver. The unreasonable effectiveness of deep features as a perceptual metric. _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 586-595, 2018.
* [78] Kendall, Maurice G. A new measure of rank correlation. _Biometrika_, pages 81-93, 1938.
* [79] Kingma, Diederik P and Ba, Jimmy. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [80] Liu, Yilin and Li, Jiang and Pang, Yunkui and Nie, Dong and Yap, Pew-Thian. The devil is in the upsampling: Architectural decisions made simpler for denoising with deep image prior. _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 12408-12417, 2023.
* [81] Cheng, Zezhou and Gadelha, Matheus and Maji, Subhransu and Sheldon, Daniel. A bayesian perspective on the deep image prior. _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 5443-5451, 2019.
## Appendix A Pseudocode of GI-NAS
We first elaborate on the pseudocode of GI-NAS in Algorithm 1. In the first loop, we select the optimal model \\(G_{opt}\\) by comparing the initial gradient matching loss among all the candidates. In the second loop, we optimize the parameters of the selected model \\(G_{opt}\\) to reconstruct the original data.
```
0:\\(n\\): the size of the model search space; \\(s\\): the random seed used to generate the model search space; \\(\\mathbf{g}\\): the uploaded real gradients; \\(m\\): the maximum iteration steps for the final optimization;
0:\\(\\mathbf{\\hat{x}}^{*}\\): the generated images;
1: Build up the model search space according to \\(n\\) and \\(s\\): \\(\\mathcal{M}=\\{G_{1},G_{2}, ,G_{n}\\}\\) with the parameters \\(\\Phi=\\{\\phi_{1},\\phi_{2}, ,\\phi_{n}\\}\\)
2:\\(\\mathbf{z_{0}}\\leftarrow\\mathcal{N}(0,1)\\), \\(loss_{min}\\leftarrow+\\infty\\)
3:for\\(i\\gets 1\\) to \\(\\mathbf{z}\\)do
4:\\(loss_{i}\\leftarrow\\mathcal{D}(\\mathcal{T}(F(G_{i}(\\mathbf{z_{0}};\\phi_{i})) ),\\mathbf{g})\\) // calculate the initial gradient matching loss
5:if\\(loss_{i}<loss_{min}\\)then
6:\\(loss_{min}\\gets loss_{i}\\), \\(G_{opt}\\gets G_{i}\\), \\(\\gamma_{1}\\leftarrow\\phi_{i}\\) // update the selection of \\(G_{opt}\\)
7:endif
8:endfor
9:for\\(k\\gets 1\\) to \\(m\\)do
10:\\(\\gamma_{k+1}\\leftarrow\\gamma_{k}-\\eta\
abla_{\\gamma_{k}}\\mathcal{D}(\\mathcal{T }(F(G_{opt}(\\mathbf{z_{0}};\\gamma_{k}))),\\mathbf{g})\\)
11:endfor
12:\\(\\gamma^{*}\\leftarrow\\gamma_{m+1}\\)
13:return\\(\\mathbf{\\hat{x}}^{*}=G_{opt}(\\mathbf{z_{0}};\\gamma^{*})\\)
```
**Algorithm 1** Gradient Inversion via Neural Architecture Search
## Appendix B More Experimental Results
### Attacks at Larger Batch Sizes on CIFAR-10
We present the attack results at larger batch sizes on CIFAR-10 in Table 5. We conclude that GI-NAS outperforms all the compared methods with significant performance gains. For instance, GI-NAS achieves an improvement of \\(9.17\\) dB in terms of PSNR compared to GION at \\(B=48\\). GI-NAS still exhibits great superiority over existing gradient inversion methods when facing larger batch sizes.
### Attacks under Defense Strategies on CIFAR-10
We also report the attack results under various defense strategies on CIFAR-10 in Table 6. We apply the estimated gradient transformation \\(\\mathcal{T}(\\cdot)\\) described in (6) to all the attacks and utilize the same defense strategies as in the earlier section of this paper: (1) _Gaussian Noise_[31] with a standard deviation of \\(0.1\\); (2) _Gradient Clipping_[31] with a clipping bound of \\(4\\); (3) _Gradient Sparsification_[33] with a pruning rate of \\(90\\%\\); (4) _Representative Perturbation (Soteria)_[35] with a pruning rate of \\(80\\%\\). We observe that GI-NAS again surpasses all the compared methods in almost all the circumstances with significant performance gains. For instance, GI-NAS achieves a PSNR increase of \\(9.52\\) dB than GION when the defense strategy is Gradient Sparsification [33]. The only exception is that GGL achieves better SSIM and LPIPS performance than GI-NAS when the defense strategy is
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c c c} \\hline \\hline Metric & Batch Size & IG & GI & GGL & GIAS & GION & **GI-NAS** & Batch Size & IG & GI & GGL & GIAS & GION & **GI-NAS** \\\\ \\hline PSNR \\(\\uparrow\\) & & 10.2647 & 11.2593 & 9.9964 & 11.1952 & 24.8746 & **30.5315** & & 9.5446 & 10.2000 & 9.1200 & 9.6376 & 29.5312 & **38.7054** \\\\ SSIM \\(\\uparrow\\) & & 0.2185 & 0.2431 & 0.1658 & 0.2638 & 0.9816 & **0.9948** & & 0.1574 & 0.1697 & 0.1552 & 0.1652 & 0.9932 & **0.9983** \\\\ FSIM \\(\\uparrow\\) & & 0.5254 & 0.5350 & 0.4988 & 0.5569 & 0.9869 & **0.9965** & & 0.5112 & 0.5167 & 0.4869 & 0.5126 & 0.9949 & **0.9987** \\\\ LPIPS \\(\\downarrow\\) & & 0.6254 & 0.6225 & 0.6655 & 0.5804 & 0.0177 & **0.0035** & & 0.6660 & 0.6571 & 0.7213 & 0.6554 & 0.0072 & **0.0020** \\\\ \\hline PSNR \\(\\uparrow\\) & & 9.5802 & 10.5863 & 9.4104 & 9.8669 & 28.6832 & **33.0586** & & 9.2363 & 9.9500 & 8.8575 & 9.2982 & 28.9723 & **31.8026** \\\\ SSIM \\(\\uparrow\\) & & 0.1595 & 0.2020 & 0.1481 & 0.1724 & 0.9892 & **0.9974** & & 0.1402 & 0.1643 & 0.1516 & 0.2180 & 0.9878 & **0.9941** \\\\ FSIM \\(\\uparrow\\) & & 0.5186 & 0.5325 & 0.4995 & 0.5180 & 0.9882 & **0.9984** & & 0.4961 & 0.5283 & 0.4801 & 0.5365 & 0.9865 & **0.9941** \\\\ LPIPS \\(\\downarrow\\) & & 0.6598 & 0.6626 & 0.6922 & 0.6362 & 0.0218 & **0.0017** & & 0.6865 & 0.6593 & 0.7248 & 0.7297 & **0.0227** & **0.0071** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Quantitative comparison of GI-NAS to state-of-the-art gradient inversion methods on CIFAR-10 (\\(32\\times 32\\)) with larger batch sizes when \\(B>4\\).
Gaussian Noise [31]. This can be attributed to the fact that the gaussian noise have severely damaged the gradients and the information carried inside the gradients is no longer suitable for gradient inversion. However, GGL can still generate natural images that possess some semantic features by the pre-trained GAN and obtain not bad performance even though the generated images are quite different from the original ones. This is also consistent with the defense auditing results on ImageNet that have been presented in previous part of this paper.
### More Qualitative Results
We provide more qualitative results in Figure 7 and Figure 8. We observe that our reconstructed images are the closest to the original images, with less blemishes or stains than the results of GION. This proves that our adaptive network architecture search indeed helps the attackers obtain better implicit architectural priors and thus better remove the local noises.
### Attacks on More FL Global Models
We show the performance of different methods when attacking more FL global models (e.g., LeNet-Zhu [11], ResNet-50 [74]) in Table 7. We note that GI-NAS realizes the best reconstruction results in all the tested cases. These results further demonstrate the reliability and generalizability of GI-NAS.
We also find that the attack results on different FL global models are distinctively dissimilar even when applying the same gradient inversion method. This implies that future study may have a deeper look into the correlations between the gradient inversion robustness and the architecture of FL global model, and thus design more securing defense strategies from the aspect of network architecture.
## Appendix C More Implementation Details
The learning rate \\(\\eta\\) in (7) is set as \\(1\\times 10^{-3}\\). We utilize the signed gradient descent and adopt Adam optimizer [79] when updating the parameters of \\(G_{opt}\\). We choose the negative cosine similarity function as the distance metric \\(\\mathcal{D}(\\cdot,\\cdot)\\) when calculating the gradient matching loss \\(\\mathcal{L}_{grad}(\\cdot)\\) in (6). We conduct all the experiments on NVIDIA GeForce RTX 3090 GPUs for small batch sizes and on A100 GPUs for large batch sizes.
\\begin{table}
\\begin{tabular}{c c c c c c c c c c c c c c} \\hline \\hline Metric & Defense & IG & GI & GGL & GIAS & GION & **GI-NAS** & Defense & IG & GI & GGL & GIAS & GION & **GI-NAS** \\\\ \\hline PSNR \\(\\uparrow\\) & 8.8332 & 8.2797 & 10.3947 & 9.8146 & 9.7658 & **10.5192** & & & 9.6183 & 10.9486 & 10.4323 & 12.1328 & 30.3518 & **39.8739** \\\\ SSIM \\(\\uparrow\\) & Gaussian & 0.1177 & 0.0976 & **10.816** & 0.1301 & 0.2080 & 0.0375 & Gradient & 1.9592 & 0.2168 & 0.1862 & 0.3272 & 0.9920 & **0.9987** \\\\ FSIM \\(\\uparrow\\) & Noise & 0.4844 & 0.8060 & 0.5123 & 0.4843 & 0.9545 & **0.6300** & Sparsification & 0.5151 & 0.5329 & 0.5223 & 0.6015 & 0.9951 & **0.9987** \\\\ LPIPS \\(\\downarrow\\) & 0.7258 & 0.7077 & **0.6255** & 0.7129 & 0.6752 & 0.6740 & & & 0.6895 & 0.6855 & 0.6298 & 0.5582 & 0.0047 & **0.0014** \\\\ \\hline PSNR \\(\\uparrow\\) & 16.6203 & 11.5888 & 10.1373 & 17.9859 & 32.4744 & **36.2490** & & & 9.4103 & 10.2939 & 10.4217 & 15.5825 & 31.1003 & **34.1804** \\\\ SSIM \\(\\uparrow\\) & Gradient & 0.5952 & 0.2851 & 0.1760 & 0.6703 & 0.9933 & **0.9985** & & & 1.4041 & 0.1707 & 0.1889 & 0.5405 & 0.9900 & **0.9973** \\\\ FSIM \\(\\uparrow\\) & Clipping & 0.7464 & 0.4976 & 0.5070 & 0.7751 & 0.9966 & **0.9992** & Soteria & 0.5086 & 0.5086 & 0.5144 & 0.7133 & 0.9952 & **0.9986** \\\\ LPIPS \\(\\downarrow\\) & 0.3811 & 0.6413 & 0.6205 & 0.3054 & 0.0029 & **0.0008** & & & 0.7008 & 0.6940 & 0.6335 & 0.3748 & 0.0038 & **0.0014** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 6: Quantitative comparison of GI-NAS to state-of-the-art gradient inversion methods on CIFAR-10 (\\(32\\times 32\\)) under various defense strategies with the default batch size \\(B=4\\).
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multirow{2}{*}{Method} & \\multicolumn{3}{c}{CIFAR-10} & \\multicolumn{3}{c}{ImageNet} \\\\ \\cline{2-5} & LeNet-Zhu & ResNet-50 & ConvNet-32 & LeNet-Zhu & ResNet-50 & ConvNet-32 \\\\ \\hline IG & 11.9297 & 10.9667 & 12.3516 & 7.7175 & 10.9113 & 10.9081 \\\\ GI & 9.8537 & 10.8969 & 12.3560 & 6.9530 & 10.8914 & 10.8886 \\\\ GGL & 12.0546 & 9.3792 & 9.3348 & 11.8195 & 9.5354 & 10.4048 \\\\ GIAS & 24.7742 & 7.7656 & 8.0951 & 16.5229 & 11.0040 & 8.6696 \\\\ GION & 27.8141 & 28.3125 & 11.9893 & 19.2478 & 9.2080 & 23.9602 \\\\
**GI-NAS** & **30.1388** & **40.3566** & **13.3505** & **21.1847** & **11.2320** & **25.7824** \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 7: PSNR results of GI-NAS and state-of-the-art gradient inversion methods when attacking various FL global models on CIFAR-10 (\\(32\\times 32\\)) and ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\).
Figure 7: More qualitative results of different methods on ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\).
Figure 8: More detailed qualitative comparisons between GION and GI-NAS on ImageNet (\\(256\\times 256\\)) with the default batch size \\(B=4\\). Although GION also generates images that relatively resemble the original ones, we still easily observe many speckles or smudges when comparing it with GI-NAS.
## Appendix D Limitations and Future Directions
In this paper, GI-NAS utilizes the initial gradient matching loss as the training-free search metric, which is intuitive but effective. Although we have empirically demonstrated that the initial loss is highly related to the final performance in Figure 6, it is still of great significance to further prove the effectiveness of this search metric with rigorous theoretical analysis. In the future work, we hope to provide more insightful theoretical results with regard to this search metric from various perspectives, such as the frequency spectrum [80, 73] or the implicit neural architectural priors [21, 81].
## Appendix E Societal Impacts
We hope that the remarkable attack performance improvements of GI-NAS over existing methods may help raise the public awareness of such privacy threats, as the sensitive data would be more likely to be revealed or even abused. Moreover, we hope that the idea of this paper may shed new light on the gradient inversion community and facilitate the research in this field. | Gradient Inversion Attacks invert the transmitted gradients in Federated Learning (FL) systems to reconstruct the sensitive data of local clients and have raised considerable privacy concerns. A majority of gradient inversion methods rely heavily on explicit prior knowledge (e.g., a well pre-trained generative model), which is often unavailable in realistic scenarios. To alleviate this issue, researchers have proposed to leverage the implicit prior knowledge of an over-parameterized network. However, they only utilize a fixed neural architecture for all the attack settings. This would hinder the adaptive use of implicit architectural priors and consequently limit the generalizability. In this paper, we further exploit such implicit prior knowledge by proposing **G**radient **I**nversion via **N**eural **A**rchitecture **S**earch (GI-NAS), which adaptively searches the network and captures the implicit priors behind neural architectures. Extensive experiments verify that our proposed GI-NAS can achieve superior attack performance compared to state-of-the-art gradient inversion methods, even under more practical settings with high-resolution images, large-sized batches, and advanced defense strategies. | Summarize the following text. | 221 |
# 3D LiDAR and Stereo Fusion using Stereo Matching Network with Conditional Cost Volume Normalization
Tsun-Hsuan Wang, Hou-Ning Hu, Chieh Hubert Lin, Yi-Hsuan Tsai, Wei-Chen Chiu, Min Sun
## I Introduction
Accurate 3D perception has long been desired for its vital role in numerous robotics and computer vision tasks, such as autonomous driving, localization and mapping, path planning, and 3D reconstruction. Various techniques have been proposed for depth estimation, ranging from active sensors (e.g., RGB-D cameras and 3D LiDAR scanners) to passive ones (e.g., stereo cameras). Each of these sensors has its own pros and cons, and none of them performs well in all practical scenarios. For instance, RGB-D sensors are confined to short-range depth acquisition, so 3D LiDAR is a common alternative in challenging outdoor environments. However, 3D LiDARs are much more expensive and provide only sparse 3D measurements. In contrast, a stereo camera can obtain a denser depth map through stereo matching algorithms, but it typically fails to produce reliable matches in regions with repetitive patterns, homogeneous appearance, or large illumination changes.
Thanks to the complementary characteristics of different sensors, several works [1][2] have studied how to fuse multiple modalities in order to provide more accurate and denser depth estimation. In this paper, we consider the fusion of a passive stereo camera and an active 3D LiDAR sensor, which is a practical and popular choice. Existing works along this research direction mainly investigate the output-level combination of the dense depth from stereo matching with the sparse measurements from 3D LiDAR, so the rich information provided in stereo images is not well utilized in the fusion procedure. In order to address this issue, we study design choices for more closely integrating the 3D LiDAR information into the process of stereo matching (illustrated in Fig. 1). The motivation that drives us in this direction is the observation that typical stereo matching algorithms usually suffer from ambiguous pixel correspondences across stereo pairs, and 3D LiDAR depth points can help reduce the search space of matching and resolve these ambiguities.
As depth points from 3D LiDAR sensors are sparse, it is not straightforward to simply treat them as additional features attached to each pixel location of a stereo pair during stereo matching. Instead, we focus on using the sparse points to regularize higher-level feature representations in deep learning-based stereo matching. Recent state-of-the-art deep stereo matching models are composed of two main components: matching cost computation [3][4] and cost volume regularization [5][6][7][8], where the former extracts the deep representation of image patches and the latter builds up the search space to aggregate all potential matches across stereo images, with further regularization (e.g., 3D CNN) for predicting the final depth estimate.
Being aligned with these two components, we extend the stereo matching network with two techniques: (1) **Input Fusion**, which incorporates the geometric information from sparse LiDAR depth with the RGB images for learning joint feature representations, and (2) **CCVNorm** (**C**onditional **C**ost **V**olume **N**ormalization), which adaptively regularizes cost volume optimization in dependence on LiDAR measurements. It is worth noting that our proposed techniques have little dependency on particular network architectures and only rely on a commonly-utilized cost volume component, thus offering the flexibility to be adapted into different models. Extensive experiments are conducted on the KITTI Stereo 2015 Dataset [9] and the KITTI Depth Completion Dataset [10] to evaluate the effectiveness of the proposed method. In addition, we perform an ablation study on different variants of our approach in terms of performance, model size and computation time. Finally, we analyze how our method exploits the additional sparse sensory inputs and provide qualitative comparisons with other fusion schemes to further highlight the strengths and merits of our method.

Fig. 1: Illustration of our method for 3D LiDAR and stereo fusion. The high-level concept of the stereo matching pipeline involves 2D feature extraction from the stereo pair, obtaining pixel correspondences, and finally disparity computation. In this paper, we present (1) Input Fusion and (2) Conditional Cost Volume Normalization, which are closely integrated with stereo matching networks. By leveraging the complementary nature of LiDAR and stereo modalities, our model produces high-precision disparity estimation.
## II Related Works
**Stereo Matching.** Stereo matching has been a fundamental problem in computer vision. In general, a typical stereo matching algorithm can be summarized into a four-stage pipeline [11], consisting of _matching cost computation_, _cost support aggregation_, _cost volume regularization_, and _disparity refinement_. Even as deep learning has been introduced to stereo matching in recent years, bringing a significant leap in depth estimation performance, this design paradigm is still widely utilized. For instance, [3] and [4] propose to learn a feature representation for matching cost computation using a deep Siamese network, and then adopt the classical semi-global matching (SGM) [12] to refine the disparity map. [13] and [5] further formulate the entire stereo matching pipeline as an end-to-end network, where cost volume aggregation and regularization are modelled jointly by 3D convolutions. Moreover, [6] and [7] propose several network designs to better exploit multi-scale and context information. Built upon the powerful learning capacity of deep models, this paper aims to integrate LiDAR information into the stereo matching procedure for a more efficient scheme of fusion.
**RGB Imagery and LiDAR Fusion.** Sensor fusion of RGB imagery and LiDAR data has gained increasing attention by virtue of its practicality and performance for depth perception. Prior works explore two different settings: LiDAR fused with a monocular image, or with stereo ones. As depth estimation from a single image is typically based on a regression from pixels, which is inherently unreliable and ambiguous, most recent monocular-based works aim to complete the sparse depth map obtained by the LiDAR sensor with the help of rich information from RGB images [14][15][16][17][18][19], or to refine the depth regression with LiDAR data as guidance [20][21].
On the other hand, since the stereo camera relies on geometric correspondences across images of different viewing angles, its depth estimates are less ambiguous in terms of the absolute distance between objects in the scene and can be well aligned with the scale of 3D LiDAR measurements. This property makes the stereo camera a practical choice to be fused with 3D LiDAR data in robotic applications, where the complementary characteristics of passive (stereo) and active (LiDAR) depth sensors are better utilized [22][23][24]. For instance, Maddern _et al._ [25] propose a probabilistic framework for fusing LiDAR data with stereo images to generate both depth and uncertainty estimates. With the power of deep learning, Park _et al._ [1] utilize a convolutional neural network (CNN) to incorporate sparse LiDAR depth into the estimation from SGM [12]. However, we argue that sensor fusion applied directly to the depth outputs cannot resolve the ambiguous correspondences arising during stereo matching. Therefore, in this paper we encode sparse LiDAR depth at earlier stages of stereo matching, i.e., matching cost computation and cost regularization, based on our proposed **CCVNorm** and **Input Fusion** techniques.
**Conditional Batch Normalization.** While the Batch Normalization layer improves network training by normalizing neural activations according to the statistics of each mini-batch, the Conditional Batch Normalization (CBN) operation instead learns to predict the normalization parameters (i.e., a feature-wise affine transformation) in dependence on some conditional input. CBN has shown its generality in various applications for coordinating different sources of information into joint learning. For instance, [26] and [27] utilize CBN to modulate imaging features by a linguistic embedding and successfully prove its efficacy for visual question answering. Perez _et al._ [28] further generalize the CBN idea and point out its connections to other conditioning mechanisms, such as concatenation [29], gating features [30], and hypernetworks [31]. Lin _et al._ [32] introduce CBN to the task of generating patches with spatial coordinates as conditions, which shares a similar concept of modulating features by spatially-related information. In our proposed fusion of stereo camera and LiDAR sensor, we adopt the CBN mechanism to integrate LiDAR data into the cost volume regularization step of the stereo matching framework, not only because of its effectiveness but also because of the clear motivation of reducing the search space of matching for more reliable disparity estimation.
## III Method
As motivated above, we propose to fuse 3D LiDAR data into a stereo matching network by using two techniques: Input Fusion and CCVNorm. In the following, we will first describe the baseline stereo matching network, and then sequentially provide the details of our proposed techniques. Finally, we introduce a hierarchical extension of CCVNorm which is more efficient in terms of runtime and memory consumption. The overview of our proposed method is illustrated in Fig. 2.
### _Preliminaries of Stereo Matching Network_
The end-to-end differentiable stereo matching network used in our proposed method, as shown in the bottom part of Fig. 2, is based on GC-Net [5] and is composed of four primary components in line with the typical stereo matching pipeline [11]. First, deep features are extracted from a rectified left-right stereo pair in order to compute the cost of stereo matching.
The representation with encoded context information acts as a similarity measure that is more robust than simple photometric appearance, and thus benefits the estimation of pixel matches across stereo images. A cost volume is then constructed by aggregating the deep features extracted from the left image with their corresponding ones from the right image across each disparity level, where the cost volume is 4-dimensional of size \(C\times H\times W\times D\) (i.e., _feature size \(\times\) height \(\times\) width \(\times\) disparity_). In detail, the cost volume includes all potential matches across stereo images and hence serves as the search space of matching. Afterwards, a sequence of 3D convolutional operations (3D-CNN) is applied for cost volume regularization, and the final disparity estimate is obtained by regression over the output volume of the 3D-CNN along the \(D\) dimension.
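For concreteness, the following is a minimal PyTorch sketch of these two cost-volume-related steps (concatenation-based volume construction and soft-argmin disparity regression), under our own naming and shape conventions rather than any released implementation:

```python
import torch
import torch.nn.functional as F

def build_cost_volume(feat_l, feat_r, max_disp):
    """Concatenate left features with right features shifted by each
    candidate disparity: two (B, C, H, W) maps -> one (B, 2C, D, H, W) volume."""
    b, c, h, w = feat_l.shape
    volume = feat_l.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d > 0:
            volume[:, :c, d, :, d:] = feat_l[:, :, :, d:]
            volume[:, c:, d, :, d:] = feat_r[:, :, :, :-d]
        else:
            volume[:, :c, d] = feat_l
            volume[:, c:, d] = feat_r
    return volume

def soft_argmin(cost):
    """Differentiable disparity regression over a regularized cost volume
    of shape (B, D, H, W): lower cost means higher matching probability."""
    prob = F.softmax(-cost, dim=1)
    disp = torch.arange(cost.size(1), device=cost.device, dtype=cost.dtype)
    return (prob * disp.view(1, -1, 1, 1)).sum(dim=1)  # (B, H, W)
```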
### _Input Fusion_
In the cost computation stage of the stereo matching network, both images of a stereo pair are passed through layers of convolutions to extract features. In order to enrich the representation by jointly reasoning on appearance and geometry information from RGB images and LiDAR data respectively, we propose Input Fusion, which simply concatenates the stereo images with their corresponding sparse LiDAR depth maps. Different from [14], which explored a similar idea, in our stereo-LiDAR setting we form the two sparse LiDAR maps corresponding to the stereo images by reprojecting the LiDAR sweep to both the left and right image coordinates, converting depth values into disparity ones via triangulation; a minimal sketch of this step is given below.
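Assuming the LiDAR points have already been reprojected to each camera view, this step reduces to a depth-to-disparity conversion followed by channel-wise concatenation; a minimal sketch (function and tensor names are illustrative):

```python
import torch

def depth_to_disparity(depth, focal, baseline):
    """Triangulation: disparity = f * B / z; invalid (zero) depths stay zero."""
    disp = torch.zeros_like(depth)
    valid = depth > 0
    disp[valid] = focal * baseline / depth[valid]
    return disp

def input_fusion(img, sparse_depth, focal, baseline):
    """Concatenate an RGB image (B, 3, H, W) with its reprojected sparse
    LiDAR map (B, 1, H, W), converted to disparity, along the channel axis."""
    sparse_disp = depth_to_disparity(sparse_depth, focal, baseline)
    return torch.cat([img, sparse_disp], dim=1)  # (B, 4, H, W)
```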
### _Conditional Cost Volume Normalization (CCVNorm)_
In addition to Input Fusion, we propose to incorporate information from sparse LiDAR depth points into the cost regularization step (i.e., 3D-CNN) of the stereo matching network, learning to reduce the search space of matching and resolve ambiguities. Inspired by Conditional Batch Normalization (CBN) [27][26], we propose CCVNorm (**C**onditional **C**ost **V**olume **N**ormalization) to encode the sparse LiDAR information \(L^{s}\) into the features of the 4D cost volume \(F\) of size \(C\times H\times W\times D\). Given a mini-batch \(\mathcal{B}=\{F_{i,\cdot,\cdot,\cdot,\cdot}\}_{i=1}^{N}\) composed of \(N\) examples, 3D Batch Normalization (BN) is defined at training time as follows:
\\[F_{i,c,h,w,d}^{BN}=\\gamma_{c}\\frac{F_{i,c,h,w,d}-\\mathbb{E}_{\\mathcal{B}}[F_{ \\cdot,c,\\cdot,\\cdot,\\cdot}]}{\\sqrt{Var_{\\mathcal{B}}[F_{\\cdot,c,\\cdot,\\cdot, \\cdot}]}+\\epsilon}+\\beta_{c} \\tag{1}\\]
where \\(\\epsilon\\) is a small constant for numerical stability and \\(\\{\\gamma_{c},\\beta_{c}\\}\\) are learnable BN parameters. When it comes to Conditional Batch Normalization, the new BN parameters \\(\\{\\gamma_{i,c},\\beta_{i,c}\\}\\) are defined as functions of conditional information \\(L^{s}_{i}\\), for modulating the feature maps of cost volume in dependence on the given LiDAR data:
\\[\\gamma_{i,c}=g_{c}(L^{s}_{i}),\\hskip 28.452756pt\\beta_{i,c}=h_{c}(L^{s}_{i}) \\tag{2}\\]
However, directly applying typical CBN to the 3D-CNN in stereo matching networks could be problematic for a few reasons: (1) Different from previous works [27][26], the conditional input in our setting is a sparse map \(L^{s}\) with varying values across pixels, which implies that normalization parameters should be computed pixel-wise; (2) an alternative strategy is required to handle the void information contained in the sparse map \(L^{s}\); (3) a valid value in \(L^{s}_{h,w}\) should contribute differently to each disparity level of the cost volume.

Fig. 2: Overview of our 3D LiDAR and stereo fusion framework. We introduce (1) **Input Fusion**, which incorporates the geometric information from sparse LiDAR depth with the RGB images as input to the Cost Computation phase to learn joint feature representations, and (2) **CCVNorm**, which replaces the batch normalization (BN) layer and modulates the cost volume features \(F\) conditioned on LiDAR data in the Cost Regularization phase of the stereo matching network. With the proposed two techniques, the Disparity Computation phase yields disparity estimates of high precision.
Therefore, we introduce CCVNorm (as shown in bottom-left of Fig. 3) which better coordinates the 3D LiDAR information with the nature of cost volume to tackle the aforementioned issues:
\[\begin{split} F^{CCVNorm}_{i,c,h,w,d}&=\gamma_{i,c,h,w,d}\frac{F_{i,c,h,w,d}-\mathbb{E}_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot,\cdot}]}{\sqrt{Var_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot,\cdot}]+\epsilon}}+\beta_{i,c,h,w,d}\\ \gamma_{i,c,h,w,d}&=\begin{cases}g_{c,d}(L^{s}_{i,h,w}),&\text{if }L^{s}_{i,h,w}\text{ is valid}\\ \overline{g}_{c,d},&\text{otherwise}\end{cases}\\ \beta_{i,c,h,w,d}&=\begin{cases}h_{c,d}(L^{s}_{i,h,w}),&\text{if }L^{s}_{i,h,w}\text{ is valid}\\ \overline{h}_{c,d},&\text{otherwise}\end{cases}\end{split} \tag{3}\]
Intuitively, given a LiDAR point \(L^{s}_{h,w}\) with a valid value, the representation (i.e., \(F_{c,h,w,d}\)) of its corresponding pixel in the cost volume under a certain disparity level \(d\) is enhanced/suppressed via the conditional modulation when the depth value of \(L^{s}_{h,w}\) is consistent/inconsistent with \(d\). In contrast, for LiDAR points with invalid values, the regularization of the cost volume degenerates to an unconditional batch normalization, and the same modulation parameters \(\{\overline{g}_{c,d},\overline{h}_{c,d}\}\) are applied to them. We experiment with the following two choices for modelling the functions \(g_{c,d}\) and \(h_{c,d}\) (a code sketch of the categorical variant follows the two descriptions below):
**Categorical CCVNorm**: a \\(\\hat{D}\\)-entry lookup table with each element as a \\(D\\times C\\) vector is constructed to map LiDAR values into normalization parameters \\(\\{\\gamma,\\beta\\}\\) of different feature channels and disparity levels, where the LiDAR depth values are discretized here into \\(\\hat{D}\\) levels as entry indexes.
**Continuous CCVNorm**: a CNN is utilized to model the continuous mapping between the sparse LiDAR data \\(L^{s}\\) and the normalization parameters of \\(D\\times C\\)-channels. In our implementation, we use the first block of ResNet34 [33] to encode LiDAR data, followed by one \\(1\\times 1\\) convolution for CCVNorm in different layers respectively.
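A rough sketch of the categorical variant (our own simplification, not the released code) stores one \(D\times C\) pair of scale/shift parameters per discretized LiDAR level, plus one extra entry serving as the unconditional fallback for invalid pixels, following Eq. (3):

```python
import torch
import torch.nn as nn

class CategoricalCCVNorm(nn.Module):
    """Pixel-wise scale/shift of a cost volume (B, C, D, H, W), looked up
    from discretized LiDAR disparity; invalid pixels use default entries."""
    def __init__(self, channels, max_disp, num_bins, eps=1e-5):
        super().__init__()
        # num_bins conditional entries + 1 unconditional entry (index num_bins)
        self.gamma = nn.Parameter(torch.ones(num_bins + 1, max_disp, channels))
        self.beta = nn.Parameter(torch.zeros(num_bins + 1, max_disp, channels))
        self.max_disp, self.num_bins, self.eps = max_disp, num_bins, eps

    def forward(self, cost, lidar_disp):
        # cost: (B, C, D, H, W); lidar_disp: (B, H, W) in pixels, 0 = invalid
        mean = cost.mean(dim=(0, 2, 3, 4), keepdim=True)
        var = cost.var(dim=(0, 2, 3, 4), keepdim=True)
        normed = (cost - mean) / torch.sqrt(var + self.eps)
        bins = (lidar_disp / self.max_disp * self.num_bins).long()
        bins = bins.clamp(0, self.num_bins - 1)
        bins = torch.where(lidar_disp > 0, bins,
                           torch.full_like(bins, self.num_bins))
        gamma = self.gamma[bins].permute(0, 4, 3, 1, 2)  # -> (B, C, D, H, W)
        beta = self.beta[bins].permute(0, 4, 3, 1, 2)
        return gamma * normed + beta
```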
### _Hierarchical Extension_
We observe that both Categorical and Continuous CCVNorm require a huge number of parameters. For each normalization layer, the Categorical version demands \(\mathcal{O}(\hat{D}DC)\) parameters to build up the lookup table, while the CNN for the Continuous one needs even more for desirable performance. In order to reduce the model size for practical usage, we propose a hierarchical extension (denoted as **HierCCVNorm**, shown in the top-right of Fig. 3), which serves as an approximation of the Categorical CCVNorm with far fewer model parameters. The normalization parameters of HierCCVNorm for valid LiDAR points are computed by:
\\[\\begin{split}\\gamma_{i,c,h,w,d}&=\\phi^{g}(d)g_{c}( L^{s}_{i,h,w})+\\psi^{g}(d)\\\\ \\beta_{i,c,h,w,d}&=\\phi^{h}(d)h_{c}(L^{s}_{i,h,w}) +\\psi^{h}(d)\\end{split} \\tag{4}\\]
Basically, the mapping from LiDAR disparity to a \(D\times C\) vector in Categorical CCVNorm is decomposed into two sequential steps. Take \(\gamma\) as an example: \(g_{c}\) is first used to compute an intermediate representation (i.e., a vector of size \(C\)) conditioned on \(L^{s}_{i,h,w}\), which is then modulated by another pair of parameters \(\{\phi^{g}(d),\psi^{g}(d)\}\) to obtain the final normalization parameter \(\gamma\). Note that \(\phi^{g},\psi^{g},\phi^{h},\psi^{h}\) are lookup tables of size \(D\times C\). With this hierarchical approximation, each normalization layer only requires \(\mathcal{O}(DC)\) parameters.
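A minimal sketch of this two-step generation of \(\gamma\) (Eq. (4)); the per-bin \(C\)-vector is broadcast over all \(D\) disparity levels, so only \(\mathcal{O}(DC)\) parameters are stored per layer (the analogous module for \(\beta\) is omitted):

```python
import torch
import torch.nn as nn

class HierCCVNormGamma(nn.Module):
    """Two-step generation of gamma: a per-bin C-vector g_c(L) is modulated
    across disparity levels by phi^g(d) and psi^g(d)."""
    def __init__(self, channels, max_disp, num_bins):
        super().__init__()
        self.g = nn.Embedding(num_bins, channels)                 # g_c(L)
        self.phi = nn.Parameter(torch.ones(max_disp, channels))   # phi^g(d)
        self.psi = nn.Parameter(torch.zeros(max_disp, channels))  # psi^g(d)

    def forward(self, bins):         # bins: (B, H, W) discretized LiDAR values
        inter = self.g(bins)         # (B, H, W, C)
        gamma = self.phi * inter.unsqueeze(3) + self.psi
        return gamma                 # (B, H, W, D, C)
```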
## IV Experimental Results
We evaluate the proposed method on two KITTI datasets [9][10] and show that our framework achieves favorable performance in comparison with several baselines. In addition, we conduct an extensive series of ablation studies to demonstrate the effectiveness of the design choices in our method. Moreover, we investigate the robustness of our approach with respect to the density of LiDAR data, and we benchmark runtime and memory consumption. The code and models will be made publicly available.
### _Experimental Settings_
**KITTI Stereo 2015 Dataset.** The KITTI Stereo dataset [9] is commonly used for evaluating stereo matching algorithms. It contains \(200\) stereo pairs for each of the training and testing sets, with images of size \(1242\times 375\). As ground truth is only provided for the training set, we follow the same setting as previous works [25][1] and evaluate our model on the training set with LiDAR data. For model training, since only \(142\) pairs of the training set are associated with LiDAR scans, covering \(29\) scenes of the KITTI Completion dataset [10], we train our network on the subset of the Completion dataset with images of non-overlapping scenes (i.e., \(33\)k image pairs remain for training).

Fig. 3: Conditional Cost Volume Normalization. At each pixel (red dashed bounding box), based on the discretized disparity value of the corresponding LiDAR data, categorical CCVNorm selects the modulation parameters \(\gamma\) from a \(\hat{D}\)-entry lookup table, while LiDAR points with invalid values are handled separately with an additional set of parameters (in gray). On the other hand, HierCCVNorm produces \(\gamma\) by a hierarchical modulation of 2 steps, with modulation parameters \(g_{c}(\cdot)\) and \(\{\phi^{g},\psi^{g}\}\) respectively (cf. Eq. 4).
**KITTI Depth Completion Dataset.** The KITTI Depth Completion dataset [10] collects semi-dense LiDAR depth ground truth by aggregating \(11\) consecutive LiDAR sweeps, with roughly \(30\%\) of pixels annotated. The dataset consists of \(43\)k image pairs for training, \(3\)k for validation, and \(1\)k for testing. Since no ground truth is available for the testing set, we split the validation set into \(1\)k pairs for validation and another \(1\)k pairs for testing, containing scenes that do not overlap with the training set. We also note that the full-resolution images (of size \(1216\times 352\)) of this dataset are bottom-cropped to \(1216\times 256\) because there is no ground truth at the top.
**Evaluation Metric.** We adopt standard metrics in stereo matching and depth estimation respectively for the two datasets: (1) On KITTI Stereo [9], we follow its development kit to compute the percentage of disparity error that is greater than 1, 2 and 3 pixel(s) away from the ground truth; (2) On KITTI Depth Completion [10], Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and their inverse ones (i.e., iRMSE and iMAE) are used.
**Implementation Details.** Our implementation is based on PyTorch and follows the training setting of GC-Net [5], with an \(\mathcal{L}1\) loss for disparity estimation. The optimizer is RMSProp [34] with a constant learning rate of \(1\times 10^{-3}\). The model is trained with a batch size of \(1\) on randomly-cropped \(512\times 256\) images for \(170\)k iterations. The maximum disparity is set to \(192\). We apply CCVNorm to the 21st, 24th, 27th, 30th, 33rd, 34th and 35th layers of GC-Net. We note that our full model refers to the setting with both Input Fusion and HierCCVNorm, unless otherwise specified.
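A schematic training loop consistent with this setup might look as follows; the model and data loader are placeholders, and masking out unlabeled ground-truth pixels is our assumption (standard practice for KITTI's semi-dense labels):

```python
import torch
import torch.nn.functional as F

def train(model, loader, num_iters=170_000, lr=1e-3):
    """L1 disparity loss and RMSProp with a constant learning rate,
    following the stated training configuration."""
    optimizer = torch.optim.RMSprop(model.parameters(), lr=lr)
    step = 0
    while step < num_iters:
        for img_l, img_r, lidar_l, lidar_r, gt_disp in loader:
            pred = model(img_l, img_r, lidar_l, lidar_r)
            mask = gt_disp > 0  # supervise only labeled pixels
            loss = F.l1_loss(pred[mask], gt_disp[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step == num_iters:
                break
    return model
```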
### _Evaluation on the KITTI Datasets_
For the KITTI Stereo 2015 dataset, we compare our proposed method to several stereo matching and LiDAR fusion baselines in Table I. We draw a few observations: (1) Without using any LiDAR data, deep learning-based stereo matching algorithms (i.e., MC-CNN [3] and GC-Net [5]) perform better than the conventional one (i.e., SGM [12]) by a large margin; (2) GC-Net outperforms MC-CNN since its entire stereo matching process is formulated in an end-to-end learning framework, and it even performs competitively against two baselines that fuse LiDAR data in the input or output spaces (i.e., Probabilistic Fusion [25] and Park _et al._ [1], respectively). This observation shows the importance of using an end-to-end trainable stereo matching network as well as designing a proper fusion scheme; (3) Our full model learns to leverage the LiDAR information well in both the matching cost computation and cost regularization stages of the stereo matching network and obtains the best disparity estimation accuracy among all baselines.
In addition to disparity estimation, we compare our model with both monocular depth completion approaches and stereo-LiDAR fusion methods on the KITTI Completion dataset in Table II. From the results of Park _et al._, we observe that even with the additional information from stereo pairs, performance is not guaranteed to surpass state-of-the-art monocular depth completion methods (i.e., NConv-CNN [18], Ma _et al._ [15], and FusionNet [19]) if the stereo images and LiDAR data are not properly integrated. On the contrary, our method, with the careful designs of the proposed Input Fusion and HierCCVNorm, outperforms both the monocular and stereo fusion baselines. It is also worth noting that our model shows a significant boost on the metrics related to inverse depth (i.e., iRMSE and iMAE), since it is trained to predict disparity. In particular, we emphasize the importance of the inverse depth metrics, as they demand higher accuracy in closer regions, which is especially relevant for robotic tasks.
### _Ablation Study_
In Table III, we show the effectiveness of the proposed components step by step. Two additional fusion baselines are introduced for a more thorough comparison: **Feature Concat** and **Naive CBN**. Feature Concat uses a ResNet34 [33] to encode LiDAR data, as utilized in other depth completion methods [14][15], and concatenates the LiDAR feature with the cost volume feature. Naive CBN follows a straightforward CBN design that modulates the cost volume feature conditioned on valid LiDAR depth values.
**Overall Results.** First, we find that Input Fusion significantly improves performance compared to GC-Net. This highlights the significance of incorporating geometric information in the early matching cost computation (MCC) stage, as mentioned in Sec. III-B. Next, in the cost regularization (CR) stage, we compare Feature Concat, Naive CBN, and different variants of our method. All our CCVNorm variants outperform the other mechanisms for fusing LiDAR information into the cost volume of stereo matching networks. This demonstrates the benefit of the proposed CCVNorm scheme, which serves as a regularization step on feature fusion for facilitating stereo matching (Sec. III-C). Finally, our full models with Input Fusion and categorical CCVNorm (with and without the hierarchical extension) produce the best results in the ablation.

TABLE II: Evaluation on the KITTI Depth Completion Dataset.

| Data | Method | iRMSE ↓ | iMAE ↓ | RMSE ↓ | MAE ↓ |
| --- | --- | --- | --- | --- | --- |
| Mono | NConv-CNN [18] | 2.60 | 1.03 | 0.8299 | 0.2333 |
| Mono | Ma _et al._ [15] | 2.80 | 1.21 | 0.8147 | 0.2499 |
| Mono | FusionNet [19] | 2.19 | 0.93 | 0.7728 | **0.2150** |
| Stereo | Park _et al._ [1] | 3.39 | 1.38 | 2.0212 | 0.5005 |
| Stereo | Ours Full | **1.40** | **0.81** | **0.7493** | 0.2525 |

TABLE I: Evaluation on the KITTI Stereo 2015 Dataset.

| Method | Sparsity | > 3 px ↓ | > 2 px ↓ | > 1 px ↓ |
| --- | --- | --- | --- | --- |
| SGM [12] | None | 20.7 | - | - |
| MC-CNN [3] | None | 6.34 | - | - |
| GC-Net [5] | None | 4.24 | 5.82 | 9.97 |
| Prob. Fusion [25] | LiDAR data | 5.91 | - | - |
| Park _et al._ [1] | LiDAR data | 4.84 | - | - |
| Ours Full | LiDAR data | **3.35** | **4.38** | **6.79** |
**Categorical vs. Continuous.** In addition, we empirically find that categorical CCVNorm serves as a better conditioning strategy than the continuous variant. Interestingly, the categorical variant performs competitively with the continuous one in most metrics (for disparity) except for the 1-px error. This is not surprising, since the conditioning label for categorical CCVNorm is discretized LiDAR data, which may propagate quantization error. While the continuous variant performs better in the 1-px error, it does not necessarily yield better results in sub-pixel errors (i.e., disparity RMSE and MAE), since the cost volume is naturally a discretization of the disparity space, making sub-pixel predictions harder for the continuous variant to handle [5].
**Benefits of Hierarchical CCVNorm.** In Table III, our hierarchical extension approximates the original CCVNorm and achieves comparable performance. We further show the computational time and model size of various conditioning mechanisms in Fig. 4, which highlights the advantages of our hierarchical CCVNorm. In both sub-figures, points closer to the bottom-left corner indicate more accurate and efficient models. The figure shows that our hierarchical CCVNorm achieves a good performance boost with only a small overhead in both computational time and model parameters compared to GC-Net. Note that _Feature Concat_ adopts a standard strategy for encoding LiDAR data in depth completion methods [14][15], introducing many more parameters. Overall, our hierarchical extension can be viewed as an approximation of the original CCVNorm with a large reduction in computational time and parameters.
### _Robustness to LiDAR Density_
In Fig. 5, we study the robustness of different fusion mechanisms to changes in LiDAR data density. The value \(1.0\) on the horizontal axis of Fig. 5 indicates a full LiDAR sweep; we gradually sub-sample from it and observe how the performance of each fusion approach varies. The results highlight that both variants of CCVNorm (i.e., Categorical and Hierarchical CCVNorm) are consistently more robust across density levels in comparison with the other baselines (i.e., Feature Concat and Input Fusion only).
First, Input Fusion is highly sensitive to the density of the sparse depth because it treats valid and invalid pixels equally and sets invalid values to a fixed constant, hence introducing numerical instability during network training/inference. Second, comparing our two variants with Feature Concat, neither method suffers a severe performance drop at high LiDAR density (\(0.7\sim 1.0\)). However, at low density (\(0.1\sim 0.6\)), Feature Concat drastically degrades while our variants remain robust to the sub-sampling. This robustness results from our CCVNorm modulating pixel-wise features during the cost regularization stage and introducing additional modulation parameters for invalid pixels (see Eq. (3)). Overall, this experiment validates that our CCVNorm functions well under varying or unstable LiDAR density.

Fig. 4: Error vs. computation time and model parameters. It demonstrates that our hierarchical CCVNorm achieves comparable performance to the original CCVNorm but with much less overhead in computational time and model parameters.

Fig. 5: Robustness to LiDAR density. The value 1.0 on the horizontal axis indicates a complete LiDAR sweep and the shadow indicates the standard deviation. The figure shows that our method is more robust to LiDAR sub-sampling compared to other baselines.

TABLE III: Ablation study on the KITTI Depth Completion Dataset. "IF", "Cat", and "Cont" stand for **I**nput **F**usion and the categorical and continuous variants of CCVNorm, respectively; "MCC" stands for **M**atching **C**ost **C**omputation and "CR" for **C**ost **R**egularization. Bold indicates top-2 performance.

| Method | Stage | Disp. >3 px ↓ | Disp. >2 px ↓ | Disp. >1 px ↓ | Disp. RMSE ↓ | Disp. MAE ↓ | Depth RMSE ↓ | Depth MAE ↓ | Depth iRMSE ↓ | Depth iMAE ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GC-Net [5] | MCC | 0.2540 | 0.4303 | 1.5024 | 0.6526 | 0.4020 | 1.0314 | 0.4054 | 1.6814 | 1.0356 |
| + IF | MCC | 0.1694 | 0.3086 | 1.0405 | 0.5559 | 0.3245 | 0.7659 | 0.2613 | 1.4324 | 0.8362 |
| + FeatureConcat | CR | 0.1810 | 0.3227 | 1.1335 | 0.5946 | 0.3812 | 0.8791 | 0.3443 | 1.5318 | 0.9821 |
| + NaiveCBN | CR | 0.2446 | 0.4342 | 1.5616 | 0.6405 | 0.3915 | 1.0067 | 0.3808 | 1.6505 | 1.0087 |
| + CCVNorm (Cont) | CR | 0.1363 | 0.2532 | 1.0265 | 0.5856 | 0.3688 | 0.8661 | 0.3385 | 1.5087 | 0.9500 |
| + CCVNorm (Cat) | CR | 0.1254 | 0.2596 | 1.1348 | 0.5625 | 0.3574 | 0.8942 | 0.3425 | 1.4493 | 0.9209 |
| + HierCCVNorm (Cat) | CR | 0.1268 | 0.2592 | 1.1124 | 0.5615 | 0.3583 | 0.8898 | 0.3403 | 1.4466 | 0.9230 |
| + IF + FeatureConcat | MCC + CR | 0.1578 | 0.2958 | 1.0012 | 0.5550 | 0.3256 | 0.7622 | 0.2643 | 1.4303 | 0.8389 |
| + IF + CCVNorm (Cont) | MCC + CR | 0.1460 | 0.2657 | 0.9586 | 0.6137 | 0.3235 | 0.7727 | 0.2573 | 1.5795 | 0.8335 |
| + IF + CCVNorm (Cat) | MCC + CR | **0.1194** | **0.2406** | **0.9227** | **0.5409** | **0.3124** | **0.7325** | **0.2501** | **1.3940** | **0.8049** |
| + IF + HierCCVNorm (Cat) | MCC + CR | **0.1196** | **0.2457** | **0.9554** | **0.5420** | **0.3131** | **0.7493** | **0.2525** | **1.3968** | **0.8069** |
### _Discussions_
**Sensitivity to Sensory Inputs.** In Fig. 6, we present an example to investigate the sensitivity of different fusion mechanisms with respect to the conditional 3D LiDAR data: we manually modify a certain portion of the sparse LiDAR disparity map (indicated by the white dashed box in the third image of the top row of Fig. 6) and visualize the changes in the stereo matching outputs produced by this modification (bottom two rows of Fig. 6).
Interestingly, the "Input Fusion only" model is unaware of the modification in the LiDAR data and produces an almost identical output (before vs. after in Fig. 6). The reason is that fusion solely at the input level is likely to lose the LiDAR information through the procedure of network inference. For "Feature Concat", where the fusion is performed in the later cost regularization stage, the change starts to be visible but is not significant. On the contrary, all our CCVNorm-based variants (including combinations with Input Fusion) successfully reflect the modification of the sparse LiDAR data in the disparity estimation output. This again verifies our contribution in proposing proper mechanisms for incorporating sparse LiDAR information into dense stereo matching.
**Qualitative Results.** Fig. 7 provides an example with qualitative comparisons between several baselines and the variants of our proposed method. Our full model (i.e., Input Fusion + hierarchical CCVNorm) is able to handle scenes with complex structure by taking advantage of the complementary nature of stereo and LiDAR sensors. For instance, as indicated by the white dashed bounding box in Fig. 7, GC-Net fails to estimate disparity accurately on objects with deformed shapes (e.g., bicycles) under low illumination. In contrast, our method captures the details of the bicycles in the disparity estimate with the help of the sparse LiDAR data.
### _Computational Time_
We provide an analysis of computational time in Table IV. Except for Probabilistic Fusion [25], which is tested on a Core i7 processor and an AMD Radeon R9 295x2 GPU as reported in the original paper, all methods run on a machine with a Core i7 processor and an NVIDIA 1080Ti GPU. In general, the models based on stereo matching networks (i.e., GC-Net and ours) take longer to compute but provide a significant improvement in performance (see Table I) in comparison with conventional algorithms. While improving overall runtime via more efficient stereo matching networks is beyond the scope of this paper, we show that the overhead introduced by our Input Fusion and CCVNorm mechanisms on top of GC-Net is only 0.049 seconds, validating the efficiency of our fusion scheme.
## V Conclusions
In this paper, built upon deep learning-based stereo matching, we present two techniques for incorporating LiDAR information with stereo matching networks: (1) Input Fusion that jointly reasons about geometry information extracted from LiDAR data in the matching cost computation stage and (2) CCVNorm that conditionally modulates cost volume feature in the cost regularization stage. Furthermore, with the hierarchical extension of CCVNorm, the proposed method only brings marginal overhead to stereo matching networks in runtime and memory consumption. We demonstrate the efficacy of our method on both the KITTI Stereo and Depth Completion datasets. In addition, a series of ablation studies validate our method over different fusion strategies in terms of performance and robustness. We believe that the detailed analysis and discussions provided in this paper could become an important reference for future exploration on the fusion of stereo and LiDAR data.
## References
* [1] K. Park, S. Kim, and K. Sohn, \"High-precision depth estimation with the 3d lidar and stereo fusion,\" in _IEEE International Conference on Robotics and Automation (ICRA)_, 2018.
TABLE IV: Computational time (unit: second). Our method brings only a small overhead (0.049 seconds) compared to the baseline GC-Net.

| Method | SGM [12] | Prob. [25] | Park _et al._ [1] | GC-Net [5] | Ours |
| --- | --- | --- | --- | --- | --- |
| Time | 0.040 | 0.024 | 0.043 | 0.962 | 1.011 |
Fig. 6: Sensitivity to LiDAR data. We manually modify the sparse disparity input (indicated by the white dashed box in “Modified Sparse Disparity”) and observe the effect in disparity estimates. The results show that all our variants better reflect the modification of LiDAR data during the matching process.
* [2] S. S. Shivakumar, K. Mohta, B. Pfrommer, V. Kumar, and C. J. Taylor, \"Real time dense depth estimation by fusing stereo with sparse depth measurements,\" _ArXiv:1809.07677_, 2018.
* [3] J. Zbontar and Y. LeCun, \"Computing the stereo matching cost with a convolutional neural network,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2015.
* [4] W. Luo, A. G. Schwing, and R. Urtasun, \"Efficient deep learning for stereo matching,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016.
* [5] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, \"End-to-end learning of geometry and context for deep stereo regression,\" in _IEEE International Conference on Computer Vision (ICCV)_, 2017.
* [6] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, \"Cascade residual learning: A two-stage convolutional neural network for stereo matching,\" in _IEEE International Conference on Computer Vision Workshops (ICCV Workshops)_, 2017.
* [7] J.-R. Chang and Y.-S. Chen, \"Pyramid stereo matching network,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2018.
* [8] P.-H. Huang, K. Matzen, J. Kopf, N. Ahuja, and J.-B. Huang, \"Deepmvs: Learning multi-view stereopsis,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2018.
* [9] M. Menze and A. Geiger, \"Object scene flow for autonomous vehicles,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2015.
* [10] J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger, \"Sparsity invariant cnns,\" in _International Conference on 3D Vision (3DV)_, 2017.
* [11] D. Scharstein and R. Szeliski, \"A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,\" _International Journal of Computer Vision (IJCV)_, 2002.
* [12] H. Hirschmuller, \"Stereo processing by semiglobal matching and mutual information,\" _IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)_, 2008.
* [13] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, \"A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016.
* [14] F. Mal and S. Karaman, \"Sparse-to-dense: Depth prediction from sparse depth samples and a single image,\" in _IEEE International Conference on Robotics and Automation (ICRA)_, 2018.
* [15] F. Ma, G. V. Cavalheiro, and S. Karaman, \"Self-supervised sparse-to-dense: Self-supervised depth completion from lidar and monocular camera,\" _ArXiv:1807.00275_, 2018.
* [16] J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger, \"Sparsity invariant cnns,\" in _International Conference on 3D Vision (3DV)_, 2017.
* [17] Z. Huang, J. Fan, S. Yi, X. Wang, and H. Li, \"Hms-net: Hierarchical multi-scale sparsity-invariant network for sparse depth completion,\" _ArXiv:1808.08685_, 2018.
* [18] A. Eldesokey, M. Felsberg, and F. S. Khan, \"Confidence propagation through cnns for guided sparse depth regression,\" _ArXiv:1811.01791_, 2018.
* [19] W. Van Gansbeke, D. Neven, B. De Brabandere, and L. Van Gool, \"Sparse and noisy lidar completion with rgb guidance and uncertainty,\" _ArXiv:1902.05356_, 2019.
* [20] X. Cheng, P. Wang, and R. Yang, \"Depth estimation via affinity learned with convolutional spatial propagation network,\" in _European Conference on Computer Vision (ECCV)_, 2018.
* [21] T.-H. Wang, F.-E. Wang, J.-T. Lin, Y.-H. Tsai, W.-C. Chiu, and M. Sun, "Plug-and-play: Improve depth estimation via sparse data propagation," _ArXiv:1812.08350_, 2018.
* [22] K. Nickels, A. Castano, and C. Cianci, \"Fusion of lidar and stereo range for mobile robots,\" in _International Conference on Advanced Robotics (ICAR)_, 2003.
* [23] D. Huber, T. Kanade, _et al._, \"Integrating lidar into stereo for fast and improved disparity computation,\" in _International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT)_, 2011.
* [24] V. Gandhi, J. Cech, and R. Horaud, \"High-resolution depth maps based on tof-stereo fusion,\" in _IEEE International Conference on Robotics and Automation (ICRA)_, 2012.
* [25] W. Maddern and P. Newman, \"Real-time probabilistic fusion of sparse 3d lidar and dense stereo,\" in _IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2016.
* [26] E. Perez, H. De Vries, F. Strub, V. Dumoulin, and A. Courville, \"Learning visual reasoning without strong priors,\" _ArXiv:1707.03017_, 2017.
* [27] H. De Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville, \"Modulating early visual processing by language,\" in _Advances in Neural Information Processing Systems (NIPS)_, 2017.
* [28] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville, "FiLM: Visual reasoning with a general conditioning layer," in _AAAI Conference on Artificial Intelligence (AAAI)_, 2018.
* [29] A. Radford, L. Metz, and S. Chintala, \"Unsupervised representation learning with deep convolutional generative adversarial networks,\" _ArXiv:1511.06434_, 2015.
* [30] S. Hochreiter and J. Schmidhuber, \"Long short-term memory,\" _Neural Computation_, 1997.
* [31] D. Ha, A. Dai, and Q. V. Le, \"Hypernetworks,\" _ArXiv:1609.09106_, 2016.
* [32] C. H. Lin, C.-C. Chang, Y.-S. Chen, D.-C. Juan, W. Wei, and H.-T. Chen, \"COCO-GAN: Conditional coordinate generative adversarial network,\" 2019.
* [33] K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016.
* [34] G. Hinton, N. Srivastava, and K. Swersky, "Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent."
Fig. 7: Qualitative results. Compared to other baselines and variants, our method captures details in areas of complex structure (the white dashed bounding box) by leveraging the complementary characteristics of the LiDAR and stereo modalities.
# Does Forest Replacement Increase Water Supply in Watersheds? Analysis Through Hydrological Simulation
Ronalton Evandro Machado; Lubienska Cristina Lucas Jaquie Ribeiro; Milena Lopes
\\({}^{1}\\)Unicamp (University of Campinas) - College of Technology. Street Paschoal Marmo 1888, CEP 13484-332, Jd. Nova Italia, Limeira, SP, Brazil.
_Correspondence to:_ Ronalton Evandro Machado ([email protected])
## 1 Introduction
Knowledge of how forests affect the various aspects of water is essential to assess the role of forest cover in a watershed's hydrological regime (LIMA, 2010). Forests are often regarded as effective in stabilizing and maintaining river flow rates, and this is one of the reasons why revegetation is repeatedly recommended for watershed recovery (BACELLAR, 2005). Some of the hydrological functions usually ascribed to forests, however, such as increasing the water availability of rivers, are disputable and lack a technical and scientific basis. This remains a worldwide controversy, especially with regard to establishing policies for water conservation and the sustainable use of natural resources.
In this line of research, the scientific literature offers a large collection of data resulting from the systematic monitoring of watersheds all over the world, based on three methodologies, especially "paired basins" (Brown, 2005). Some paired-basin experiments have shown the effect of forest cover on water yield where natural vegetation was removed and/or replaced by planted forests (BOSCH and HEWLETT, 1982; BRUIJNZEEL, 1990, 2004; BUYTAERT et al., 2006).
The paired-basin technique is arguably the best methodology to evaluate the hydrological functions normally assigned to forests, and it is applicable to basins with very similar characteristics. Paired watersheds should preferably be as near to each other as possible, so as to share similar physical aspects, climate, vegetation, and land use and occupation (BEST et al., 2003). Despite the advantages of using paired micro-basins to study the impact of vegetation changes on water yield, this kind of study takes time, since a watershed's hydrological response to tree cutting or reforestation is a medium- to long-term process. It is also impossible to test other configurations of land management and use.
Another option for predicting the impact of land-use changes, e.g., vegetation replacement, on the quantity and quality of water in a watershed is the use of hydrological models. According to Sun et al. (2006), mathematical models are probably the best tools to analyze the complex non-linear relationships between the water yield of forests and major environmental factors.
The large number of existing models applied to watersheds shows the advancement of this technology. Many hydrological models simulate the quality and quantity of water flow, and each one has strengths and weaknesses that must be considered according to the user's needs and the characteristics of the study area. As an example, the Soil and Water Assessment Tool (SWAT) model allows great flexibility when configuring watersheds (PETERSON & HAMLETT, 1998). The model was developed to predict the effect of different management scenarios on the quality and quantity of water, sediment yield and pollutant loads in agricultural watersheds (SRINIVASAN & ARNOLD, 1994). SWAT analyzes the watershed divided into sub-watersheds based on relief, soil and land use, thus preserving the spatially distributed parameters of the entire watershed and homogeneous characteristics within each sub-watershed.
The SWAT model is internationally recognized as a solid interdisciplinary watershed modeling tool, as demonstrated by annual international conferences and papers submitted to scientific journals (KUWAJIMA et al., 2011). The many applications of SWAT have shown promising results, e.g., in hydrological assessments, climate change impact studies, evaluation of best management practices, pollutant load estimation, determination of the effects of land-use change, and sediment yield (SRINIVASAN & ARNOLD, 1994; ROSENTHAL et al., 1995; CHO et al., 1995; MACHADO & VETTORAZZI, 2003; MACHADO et al., 2003; KOCH et al., 2012; LESSA et al., 2014; ABBASPOUR et al., 2015; DECHMI & SKHIRI, 2013; LIU et al., 2015; ZHANG et al., 2014; ROCHA et al., 2015; LIN et al., 2015).
Given the uncertainty about the role of forests in the quantity and quality of water produced by rivers, and the possibility of creating different scenarios that are difficult to test at the watershed level, this paper's objective was first to identify "Environmentally Sensitive Areas" (ESAs) in the watershed under study and, subsequently, to simulate land-use scenarios, comparing them in terms of sediment yield and hydrological regime.
## 2 Methodology
### _Area of study_

The Pinhal River watershed is located between UTM coordinates 250,000 and 275,000 m (S) and 7,490,000 and 7,520,000 m (N) (UTM Zone 23 S, central meridian 45\({}^{\circ}\) W). It covers approximately 300 km\({}^{2}\) (Figure 1). It has a tropical highland climate (Cwa, according to the Koeppen classification), with hot and humid summers, cold and dry winters, and an average annual temperature of 25 \({}^{\circ}\)C. Average annual precipitation is approximately 1,240 mm.
Sugarcane occupies most of the watershed area (42.3%), whereas citrus crops occupy approximately 30% of the area. Much of the original forest vegetation has been destroyed in the process of land use and occupation, and what remains is scattered along watercourse banks (9%). The built-up area occupies 6.7%, located on the western side of the Pinhal River watershed. The predominant soils in the Pinhal River watershed are oxisols (72%) and cambisols (19%).
The Pinhal River is important as the source of water for Limeira, state of Sao Paulo. The watershed has suffered from environmental degradation over the past few decades, and the current situation may compromise this water source if the degradation process continues.
### The SWAT model and input data
SWAT is a distributed-parameter model that simulates different physical processes in watersheds, aiming to analyze the impacts of land-use changes on surface and subsurface runoff, sediment yield and water quality in ungauged agricultural watersheds (SRINIVASAN & ARNOLD, 1994). The model operates on a daily basis and can simulate periods of 100 years or longer to determine the effects of management changes. It has been widely applied to hydrological modelling, water resources management and water pollution issues (DOUGLAS et al., 2010).

Figure 1: Locations of the Pinhal watershed and gauging stations.
SWAT uses a command structure to propagate runoff, sediments and agrochemicals across the watershed. The model components include hydrology, climate, sediments, soil temperature, crop growth, nutrient and pesticide loading, and agricultural management (ARNOLD et al., 1998). The hydrological component of SWAT includes subroutines for surface runoff, percolation, lateral subsurface flow, return flow from the shallow aquifer, and evapotranspiration.
SWAT uses a modified formulation of the Curve Number (CN) method (USDA-SCS, 1972) to calculate surface runoff. The Curve Number method relates runoff to soil type, land use and management practices (ARNOLD et al., 1995). Sediment yield is estimated using the Modified Universal Soil Loss Equation (MUSLE) (WILLIAMS & BERNDT, 1977).
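For intuition, the classical (unmodified) Curve Number relation can be sketched as below; SWAT's actual implementation modifies this formulation, so the snippet is illustrative only:

```python
def scs_runoff(precip_mm, cn):
    """Classical SCS Curve Number daily runoff (mm) for curve number cn."""
    s = 25.4 * (1000.0 / cn - 10.0)  # potential maximum retention (mm)
    ia = 0.2 * s                     # initial abstraction
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)
```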
The model requires as input data daily precipitation, maximum and minimum air temperatures, solar radiation, wind speed and relative humidity. These data were obtained from the UNICAMP Faculty of Technology weather station, located in Limeira, state of Sao Paulo, at UTM coordinates 251,145 m (W) and 7,503,161 m (S). Rainfall data were obtained from two other rainfall stations (Figure 1). The remaining data are cartographic layers: Digital Terrain Model (DTM), land use and soil. Soil physical and hydraulic properties and crop phenological properties are stored in the model database. Table 1 summarizes the input data used in the study. Data input (layers and alphanumeric data) into SWAT is done via an appropriate interface (ARNOLD et al., 2012) developed between SWAT and the ArcGis GIS. The interface automatically divides the watershed into sub-watersheds from the DTM and then extracts the input data from the layers and Geodatabase for each sub-watershed. The interface displays the model outputs using ArcGis charts and tables. We divided the Pinhal River watershed into 25 sub-watersheds up to the runoff measuring station at UTM coordinates 266,175 m (W) and 7,496,308 m (S) (Figure 1).
### Model evaluation
During the analysis period (2012 to 2014), calibration of the model was not possible due to inconsistencies in the observed data (the measuring station was constantly submerged during the operating period of a reservoir associated with a power station).
Despite the impossibility of calibrating the model for the Pinhal watershed, we used hydrological regionalization to validate the behavior of the model (Vandewiele, 1995; Bardossy, 2007). Hydrological regionalization is a technique that allows information to be transferred between watersheds with similar characteristics in order to estimate hydrological variables of interest at sites where no data are available (Emam et al., 2016). This technique is a useful tool for water resource management, especially when applied to the most important instruments of Brazilian water resource policy: the concession of water-use rights and charging for the use of water resources (Fukunaga et al., 2015).
Table 1: Input data used in the study.

| Input data | Data description | Scale | Data source |
| --- | --- | --- | --- |
| Land use | Land-use classification: agricultural land, forest, pasture, urban and water | 1:25,000 | Coordenadoria de Planejamento Ambiental |

According to Tucci (2005), the hydrological information to be regionalized can take the form of a variable, a parameter or a function. A hydrological function represents the relationship between a hydrological variable and one or more explanatory or statistical variables, such as the flow-duration curve or the relationship between impermeable areas and housing density (Tucci, 2002). The flow-duration curve relates the flow or level of a river to the probability of flows being greater than or equal to that value, and is thus a simple but concise and widely used method to illustrate the pattern of flow variation over time (Naghettini and Pinto, 2007).
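A flow-duration curve of this kind can be computed in a few lines; the sketch below uses the common Weibull plotting position, whereas the procedure adopted in this work (described next) bins the ordered series into ten equal intervals:

```python
import numpy as np

def flow_duration_curve(flows):
    """Return exceedance probabilities P(Q >= q) and flows sorted
    decreasingly, using the Weibull plotting position m / (n + 1)."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    n = q.size
    prob = np.arange(1, n + 1) / (n + 1)
    return prob, q
```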
To construct the flow-duration curve in this work, the series of flows simulated for 2012 to 2014 was first sorted in decreasing order. This series was statistically divided into 10 equal intervals; for each interval, the number of flows was counted and the respective cumulative frequencies were calculated from the highest to the lowest interval. For comparison, the regionalized flows, according to the State Department of Water and Electric Energy (DAEE, the state entity responsible for granting water resource concessions in the State of Sao Paulo), and the simulated flows were plotted on the same graph, allowing verification of under- or overestimation by the simulated curve. Besides the visual analysis of the simulated and regionalized flow-duration curves, the Nash-Sutcliffe model efficiency coefficient (NSE; Nash and Sutcliffe, 1970) was used to validate the simulation results. The NSE (Eq. 2) was used to compare the regionalized and simulated flows at 5% intervals of the flow-duration curve's probability of occurrence. NSE can range from \(-\infty\) to 1, where 1 is the optimal value and values above 0.75 can be considered very good (Moriasi et al., 2007). NSE is calculated as in Eq. 2:
\\[NSE=1-\\frac{\\sum_{i=1}^{n}(Q_{OSSi}-Q_{SIMi})^{2}}{\\sum_{i=1}^{n}(Q_{OSSi}- \\overline{Q_{OSS}})^{2}} \\tag{2}\\]
The percent bias (PBIAS, Eq. 3) of the simulated discharge relative to the regionalized discharge was also used (Gupta et al., 1999):
\\[PBIAS[\\%]=\\left(\\frac{\\sum_{i=1}^{n}(Q_{OSSi}-Q_{SIMi})}{\\sum_{i=1}^{n}(Q_{ OBSi})}\\right)*100 \\tag{3}\\]
Where, \\(Q_{OSSi}\\) and \\(Q_{SIMi}\\) corresponds to the observed and simulated discharge, respectively, on day i (m3/s), and \\(\\overline{Q}_{OSS}\\) corresponds to the average observed discharge, in (m3/s), and n corresponds to the number of events.
### _Identification of Environmentally Sensitive Areas (ESAs)_
The concept of "Environmentally Sensitive Areas" was created in industrialized countries approximately 30 years ago in response to increasing soil and water degradation and its severity (RUBIO, 1995). Degradation has been caused by uncontrolled forest destruction, water pollution, wind and water erosion, salinization, and inappropriate management of cultivated and uncultivated soil (GOURLAY, 1998).
Environmentally Sensitive Areas (ESAs) are areas that contain natural or cultural features important for ecosystem functioning. They may be negatively impacted by human activities and are vital to the long-term maintenance of biological diversity, soil, water, or other natural resources, in the local or regional context (NDUBISI et al., 1995). An environmentally sensitive area may also be considered, in general, a specific and delimited entity with unbalanced environmental and socioeconomic factors, or one that is not sustainable for that particular environment (GOURLAY, 1998). As an example, high sensitivity may be related to land use, which in certain cases causes soil degradation: annual crops in areas where the relief is hilly, with steep slopes and shallow soils, have a high risk of degradation.
To identify ESAs in the Pinhal River watershed in the context of environmental degradation, we adapted the results of Adami et al. (2012) and identified three types of ESAs: Critical, Fragile and Potential. Adami et al. (2012) performed an environmental analysis of the Pinhal watershed via a Geographic Information System (GIS), using key indicators of relief, soil and land use to determine the capacity of natural resources and the environmental fragility. The empirical analysis of environmental fragility methodology was used to identify areas that require more attention for improving environmental conditions. The procedures employed by the authors in their study are shown in Fig. 2.
### Scenario simulation
We ran two scenario simulations using the SWAT model interfaced with ArcGIS, aiming to verify the effect of land-use change on sediment yield (sediment transported from the sub-watersheds to the main channel over time, ton/ha) and on the watershed hydrological regime (discharge (m\({}^{3}\)/s), surface runoff (mm), evapotranspiration (mm), soil water content (mm) and water yield (mm)). Water yield (WYLD, mm H2O) is the net amount of water that leaves the sub-basin and contributes to streamflow in the reach during the time step (WYLD = SURQ + LATQ + GWQ - TLOSS - pond abstractions), where SURQ is the surface runoff contribution to streamflow during the time step (mm H2O); LATQ is the lateral flow contribution to streamflow during the time step (mm H2O); GWQ is the groundwater contribution to streamflow (mm), i.e., water from the shallow aquifer that returns to the reach during the time step; and TLOSS is the average daily rate of water loss from the reach by transmission through the streambed during the time step (m\({}^{3}\)/s) (ARNOLD et al., 2012).
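The water-yield balance quoted above can be written directly as a function. This is only a sketch: it assumes all terms have already been converted to the same unit (mm H2O over the time step), whereas the text quotes TLOSS as a rate in m\({}^{3}\)/s, which would need conversion first.

```python
def water_yield(surq, latq, gwq, tloss, pond_abstractions=0.0):
    """WYLD = SURQ + LATQ + GWQ - TLOSS - pond abstractions, following
    the SWAT output definitions quoted above (all terms in mm H2O)."""
    return surq + latq + gwq - tloss - pond_abstractions
```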
Figure 2: Flowchart of the procedures in the study of Adami et al. (2012).
One of the scenario simulations covered the Critical and Fragile ESAs with forest cover overlaid on the land-use map, and we compared the results to the current-use conditions (baseline). The land-use pattern projected in this scenario is hypothetical and often hard to implement in practice, given the already consolidated land use and occupation, but it reflects the watershed environmental fragility identified by Adami et al. (2012). These simulations therefore illustrate the application and integration of hydrological and water-quality models with GIS to evaluate watershed-management scenarios, modifying only the land-use/occupation layer and the management practices.
We used the percent deviation of the analyzed event (PBIAS) as the statistical criterion to evaluate the sediment yield and to compare the hydrological behavior of the watershed in the different scenarios:
\\[PBIAS[\\%]=\\left(\\frac{\\text{E}-\\text{E}^{*}}{\\text{E}}\\right)*100 \\tag{1}\\]
where E represents the baseline-scenario (current use) totals in the period and E\({}^{*}\) the results of the alternative scenario (ESAs) in the period. The percent deviation of the analyzed event (PBIAS) is important because it takes into account the potential difference among the compared data: the higher the absolute value of PBIAS (+ or -), the greater the difference in sediment yield and the change in the hydrological regime between scenarios.
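Note that, as written, the deviation is \((E-E^{*})/E\times 100\), whereas the signs reported in Table 5 correspond to \((E^{*}-E)/E\times 100\) (e.g., surface runoff falling from 570.4 to 309.1 mm gives -45.8%). The sketch below follows the Table 5 sign convention; the function name is ours.

```python
def scenario_pbias(e_baseline, e_scenario):
    """Relative change of the ESAs scenario with respect to the baseline,
    signed so that a decrease gives a negative value.
    Example: scenario_pbias(570.4, 309.1) -> -45.8 (surface runoff, Table 5)."""
    return 100.0 * (e_scenario - e_baseline) / e_baseline
```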
## 3 Results and Discussion
### Model evaluation
Figure 3 shows the discharge data obtained by regionalization and by simulation (i.e., the flow-duration curves). The generated flow-duration curves show that the simulation tends to underestimate the discharge almost uniformly, presenting greater differences at exceedance probabilities of 20 to 100% and overestimating only at the lower probabilities (10 to 20%). Although underestimating the flows most of the time, the simulated flow-duration curve presented a pattern of variation close to that of the regionalized flows. The NSE applied to compare the regionalized and simulated flows at 5% steps of the flow-duration curve was 0.93; according to Moriasi et al. (2007), NSE values between 0.7 and 1 indicate a very good performance of the model. As for the PBIAS of the flow values at 5% probability steps, the model underestimated the flows by 11%; PBIAS between 10% and 15% indicates a good accuracy of the model (Van Liew et al., 2007). Emam et al. (2016) used the SWAT model in an ungauged basin in Central Vietnam, where a hydrological regionalization approach (the ratio method) was used to predict the river discharge at the outlet of the basin; the model was calibrated with Nash-Sutcliffe and R\({}^{2}\) coefficients greater than 0.7 at the daily time scale for river discharge.
### Environmentally Sensitive Areas (ESAs)
The ESAs identified in the Pinhal River watershed are shown in Figure 4 and Table 2. Sixteen percent of the watershed area is degraded due to improper land use, which is a threat to the surrounding environment. These areas are severely eroded and have high rates of surface runoff and soil loss; in this case, there may be higher peak streamflow and sedimentation of water bodies (Critical ESAs).
In 25% of the area we identified regions where any change in the delicate balance between the environment and human activities may result in environmental degradation of the ecosystem. A change in the soil management of annual and semiannual crops (e.g., sugarcane) on highly sensitive soils may cause an immediate increase in surface runoff and water erosion, pushing pesticides and fertilizers downstream (Fragile ESAs).
54% of the total watershed area is classified as Potential ESAs. Agricultural activities in these areas, although following Land Use Capability standards and requiring simple soil
\\begin{table}
\\begin{tabular}{l l l} \\hline _Class_ & _Area (ha)_ & _Area (\\%)_ \\\\ \\hline Critical ESAs & 4,801 & 16 \\\\ Fragile ESAs & 7,471 & 25 \\\\ Potential ESAs & 16,155 & 54 \\\\ Water & 149 & 1 \\\\ Urban or rural uses & 1,196 & 4 \\\\ Total & 29,772 & 100 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Environmentally Sensitive Areas (ESAs) identified in Pinhal River watershed.
Figure 4: ESAs Map in Pinhal River watershed.
conservation practices to control erosion, require attention because of the use of external agents such as pesticides in sugarcane and citrus crops.
### Land-use change between scenarios
Figure 5 presents the land-use maps for the two scenarios, and Table 3 the total and relative areas of each land cover in the Pinhal River watershed for the current-use scenario (baseline) and for the scenario with ESAs recomposed with native vegetation. From the current scenario to the ESAs scenario there is a reduction of the areas occupied by sugarcane, citrus and pasture and, consequently, an increase of the area occupied by forest vegetation. Sugarcane occupied the largest area in the watershed, and in the ESAs scenario this area was reduced by 46.30%. Orange occupies the second largest area in the current-use scenario and was reduced by 18.8% in the new scenario, whereas pasture was reduced by 44.43% and other uses by 42.61%. Some sub-watersheds increased forest cover more than others: sub-watersheds 11, 12, 13, 14, 15 and 16.
Figure 6 presents the variation of land-use change at the sub-watershed scale between the two scenarios. The decrease in pasture and sugarcane areas, where soils are exposed to erosion during soil management, and the increase in native vegetation area explain the lower sediment and water yields. The decrease of pasture and increase of forest area in the northwest region (sub-watershed 12) also contributed to lower sediment and water yields in this region.
Figure 5: Land use map for current scenario (a) (Source: Secretaria do Meio Ambiente do Estado de São Paulo, 2013) and Critical and Fragile ESAs scenario (b), with native forest cover, overlapping current land use on Pinhal River watershed (ESAs scenario).
\\begin{table}
\\begin{tabular}{l c c c c c c} \\hline Land-use type & \\multicolumn{2}{c}{Current use} & \\multicolumn{2}{c}{ESAs scenario} & \\multicolumn{2}{c}{Change} \\\\ \\hline & Area (ha) & Percentage & Area (ha) & Percentage & Area (ha) & Percentage \\\\ & & (\\%) & & (\\%) & & (\\%) \\\\ \\cline{2-7} Sugarcane & 12,566 & 42.2 & 6,748 & 22.7 & -5,818 & -46.30 \\\\ Orange & 8,866 & 29.8 & 7,199 & 24.2 & -1,667 & -18.80 \\\\ Pasture & 2,341 & 7.9 & 1,301 & 4.4 & -1,040 & -44.43 \\\\ Forest & 2,662 & 8.9 & 12,609 & 42.4 & 9,947 & 373.67 \\\\ Other uses & 3,337 & 11.2 & 1,915 & 6.4 & -1,422 & -42.61 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Land use and occupation change between the two scenarios (current use and ESAs) in Pinhal River watershed.
### Sediment Yield
The sediment yield results presented in Figure 7 represent the erosion and sedimentation processes occurring throughout the Pinhal River watershed during the simulation period (2012 to 2014). With the scenario change, the reduction in sediment yield was -54% (PBIAS) compared to the current-use scenario. This reduction occurred mostly in sub-watersheds located on lithosols and cambisols (Figure 8), both of which are shallow soils. Cambisols in the watershed occur in undulating relief; they are poorly developed soils with an incipient B horizon, and among their main features are shallowness and an often high gravel content. High silt content and low depth are responsible for the low permeability of this soil (TERAMOTO, 1995). The biggest issue, however, is the soil erosion risk: cambisols have restrictions on agricultural use because of their high erodibility, high risk of degradation and poor trafficability. These soils occupy 19% of the watershed total area; in the current-use scenario, 22.4% of this soil area is occupied by native vegetation, and in the ESAs scenario the percentage increases to 68.3% (Table 4). Lithosols occupy approximately 4% of the watershed total area and are located in the areas of greatest declivity. They lie in a geomorphologically unstable zone where erosion affects soil development, and they are constantly renewed through superficial erosion (TERAMOTO, 1995). Extensive areas of these soils are occupied by sugarcane, pasture and orange cultivation (62.3%). In the current scenario, 24.3% of the lithosol area is covered with native vegetation; in the ESAs scenario the percentage is 95.7% (Table 4). The increase in native vegetation on both soils explains the 54% reduction (PBIAS) in sediment yield in the watershed when the two scenarios are compared. The spatial location of agricultural areas in relation to relief, soil and climate is thus important to control erosion in watersheds.
\\begin{table}
\begin{tabular}{l c c c c c c c c} \hline
 & \multicolumn{4}{c}{**Cambisol**} & \multicolumn{4}{c}{**Lithosol**} \\
**Land-use type** & \multicolumn{2}{c}{Current use} & \multicolumn{2}{c}{ESAs scenario} & \multicolumn{2}{c}{Current use} & \multicolumn{2}{c}{ESAs scenario} \\
 & Area (ha) & \% & Area (ha) & \% & Area (ha) & \% & Area (ha) & \% \\ \hline
**Forest** & 1,278 & 22 & 3,894 & 68 & 275 & 24 & 1,089 & 96 \\
**Pasture** & 947 & 17 & 399 & 7 & 169 & 15 & 10 & 1 \\
**Sugarcane** & 997 & 17 & 142 & 2 & 350 & 31 & 8 & 1 \\
**Other uses** & 2,476 & 43 & 1,263 & 22 & 339 & 30 & 31 & 3 \\ \hline
\end{tabular}
\\end{table}
Table 4: Cross tab between land-use changes on the scenarios for cambisols and lithosols in Pinhal River watershed.
Figure 8: Pinhal River watershed soil map (Oliveira, 1999).
In other parts of the watershed, aggradation occurs, with lower sediment yield values. In the ESAs scenario, replacement with native vegetation in the Environmentally Sensitive Areas led to an average sediment yield of 5.2 t/ha per year, with a maximum of 14.2 t/ha. Average soil loss in the sub-watersheds was near tolerable soil-loss rates, which according to Leinz & Leonardos (1977) are 7.9 t/ha for podzols and 4.2 t/ha for lithosols. According to Figure 7, the lowest sediment yield rates occurred in the sub-watersheds with greater forest cover. As the SWAT model simulates many processes in the watershed, some parameters may affect several processes (ARNOLD et al., 2012): with the reduction of surface runoff by -45.8% (PBIAS) between scenarios (Table 5), due to greater soil protection, sediment yield was also directly affected. The sediment yield difference between the two scenarios is presented in Figure 10; this difference is greater in the upstream sub-watersheds and in those with greater forest cover (sub-watersheds 11, 14, 15 and 16), according to Figure 5b.
### Hydrological regime
It is widely reported that land-use and land-cover changes can affect the quantity and quality of the water resources of a watershed. We analyzed discharge (m\({}^{3}\)/s), surface runoff (mm), water yield (mm), evapotranspiration (mm) and soil water content (mm) (Figures 11-15) to evaluate the impact of these changes on the watershed hydrological regime. Monthly values for the 2012-2014 period were compared between the two scenarios, and the results (PBIAS) showed that increasing the forest cover in the watershed (+373.8%) decreased the discharge, surface runoff (SR), soil water content (SW) and water yield (WY), and increased the evapotranspiration (ET) (Table 5). Studies conducted by Huang et al. (2003), Zhang et al. (2008), Li et al. (2009) and Cui et al. (2012) likewise showed that increased forest cover in watersheds decreased water yield.
As surface runoff and baseflow are the main components that contribute to water yield, we expected a greater infiltration rate in the ESAs scenario, since the infiltration rate in forest areas is greater than under other land covers, e.g., sugarcane and pasture (Liu et al., 2012). A higher infiltration rate will increase baseflow, because in this scenario areas previously occupied by other land uses are now occupied by native vegetation. On the other hand, forest evapotranspiration will consume more water (PBIAS of evapotranspiration equal to +3.5%, Figure 14), since the forest is the surface with the highest rates of evapotranspiration, higher than all other vegetation types and even higher than a free water surface (Birkinshaw et al., 2011). Roots, especially of larger trees, increase the water absorption
Figure 10: Spatial variations of average annual sediment yield at the watershed scale between the two scenarios.
from baseflow and, consequently, decrease the water yield of the watershed, which may be seen in Figure 15, as the soil water content decreased in the studied period (-14.1%). In addition, with the scenario change, this type of land cover provides greater resistance to runoff; consequently, this component made a lower contribution to the water yield of the watershed (-45.8%).
The influence of forest recovery on the hydrological regime can also be analyzed separately in two different periods. Comparing the evapotranspiration demand independently in the wet period (October to March, Figure 14a) and the dry period (April to September, Figure 14b), the difference between the two scenarios is even greater: in the wet period the difference is +1.3%, whereas in the dry period it is +8.2%. In the wet period, the water available in the soil (Figure 15a) compensates for the increased evapotranspiration demand of the vegetation, even with the increased forest cover (ESAs scenario), which contributes to smaller water losses through evapotranspiration in the watershed (Figure 14a). In the dry period, when SW is lower (Figure 15b), large forest vegetation accesses underground water more easily than small vegetation and therefore has a greater evapotranspiration demand, reducing the water yield of the watershed. Based on results obtained from more than 90 experimental micro-watersheds in different parts of the world, Bosch & Hewlett (1982) asserted that deforestation decreases evapotranspiration, which results in more water available in the soil and in streamflow; reforestation, on the other hand, decreases streamflow at the watershed scale. It is worth mentioning, however, that these results vary from place to place and are often unpredictable (BROWN et al., 2005).
\\begin{table}
\\begin{tabular}{c c c c} \\hline Variable & Current use & ESAs scenario & PBIAS (\\%) \\\\ \\hline Discharge (m\\({}^{3}\\)/s) & 119.1 & 105.3 & -11.6 \\\\ Surface runoff (mm) & 570.4 & 309.1 & -45.8 \\\\ Evapotranspiration (mm) & 1,993.2 & 2,062.3 & +3.5 \\\\ Soil water content (mm) & 8,279.8 & 7,113.5 & -14.1 \\\\ Water yield (mm) & 1,471.4 & 1,187.9 & -19.3 \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: PBIAS of the hydrological variables analyzed between the two scenarios (current use and ESAs) in the Pinhal River watershed, 2012-2014 period.
Figure 11 - Pinhal River watershed discharge comparison between the two scenarios.
Figure 12 - Pinhal River watershed surface runoff in the two scenarios.
Figure 13 - Comparison of water produced in Pinhal River watershed between the two scenarios.
Figure 14 - Pinhal River watershed evapotranspiration in two different scenarios.
Figure 14a - Pinhal River watershed wet season evapotranspiration in two different scenarios [PBIAS = +1.3%].
Figure 14b - Pinhal River watershed dry season evapotranspiration in two different scenarios [PBIAS = +8.2%].
Figure 15a - Pinhal River watershed wet-season soil water content in two different scenarios.
Figure 15b - Pinhal River watershed dry-season soil water content in two different scenarios.
Figure 16 shows the spatial distribution of the hydrological-regime variation (surface runoff, evapotranspiration, soil water content and water yield) at the sub-watershed scale between scenarios. The influence of land-use change on the hydrological regime is more visible in some sub-watersheds than in others. These variations were smaller in upstream sub-watersheds and, as with sediment yield, the major variations occurred in sub-watersheds with greater forest cover when the current scenario is compared with the ESAs scenario. A watershed's hydrological regime is the result of complex interactions between climate (wet versus dry years), plant physiological properties (e.g., leaf area and successional stages) and soil type (ANDREASSIAN, 2004). According to Singh & Mishra (2012), these and other factors together make the hydrological effects of forests markedly different from one situation to another.
Figure 16: Spatial variations of average annual hydrological regime at sub-watershed scale between the two scenarios. SURQ (surface runoff - mm), ET (evapotranspiration - mm), SW (soil water content - mm), WYLD (water yield - mm).
## 4 Conclusion
The role of forests in the watershed hydrological cycle and in water yield is controversial. Although increased forest cover reduces sediment yield, as the results obtained from the simulation of the different scenarios show (PBIAS = -54%), because it offers the soil greater protection, its influence on increasing and maintaining streamflow is questionable: the results obtained in this study also showed that increased forest cover decreased the water yield of the watershed by -19.3% (PBIAS), due mostly to the forest's greater evapotranspiration capacity (+3.5%), a demand that is even greater during the dry season (+8.2%). The simulation results lead us to conclude that the impacts of land-use change on hydrological processes are complex, and that their consequences do not occur in all situations equally or with the same intensity.
#### Acknowledgments
UNICAMP Espaco da Escrita project/General Coordination for the English translation of this article.
**Funding:** This work was funded by the Sao Paulo Research Foundation (FAPESP) [grant #2013/02971-3].
## References
* [1][ABBASPOUR et al.2015] ABBASPOUR, K.C.; ROUHOLAHNEJAD, E.; VAGHEFI, S.; SRINIVASAN, R.; YANG, H.; KLOVE, B. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution largescale SWAT model. **Journal of Hydrology**, v. 524, n. 5, p. 733-752, 2015.
* [2][ADAMI et al.2012] ADAMI, S. F.; COELHO, R. M.; CHIBA, M. K.; MORAES, J. F. L. Environmental fragility and susceptibility mapping using geographic information systems: applications on Pinhal River watershed (Limeira, State of Sao Paulo). **Acta Scientiarum.** Technology (printed), v. 34, p. 433-440, 2012.
* [3][ANDREASSIAN2004] ANDREASSIAN, V., 2004. Waters and forests: from historical controversy to scientific debate. **Journal of Hydrology** 291, 1-27.
* [4][ARNOLD et al.1995] ARNOLD J.; WILLIAMS J.; MAIDMENT D. (1995) Continuous-time water and sediment-routing model for large basins. **J Hydraulic Eng.**, 121:171-183.
* [5][ARNOLD et al.1998] ARNOLD, J.G., SRINIVASAN, R., MUTTIAH, R.S., WILLIAMS, J.R., 1998. Large area hydrologic modelling and assessment part I: model development. J. Am. **Water Resour. Assoc.** 34 (1), 73-89.
* [6][ARNOLD et al.2012] ARNOLD, J.G., MORIASI, D.N., GASSMAN, P.W., ABBASPOUR, K.C., WHITE, M.J., SRINIVASAN, R., SANTHI, C., HARMEL, R.D., VAN GRIENSVEN, A., VAN LIEW, M.W., KANNAN, N., JHA, M.K., 2012. SWAT: model use, calibration, and validation. Trans. **ASABE 55** (4), 1491-1508.
* Ouro Preto/MG: 1-39
* [8][BEST et al.2003] BEST, A., ZHANG, L., MCMAHON, T., WESTERN, A., VERTESSY, R. 2003. A critical review of paired catchment studies with reference to seasonal flow and climatic variability. Australia, CSIRO Land and Water Technical. MDBC Publication, 56 p. (**Technical Report 25/03**).
* [9][BIRKINSHAW et al.2011] BIRKINSHAW, S. J.; BATHURST, J. C.; IROUME, A.; PALACIOS, H. The effect of forest cover on peak flow and sediment discharge--an integrated field and modelling study in central-southern Chile. **Hydrol. Process.** 25, 1284-1297 (2011).
* [10][BOSCH et al.1982] BOSCH, J.M., HEWLETT, J.D., 1982. A review of catchment experiments to determine the effect of vegetation changes on water yield and evapotranspiration. **Journal of Hydrology** 55 (1/4), 3-23.
* [11] BROWN, A. E.; ZHANG, L.; MCMAHON, T. A.; WESTERN, A. W.; VERTESSY, R. A. A review of paired catchment studies for determining changes in water yield resulting from alterations in vegetation. **Journal of Hydrology** 310 (2005) 28-61.
* BRUIJNZEEL (1990) BRUIJNZEEL, L.A. (1990). **Hydrology of Moist Tropical Forests and Effects of Conversion**: a State of Knowledge Review. Humid Tropics Programme, IHP-UNESCO, Paris, and Vrije Universiteit, Amsterdam, 224 pp.
* BRUIJNZEEL (2004) BRUIJNZEEL, L.A. Hydrological functions of tropical forests: not seeing the soil for the trees? **Agriculture, Ecosystems and Environment** 104 (2004) 185-228.
* BUYAERT et al. (2006) BUYAERT, W.; CELLERI, R.; DE BIEVRE, B., CISNEROS, F.; WYSEURE, G.; DECKERS, J.; HOFSTEDE, R. Human impact on the hydrology of the Andean paramos. **Earth-Science Reviews** 79 (2006) 53-72.
* CHO and JENNINGS (1995) CHO, S. M.; JENNINGS, G. D.; STALLINGS, C.; DEVINE, H. A. GIS-basead water quality model calibration in the Delaware river basin. **ASAE**, St. Joseph, Michigan, 1995. (ASAE Microfiche, 952404)
* CUI and LIU (2012) CUI, X.; LIU, S.; WEI, X. Impacts of forest changes on hydrology: a case study of large watersheds in the upper reaches of Minjiang River watershed in China. **Hydrol. Earth Syst. Sci.**, 16, 4279-4290, 2012
* DECHMI and SKHIRI (2013) DECHMI, F. & SKHIRI, A. Evaluation of best management practices under intensive irrigation using SWAT model. **Agricultural Water Management**, 2013, vol. 123, issue C, pages 55-64.
* DOUGLAS and SRINIVASAN (2010) DOUGLAS, M. K.; SRINIVASAN, R.; ARNOLD, J. Soil and water assessment tool (swat) model: Current developments and applications. **T. Asabe**, 2010, 53, 1423-1431.
* Emam et al. (2015) Emam, A. R.; Kappas, M.; Nguyen, L. H. K. and Renchin, T. Hydrological Modeling in an Ungauged Basin of Central Vietnam Using SWAT Model. Manuscript under review for **journal Hydrol. Earth Syst. Sci**. Published: 18 February 2016.
* FUKUNAGA (2015) FUKUNAGA, D. C. et al. Application of the SWAT hydrologic model to a tropical watershed at Brazil. **Catena**, 125:206-213, 2015.
* GOURLAY and SLEE (1998) GOURLAY, D; SLEE, B. Public preferences for landscape features: a case study of two Scottish environmental sensitive areas. **Journal of Rural Studies**, v. 14, n.2, p. 249-263, 1998.
Gupta, H. V., Sorooshian, S., & Yapo, P. O. (1999). Status of automatic calibration for hydrologic models: Comparison with multilevel expert calibration. **Journal of Hydrologic Engineering**, 4(2), 135-143. DOI: 10.1061/(ASCE)1084-0699(1999)4:2(135)
* HUANG et al. (2003) HUANG, M.; ZHANG, L.; GALLICHAND, J. Runoff responses to afforestation in a watershed of the Loess Plateau, China. **Hydrol. Process.** 2003, 17, 2599-2609.
* PE. Adaptative Water Management: Looking to the future, 2011. p. 134-134.
* A new Approach Addressing Land Use Dynamics in the Model SWAT. **International Environmental Modelling and Software Society (iEMSs)** 2012 International Congress on Environmental Modelling and Software Managing Resources of a Limited Planet, Sixth Biennial Meeting, Leipzig, Germany, 2012.
* LEINZ et al. (1977) LEINZ, V.; LEONARDOS, O. H. **Glossario geologico**. 2. ed. Sao Paulo: Companhia Editora Nacional, 1977. 236p.
* LIMA (2010) LIMA, W. P., 2010. A Silvicultura e a Aqua: Ciencia, Dogmas, Desafios. **Cadernos do Dialogo**, Vol. I. Instituto BioAtlantica, Rio de Janeiro. 64 p.
* LIN et al. (2015) LIN, B.; CHEN, X.; YAO, H.; CHEN, Y.; LIU, M.; GAO, L.; JAMES, A. Analyses of landuse change impacts on catchment runoff using different time indicators based on SWAT model. **Ecol. Indic.**, 58 (2015), pp. 55-63.
* LIU et al. (2015) LIU, Y., YANG, W., YU, Z., LUNG, I., & GHARABAGHI, B. (2015). Estimating sediment yield from upland and channel Erosion at a watershed scale using SWAT. **Water Resources Management**, Volume 29, Issue 5, pp 1399-1412.
* LIU et al. (2012) LIU, Z.; LANG, N.; WANG, K. Infiltration Characteristics under Different Land Uses in Yuanmou Dry-Hot Valley Area. **In Proceedings of the 2nd International Conference on Green Communications and Networks 2012 (GCN 2012):** Volume1; Yang, Y., Ma, M., Eds.; Springer: Berlin, Germany, 2013; Volume 223, pp. 567-572.
MACHADO, R. E. & VETTORAZZI, C. A. Sediment yield simulation for the Marins watershed, State of Sao Paulo, Brazil. **Rev. Bras. Cienc. Solo** [online]. 2003, vol.27, n.4, pp.735-741. ISSN 1806-9657. [http://dx.doi.org/10.1590/S0100-06832003000400018](http://dx.doi.org/10.1590/S0100-06832003000400018).
MACHADO, R. E.; VETTORAZZI, C. A.; CRUCIANI, D. E. Simulacao de Escoamento em uma Microbacia Hidrografica utilizando Tecnicas de Modelagem e Geoprocessamento. **Revista Brasileira de Recursos Hidricos**, v. 8, n.1, p. 147-155, 2003.
MORIASI, D.N.; ARNOLD, J.G.; VAN LIEW, M.W.; BINGNER, R.L.; HARMEL, R.D.; VEITH, T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. **Transactions of the ASABE**. v. 50, n. 3, p. 885-900, 2007.
NAGHETTINI, M.; PINTO, E.J.A. **Hidrologia Estatistica**. Belo Horizonte: CPRM, 2007. 552 p.
NASH, J. E.; SUTCLIFFE, J. V. River flow forecasting through conceptual models: a discussion of principles. **Journal of Hydrology**, 10(3):282-290, 1970.
NDUBISI, F.; DE MEO T.; DITTO, N.D. Environmentally sensitive area: a template for developing greenway corridors. **Landscape Urban Planning**, v.33, p.159-177, 1995.
OLIVEIRA, J. B. **Solos da folha de Piracicaba**. Campinas, Instituto Agronomico, 1999. 173p. (Boletim Cientifico, 48).
PETERSON, J. R.; HAMLETT, J. M. Hydrologic calibration of the SWAT model in a watershed containing fraigpan soils. **Journal of the American Water Resources Association**, v.34, n.3, p.531-544, 1998.
ROCHA, J.; ROEBELING, P.; RIAL-RIVAS, M.E. Assessing the impacts of sustainable agricultural practices for water quality improvements in the Vouga catchment (Portugal) using the SWAT model. **Sci. Total Environ.**, 536 (2015), pp. 48-58 [http://dx.doi.org/10.1016/j.scitotenv.2015.07.038](http://dx.doi.org/10.1016/j.scitotenv.2015.07.038).
ROSENTHAL, W. D.; SRINIVASAN R.; ARNOLD, J. G. Alternative River Management Using a Linked GIG-Hydrolog Model. **Transactions of the ASAE**, v.38, n.3, p.783-790, 1995.
RUBIO, J.L. Desertification: evolution of a concept. In: FANTECHI, R.; PETER, D.; BALABANIS, P.; RUBIO, J.L. (Eds.) EUR 15415 **Desertification in a European
**Physical and Socio-economic Aspects**, Brussels, Luxembourg. Office Publications of the European Communities, p.5-13, 1995.
* [1]SINGH, S. & MISHRA, A. Spatiotemporal analysis of the effects of forest covers on water yield in the Western Ghats of peninsular India. **Journal of Hydrology**, 446-447 (2012) 24-34.
* [1]SRINIVASAN, R.; ARNOLD J. G. Integration of a basin-scale water quality model with GIS. **Water Resources Bulletin**, v.30, n.3, p.453-462, 1994.
* [1]SUN, G.; ZHOU, G.; ZHANG, Z.; WEI, X.; MCNULTY, S. G.; VOSE, J. Potential water yield reduction due to forestation across China. **Journal of Hydrology** (2006) 328, 548- 558.
* TERAMOTO (1995). Escola Superior de Agricultura "Luiz de Queiroz", Universidade de Sao Paulo.
* [1]TUCCI, C. E. M. Flow regionalization in the upper Paraguay basin, Brasil. **Hydrological Sciences Journal**, v. 40, n.4, p. 485 -497, 1995.
* TUCCI, C. E. M. Porto Alegre: Ed. Universidade/UFRGS, 2002.
* [1]VAN LIEW, M. W. et al. Suitability of SWAT for the conservation effects assessment project: a comparison on USDA-ARS watersheds. **Journal of Hydrological Research**, 12(2):173- 189, 2007.
* [1]ZHANG, P.; LIU, R.; BAO, Y.; WANG, J.; YU, W.; SHEN, Z. Uncertainty of SWAT model at different DEM resolutions in a large mountainous watershed. **Water Res.**, 53 (2014), pp. 132-144 [http://dx.doi.org/10.1016/j.watres.2014.01.018](http://dx.doi.org/10.1016/j.watres.2014.01.018)
* [1]Zhang, Y., Liu, S., Wei, X., Liu, J., and Zhang, G.: Potential impact of afforestation on water yield in the subalpine region of southwestern, **J. Am. Water Resour. Assoc.**, 44 (2008), pp. 1144-1153. | The forest plays an important role in a watershed hydrology, regulating the transfer of water within the system. The forest role in maintaining watersheds hydrological regime is still a controversial issue. Consequently, we use the Soil and Water Assessment Tool (SWAT) model to simulate scenarios of land use in a watershed. In one of these scenarios we identified, through GIS techniques, \"Environmentally Sensitive Areas\" (ESAs) which have watershed been degraded and we considered these areas protected by forest cover. This scenario was then compared to current usage scenario regarding watershed sediment yield and hydrological regime. The results showed a reduction in sediment yield of 54% among different scenarios, at the same time that the watershed water yield was reduced by 19.3%.
**Keywords:** SWAT model, Hydrological model, native vegetation, GIS | Provide a brief summary of the text. | 167 |
arxiv-format/1009_0368v1.md | # Discovering Potential User Browsing Behaviors Using Custom-Built Apriori Algorithm
Sandeep Singh Rawat\\({}^{1}\\) and Lakshmi Rajamani\\({}^{2}\\)
\\({}^{1}\\) Department of Computer Science & Engineering,
Guru Nanak Institute of Technology, Ibrahimpatnam,
Andhra Pradesh 501506, India
[email protected]
\\({}^{2}\\) Department of Computer Science & Engineering,
College of Engineering, Osmania University,
Hyderabad, Andhra Pradesh, India
[email protected]
## 1 Introduction
The enormous content of information on the World Wide Web makes it an obvious candidate for data mining research. The application of data mining techniques to the World Wide Web is referred to as Web Mining. Web Mining can be divided into three sub-areas - Web Content Mining, Web Structure Mining and Web Usage Mining. Web mining is a technology for finding implicit patterns, \(p\), from vast Web document structures and collections, \(C\). If we view \(C\) as the input and \(p\) as the output, the process of Web mining can be thought of as a mapping from input to output: \(\varsigma:C\rightarrow p\). Accurate Web usage information can help to attract new customers, retain current customers, improve cross marketing/sales and the effectiveness of promotional campaigns, track leaving customers, and find the most effective logical structure for a Web space. User profiles can be built by combining users' navigation paths with other data features, such as page viewing time, hyperlink structure, and page content. At present, there are about three kinds of search engines on the Internet. The first is the manually curated search engine, such as Yahoo. The second is the search engine based on robots, for example AltaVista, Lycos and Excite. The third is the meta-search engine, for instance Byte-search, MetaCrawler and Ixquick. Although present search engines have brought great convenience to people searching for information, their effectiveness is still limited. On the other hand, data mining technology is becoming more and more mature after many years of development, and it can proactively use computers to distill valuable patterns from enormous data to meet people's different demands. So it is currently an important task to bring data mining into Web information retrieval. Web usage mining is the process of extracting information about how users use web sites. Web content mining is the process of extracting information from texts, images and other contents. Web structure mining is the process of extracting information from the linkages of web pages, as summarized in Table 1.
Table 1. The relationship among the different areas of Web Mining
\\begin{tabular}{|l|l|l|l|l|} \\hline
**Type** & **Structure** & **Form** & **Object** & **Collection** \\\\ \\hline Usage & Accessing & Click & Behaviour & Logs \\\\ \\hline Content & Pages & Text & Index & Pages \\\\ \\hline Structure & Map & Hyperlinks & Map & Hyperlinks \\\\ \\hline \\end{tabular}
The access data of the users visiting a given web site is provided by the server log files. They provide details about file requests to a web server and the server's responses to those requests. In the access log, which is the main log file, each line describes the source of a request, the file requested, the date and time of the request, the content type and length of the transferred file, and other data such as errors and the identity of referring pages.
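As an illustration, a line of an access log in the widespread combined log format can be parsed with a regular expression. This is a hedged sketch: the exact format of the server logs studied here is not specified in the paper, so the pattern and field names are assumptions.

```python
import re

# Combined Log Format, e.g.:
# 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /a.html HTTP/1.0" 200 2326 "-" "Mozilla"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_line(line):
    """Return a dict with ip, time, method, url, status and size,
    or None when the line does not match the expected format."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    rec = m.groupdict()
    rec['status'] = int(rec['status'])
    rec['size'] = 0 if rec['size'] == '-' else int(rec['size'])
    return rec
```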
The rest of the paper is organized as follows. Section II discusses related work. Section III explains the solution framework used for the generation of rules for educational web-site analysis. Section IV gives some intuition about the observed results, explains how the different co-relations and rules are generated, and presents simulations that compare the apriori algorithm with the custom-built apriori. Finally, Section V concludes and mentions future enhancements of the system.
## 2 Related Work
The wide usage of the Internet in various fields has increased the automatic extraction of log data from web sites. Applying data mining techniques to the data collected from the web helps in pattern selection, which acts as a traditional decision-making tool. Web usage mining is the application of data mining techniques on web-collected data, which is already present in the form of various patterns. Web usage mining is performed on secondary data (user name, IP address, date and time, type of browser used, type of URL used to view the site, etc.) deduced from the interactions of users during web sessions. Wang Tong and He Pi-lian, in their paper "_Web Log Mining by an Improved AprioriAll Algorithm_", showed the possibility and importance of applying data mining to web log mining, pointed out some problems of conventional search engines, and then offered an improved algorithm based on the original AprioriAll algorithm, which has been widely used in web log mining; test results show the improved algorithm has a lower complexity of time and space [1]. Vic Ciesielski and Anand Lalani, in their paper "_Data Mining of Web Access Logs from an Academic Web Site_", used a general-purpose data-mining tool to determine whether any 'golden nuggets' can be found in the web access logs of a large academic web site. They discovered several nuggets, the most significant being that, unlike visitors from within Australia, visitors from outside Australia generally arrive via search engines and are interested in information about postgraduate courses [2]. Gui-Rong Xue et al. proposed a paper on "_Log Mining to Improve the Performance of Site Search_". This paper [3] proposes a novel re-ranking method based on user logs within websites: with the help of a website taxonomy, they mine generalized association rules and abstract access patterns at different levels, and the mining results are subsequently used to re-rank the retrieved pages.
Maristella Agosti and Giorgio Maria Di Nunzio published a paper on "_Web Log Mining: A Study of User Sessions_". This study [4] reports initial findings on a specific aspect that is highly relevant for personalization services, namely the study of web user sessions. Mike Thelwall, in his paper [5] "_Web Log File Analysis: Backlinks and Queries_", stated that web log files are a useful source of information about visitor site use, navigation behavior and, to some extent, demographics. Wen-Chen Hu et al. conducted research on web usage mining and published a paper [6] on "_World Wide Web Usage Mining Systems and Technologies_". According to it, web usage mining is used to discover interesting user navigation patterns and can be applied to many real-world problems, such as improving Web sites/pages, making additional topic or product recommendations, and user/customer behavior studies. Zhenglu Yang et al. published a paper on "_An Effective System for Mining Web Log_". In this paper [7], they proposed an effective web log-mining system consisting of data preprocessing, sequential pattern mining and visualization. Osmar R. Zaiane proposed a paper on "_Web Usage Mining for a Better Web-Based Learning Environment_". In this paper [8], he discussed some data mining and machine learning techniques that could be used to enhance web-based learning environments and help the educator better evaluate the learning process. Magdalini Eirinaki and Michalis Vazirgiannis published a paper titled "_Web Mining for Web Personalization_", in which [9] they presented a survey of the use of web mining for web personalization.
Yongjian Fu and Ming-Yi Shih published a paper titled "_A Framework for Personal Web Usage Mining_". In this paper [10], they proposed a framework to mine Web usage data on the client side, or personal Web usage mining, as a complement to server-side Web usage mining. Renata Ivancsy and Istvan Vajk proposed a paper on "_Different Aspects of Web Log Mining_"; in this paper [11], three of the most important approaches to web log mining are introduced, all based on frequent pattern mining. Federico Michele Facca and Pier Luca Lanzi published a paper on "_Recent Developments in Web Usage Mining Research_"; in their terms, Web Usage Mining is the area of Web Mining which deals with the extraction of interesting knowledge from logging information produced by web servers [12]. Jaideep Srivastava et al. published a paper titled "_Web Usage Mining: Discovery and Applications of Usage Patterns from Web Data_"; according to this paper, web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications [13]. Alzennyr da Silva et al. published a paper "_Mining Web Usage Data for Discovering Navigation Clusters_"; according to them, the analysis of a web site based on usage data is an important task, as it provides insight into the organization of the site and how well it satisfies user needs [14]. Ramakrishnan Srikant and Yinghui Yang published their paper on "_Mining Web Logs to Improve Website Organization_". This paper states that many websites have a hierarchical organization of content; in it [15], they proposed an algorithm to automatically find pages in a website whose location is different from where visitors expect to find them. Sujni Paul [16], in her paper, proposed an optimized distributed association rule mining algorithm for geographically distributed data, used in a parallel and distributed environment so that it reduces communication costs; the response time is calculated in this environment using XML data. Keshavamurthy B.N., Mitesh Sharma and Durga Toshniwal [17], in their paper "Efficient Support Coupled Frequent Pattern Mining Over Progressive Databases", proposed a novel approach that efficiently mines frequent sequential patterns coupled with support using a progressive mining tree.
## 3 Custom-built Algorithm
A server log is a log file (or several files) automatically created and maintained by a server to record its activity. A typical example is a web server log, which maintains a history of page requests. These files are usually not accessible to general Internet users, only to the webmaster or other administrative personnel. A statistical analysis of the server log may be used to examine traffic patterns by time of day, day of week, referrer, or user agent. Efficient web-site administration, adequate hosting resources and the fine-tuning of sales efforts can all be aided by analysis of the web server logs. In the generated analysis, we try to prepare a report that can successfully be utilized by the web-site developer in order to improve the web site effectively: we avoid unnecessary details such as browser details and include all the required details, such as the analysis of visitors, complete hits, daily analysis, searched files, etc. We generate the frequent item sets from the given database, which solves both of the specified problems. The first problem is to find those item sets whose occurrences exceed a predefined threshold in the database; those item sets are called frequent or large item sets. Here we generate the frequent item sets by specifying the threshold as a minimum number of hits. The second problem is to generate association rules from those large item sets under the constraint of minimal confidence. Suppose one of the large item sets is L\({}_{k}\) = {I\({}_{1}\), I\({}_{2}\), \(\ldots\), I\({}_{k}\)}; association rules from this item set are generated in the following way: the first rule is {I\({}_{1}\), I\({}_{2}\), \(\ldots\), I\({}_{k-1}\)} \(\Rightarrow\) {I\({}_{k}\)}, and by checking the confidence this rule can be determined to be interesting or not. This problem is also solved, since we specify the percentage of the total (support) through which we can determine whether a generated item set is interesting. See Figure 1 for the pseudo code of the custom-built apriori algorithm.
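For concreteness, the sketch below implements the two steps just described - frequent-itemset generation against a hit-count threshold, followed by rule generation against a confidence threshold - in plain Python. It follows the textbook Apriori join/prune structure rather than reproducing the custom-built variant of Figure 1 line for line; `transactions` is assumed to be a list of sets (e.g., the pages requested within one visit).

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return a dict mapping each frequent itemset (frozenset) to its
    support count, keeping only itemsets with at least `min_support` hits."""
    items = {i for t in transactions for i in t}
    candidates = [frozenset([i]) for i in items]
    freq, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(level)
        k += 1
        # join step: unions of frequent (k-1)-itemsets that have size k
        joined = {a | b for a, b in combinations(level, 2) if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = [c for c in joined
                      if all(frozenset(s) in level for s in combinations(c, k - 1))]
    return freq

def rules(freq, min_confidence):
    """Generate X -> Y rules whose confidence support(X u Y) / support(X)
    meets the minimal-confidence constraint."""
    out = []
    for itemset, supp in freq.items():
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                conf = supp / freq[lhs]   # every subset is frequent, so freq[lhs] exists
                if conf >= min_confidence:
                    out.append((set(lhs), set(itemset - lhs), conf))
    return out
```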
## 4 Result and Rule Generation
We categorized this information in such a way that the user feels easier to understand the study made and make the necessary reforms in the site as required. Different categorizations made in the analysis are:
* General statistics
* Access statistics
* Co-relations
Figure 1: Pseudo code for custom-built apriori algorithm
### General Statistics
General statistics consist of the general or summarized analysis, which gives a complete overview of the different fields present in the log file. This generalized study gives an exact count of different fields such as:
* Total number of hits
* Total number of visitors
* Different errors
* Successful visits
* Incomplete visits
* Error reports etc
This analysis is completely based on the status-code field; status codes of different values have different meanings, which makes it easier for the developer or the user to understand the analysis (see Figure 2). We also try to acquire related information as to how users are reaching the specific web site; by analyzing all the available patterns in the log file, these patterns can be further used in pattern discovery in order to produce the results.
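A sketch of the status-code-based summary follows, assuming records parsed as in the earlier snippet. The mapping of status-code families onto the report's categories (2xx/3xx as successful visits, 4xx as incomplete visits, 5xx as server errors) is our assumption.

```python
from collections import Counter

def general_statistics(records):
    """Summarize parsed log records into total hits, visitors (distinct
    IPs) and success/error counts keyed on the HTTP status code."""
    stats, visitors = Counter(), set()
    for rec in records:
        stats['total_hits'] += 1
        visitors.add(rec['ip'])
        family = rec['status'] // 100
        if family in (2, 3):
            stats['successful_visits'] += 1
        elif family == 4:
            stats['incomplete_visits'] += 1
        elif family == 5:
            stats['server_errors'] += 1
    stats['visitors'] = len(visitors)
    return dict(stats)
```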
### Access Statistics
Similar to the general statistics, the access statistics (see Figure 3) provide a more detailed analysis, giving the access statistics of different users depending upon their allocated IP addresses and the URLs used. These access statistics consist of the details of both the successful and the unsuccessful hits based on the following (a minimal sketch follows the list):
* ip address
* url
Figure 2: Output generated by general statistics
### Co-relations
We try to generate the different rules among the specified item sets. The different item sets present in our analysis are:
* ip address
* url
* path
The different rules that are formed with these combinations are:
* ipadd\\(\\_\\)url
* url\\(\\_\\)path
* lpadd\\(\\_\\)path
* lpadd\\(\\_\\)url\\(\\_\\)path
The ipadd\(\_\)url relation, for example, is built by collecting all the distinct IP addresses that completed their visits successfully and then collecting, for each of them, the URLs whose requests completed successfully from that IP address (see Figure 4).
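A sketch of the ipadd\(\_\)url grouping over the parsed records is shown below; the url\(\_\)path, ipadd\(\_\)path and ipadd\(\_\)url\(\_\)path relations follow the same pattern with different grouping keys (the paper's separate path field is assumed to be extracted during preprocessing).

```python
def ip_url_relation(records):
    """Map each distinct IP address to the set of URLs it requested
    successfully - the ipadd_url co-relation of Figure 4."""
    relation = {}
    for rec in records:
        if rec['status'] // 100 in (2, 3):   # successful request only
            relation.setdefault(rec['ip'], set()).add(rec['url'])
    return relation
```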
Figure 4: Output generated for the generated co-relation
Figure 3: Output generated for urls in access statistics
### Comparisons
We compare the existing apriori algorithm with the custom-built apriori algorithm; see Table 2 for a step-by-step comparison.
\\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**APRIORI Algorithm** & **Custom-built APRIORI Algorithm** \\ \hline
1) Procedure apriori-gen(L\({}_{k-1}\): frequent (k-1)-itemsets): for each itemset l\({}_{1}\) \(\in\) L\({}_{k-1}\) and each itemset l\({}_{2}\) \(\in\) L\({}_{k-1}\), if (l\({}_{1}\)[1]=l\({}_{2}\)[1] \(\wedge\) l\({}_{1}\)[2]=l\({}_{2}\)[2] \(\wedge\) \(\ldots\)) then join l\({}_{1}\) and l\({}_{2}\) to form a candidate (join step). & 1) Procedure apriori(ipadd, url): for each item set of distipadd y \{ count2=0; for each item set of ipadd x \{ if (distip[y].equals(ipadd[x])) \{ if successful \{ count2++; ipu2[k5]=url[x]; \} \} \} \} \\ \hline
2)-3) Generate C\({}_{2}\) candidates from L\({}_{1}\); scan D for the count of each candidate and compare it with the minimum support. Then generate C\({}_{3}\) candidates from L\({}_{2}\); scan D for the count of each candidate and compare the candidate support. & 2)-3) Generate the co-relations from the above statistics; scan for the count and compare it with the threshold count of hits. \\ \hline
\end{tabular}
\end{table}
Table 2: Step-by-step comparison of the apriori and the custom-built apriori algorithms.
3. the use of the grid-computing paradigm to solve more challenging web mining problems; and
4. this work focuses primarily on association rule mining, but future research will consider other data mining functions such as classification, clustering and so on.
## Acknowledgements
This work is partially supported by the Research Committee of the Guru Nanak Institutions (GNI), Hyderabad, India. The authors gratefully acknowledge the support and cooperation of the teaching and non-teaching staff of Guru Nanak Institute of Technology, Hyderabad, India and of the College of Engineering, Osmania University, Hyderabad, India during this work.
## References
* [1] WANG Tong and HE Pi-lian, "Web Log Mining by an Improved AprioriAll Algorithm", Proceedings of World Academy of Science, Engineering and Technology, Volume 4, February 2005, ISSN 1307-6884, (c) 2005 WASET.ORG.
* [2] Vic Ciesielski and Anand Lalani, \"Data Mining of Web Access Logs from an Academic Web Site\", Proceedings of the Third International Conference on Hybrid Intelligent Systems (HIS'03).
* [3] Gui-Rong Xue1, Hua-Jun Zeng, Zheng Chen, Wei-Ying Ma and Chao-Jun Lu, \"Log Mining to Improve the Performance of Site Search\" Computer Science and Engineering Shanghai Jiao-Tong University, Shanghai 200030, P.R.China.
* [4] Maristella Agosti and Giorgio Maria Di Nunzio \"Web Log Mining: A Study of User Sessions\" Department of Information Engineering, University of Padua Via Gradegnigo 6/a, 35131 Padova, Italy.
* [5] Mike Thelwall, "Web Log File Analysis: Backlinks and Queries", School of Computing and Information Technology, University of Wolverhampton, Wulfruna Street, Wolverhampton, WV1 1SB, UK.
* [6] Wen-Chen Hu et al., "World Wide Web Usage Mining Systems and Technologies" - NUMBER 4.
* [7] Zhenglu Yang, Yitong Wang and Masaru Kitsuregawa, \"An Effective System for Mining Web Log\", Institute of Industrial Science, The University of Tokyo 4-6-1, 153-8305, Japan.
* [8] Osmar R. Zaiane, \"Web Usage Mining for a Better Web-Based Learning Environment\", Department of Computing Science University of Alberta Edmonton, Alberta, Canada.
* [9] Magdalini Eirinaki, Michalis Vazirgiannis, \"Web Mining for Web Personalization\", Department of Informatics Athens University of Economics and Business Patision 76, Athens, 10434, GREECE, (c) ACM, [2003], Vol. 3, No. 1, February 2003.
* [10] Yongjian Fu, Ming-Yi Shih published, \"A Framework for Personal Web Usage Mining\", Department of Computer Science Department of Computer Science, University of Missouri-Rolla University of Missouri-Rolla Rolla, MO 65409-0350.
* [11] Renata Ivancsy, Istvan Vajk, \"Different Aspects of Web Log Mining\", Department of Automation and Applied Informatics, and HAS-BUTE Control Research Group Budapest University of Technology and Economics Goldmann Gy. ter 3, H-1111 Budapest, Hungary.
* [12] Federico Michele Facca and Pier Luca Lanzi, \"Recent Developments in Web Usage Mining Research\", Artificial Intelligence and Robotics Laboratory Department of Information Technology, Politecnico di Milano.
* [13] Jaideep Srivastava, Robert Cooleyz, Mukund Deshpande and Pang-Ning Tan, \"Web Usage Mining: Discovery and Applications of Usage Patterns from Web Data\", SIGKDD Explorations, Jan 2000. Volume 1, Issue 2.
* [14] Alzennyr da Silva, Yves Lechevallier, Francisco de Carvalho and Brigite Trousse, \"Mining Web Usage Data for Discovering Navigation Clusters\", Proceedings of the 11th IEEE Symposium on Computers and Communications (ISCC'06) 2006.
* [15] Ramakrishnan Srikant and Yinghui Yang, \"Mining Web Logs to Improve Website Organization\", WWW'10, Hong Kong ACM, May 1-5, 2001.
* [16] Sujni Paul \"An Optimized Distributed Association Rule Mining Algorithm in Parallel and Distributed Data Mining with XML Data for Improved Response Time\", International Journal of Computer Science and Information Technology, Volume 2, Number 2, April 2010.
* [17] Keshavamurthy B.N., Mitesh Sharma and Durga Toshniwal, \"Efficient Support Coupled Frequent Pattern Mining Over Progressive Databases\", International Journal of Database Management Systems (JDMS), Vol.2, No.2. May 2010.
Mr. Sandeep Singh Rawat received his Bachelor of Engineering in Computer Science from the National Institute of Technology - Surat (formerly REC - Surat), India and his Masters in Information Technology from the Indian Institute of Technology, Roorkee, India. He is pursuing his Ph.D. at Osmania University, Hyderabad, India. He has presented three technical papers at international conferences and published papers in journals including IEEE Delhi Section and IEEE Computer Society Chapter, India. His research interests include Data Mining, High Performance Computing and Machine Learning.

Dr. Lakshmi Rajamani is working as Professor and Head of the Department, Computer Science and Engineering, University College of Engineering, Osmania University, Hyderabad, India. She received her M.Sc (Statistics) from IIT Kanpur, M.Phil (Computer Methods) from the University of Hyderabad and PhD (CSE) from Jadavpur University, Kolkata. She has authored more than 25 papers in various National/International conferences and journals. Her research interests are in the areas of Neural Networks, Artificial Intelligence, Distributed Computing & Data Mining.
Collecting the interesting patterns using the required interestingness measures, which help us in discovering the sophisticated patterns that are ultimately used for developing the site. We study the application of data mining to educational log data collected from Guru Nanak Institute of Technology, Ibrahimpatnam, India. We have proposed a custom-built apriori algorithm to find the effective pattern analysis. Finally, analyzing web logs for usage and access trends can not only provide important information to web site developers and administrators, but also help in creating adaptive web sites.
**Keywords:** Association Rule Mining, Data Mining, Web log
arxiv-format/2407_14102v1.md
# MSSP: A Versatile Multi-Scenario Adaptable Intelligent Robot Simulation Platform Based on LIDAR-Inertial Fusion
Qiyan Li, Chang Wu\\({}^{*}\\), Yifei Yuan, Yuan You
To protect intellectual property, the source code of the simulation platform will be made publicly available following the acceptance of the paper. Qiyan Li, Chang Wu, and Yifei Yuan are with the Institute of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China (e-mail: [email protected]; [email protected]; [email protected]) (Corresponding author: Chang Wu). Yuan You is with XGRIDS, Shenzhen 518051, China (e-mail: [email protected]).
## I Introduction
Intelligent robots equipped with sensor suites find wide applications across various domains such as service guidance [1], field exploration [2], agricultural supervision [3], and factory logistics [4]. Due to their high-precision ranging capabilities and adaptability to diverse environments [5], LIDAR SLAM technologies have rapidly developed and become the mainstream solution for complex tasks. Despite advancements in the LIDAR SLAM field and the emergence of high-performance frameworks, several challenges persist: (1) The emergence of various LIDARs, including mechanical rotating LIDARs and recent solid-state LIDARs, makes it impractical to acquire all types of sensors for validation and testing, due to cost and limitations in hardware procurement and deployment. This has resulted in relatively insufficient research on the adaptability of existing algorithms to different sensors, thereby restricting in-depth exploration of algorithm performance and robustness. (2) As application scenarios for intelligent robots become increasingly complex and varied, traditional data collection methods are repetitive and cumbersome and fail to cover all potential scenarios. This severely limits the development and widespread application of intelligent robot technology in settings such as jungle and cave exploration [6, 7], as well as deep-sea and deep-space exploration [8, 9]. (3) Current validation and evaluation of SLAM algorithms rely primarily on ground truth provided by GPS or other global positioning devices [10, 11]. This approach not only incurs hardware deployment and device costs but also introduces inherent biases due to errors in the global positioning sensors themselves. Consequently, the credibility of experimental results is affected, limiting detailed studies of algorithm performance.
To address these challenges in LIDAR SLAM, simulation technology offers an efficient and advantageous solution. By using precise simulation models, we can emulate various LIDAR sensors in virtual environments, eliminating the costs and hardware-deployment issues associated with physical devices. Moreover, advanced simulation technology allows us to create customized virtual environments as needed, avoiding the time and cost constraints of data collection and the difficulty or impossibility of collecting data in extreme scenarios [12, 13]. Additionally, simulation environments can provide ground truth information with absolute accuracy, enhancing the credibility of SLAM accuracy evaluations and enabling more detailed assessments of algorithm performance. Therefore, in this paper, we propose a multi-scenario adaptable intelligent robot simulation platform based on LIDAR-inertial fusion, named MSSP. Specifically, the innovations of our platform include:
1. We have designed and built a multi-scenario adaptable intelligent robot simulation platform based on LIDAR-inertial fusion in Gazebo, called MSSP. This platform is equipped with mechanical and solid-state LIDARs and IMU sensors, supporting both manual control and autonomous tracking modes. Additionally, it provides ground truth information with absolute accuracy of the robot's position in real-time through Gazebo plugins. The platform is highly user-friendly and extremely extensible.
2. The simulation platform supports the convenient import and custom creation of various types of virtual simulation environments, including various types of indoor and outdoor environments, as well as special scenarios such as degraded, unstructured, and dynamic environments. This capability overcomes the limitations of datasets and real-world environments, covering mainstream research and application scenarios for intelligent robots.
3. The simulation platform supports the validation and evaluation of LIDAR SLAM algorithms. By utilizing the ground truth information with absolute accuracy provided by simulation, we could perform detailed analyses of the algorithms, avoiding the inherent errors present in global positioning devices. Our analysis and evaluation of various SLAM algorithms across different scenarios demonstrate the platform's effectiveness and practical value.
4. We make the simulation platform publicly available on GitHub to share our findings and contribute to the open-source community. In the future, developers can expand the platform's functionalities according to their specific needs, such as adding new sensors or virtual environments, thereby simplifying the process of algorithm development and validation.
## II System Overview
Figure 1 illustrates the system architecture of the MSSP simulation platform, which primarily comprises two core components: the robot system simulation and the SLAM algorithm evaluation, both implemented within the ROS environment [14].
The robot system simulation comprises two key modules: the robot model and the simulation environment, both implemented in Gazebo. Leveraging the Gazebo simulation tool, the robot model and simulation environment are loaded and can be customized, edited, and modified through a graphical user interface (GUI) or programmatically. The Gazebo simulation environment is constructed from various predefined models arranged in specific layouts to simulate real-world scenarios. The robot model consists of a motion chassis and a sensor suite: the motion chassis, controlled through a motion control module (implemented via Gazebo's built-in model differential control plugin), receives external inputs, allowing users to drive the robot within the simulation environment via manual control or autonomous tracking. During the robot's movement, the onboard sensor suite (including LIDAR and IMU) generates virtual sensor data within Gazebo through the LIDAR and IMU modules (implemented via Gazebo's sensor plugins) to mimic real-world perception capabilities, achieving detailed scans of the external environment. The generated sensor data, primarily point clouds and IMU measurements, are transmitted to the subsequent LIDAR SLAM processing modules to obtain the environmental map and robot localization. Then, leveraging the system's pose acquisition module (implemented via Gazebo's built-in model plugin), ground truth information with absolute accuracy is obtained. Based on this, we can accurately and comprehensively evaluate the performance of various SLAM algorithms in different environments, providing a solid experimental foundation for further research and practical applications.
In the following sections, we will elaborate on the specific details of the robot model creation and robot motion control methods. Details regarding the configuration of the simulation environment and testing of SLAM algorithms will be covered in Sections III and IV, respectively.
### _Robot Model Creation_
The simulation robot model is constructed using .xacro files, which are organized in XML format. These files allow for adjusting the specifications and configurations of the robot through code editing and parameter settings, thereby simplifying the management of complex robot models. Figure 2 illustrates the detailed system architecture of the simulated robot model, comprising two core components: the motion chassis and the sensor suite. The design of the motion chassis is inherited from Diff-Robot (diff_wheeled_robot), featuring two actively driven differential wheels and two passive omni-directional wheels, with radii of 4 cm and 2.5 cm, respectively. The differential control of the drive wheels enables complex driving and steering operations, while the presence of omni-directional wheels ensures the stability of the robot, enabling it to achieve stable and reliable motion in the Gazebo environment.
Furthermore, to give the sensors an optimal sensing range and the most effective environmental scanning coverage, the sensor suite, comprising LIDAR and IMU sensors, is mounted on a platform with dimensions of 40 cm \(\times\) 10 cm \(\times\) 25 cm. This platform is rigidly connected to the motion chassis.
1. **LIDAR**: The robot model is equipped with various types of LIDAR, including traditional mechanical rotating LIDARs (e.g., Velodyne_HDL_32E (velodyne_simulator)) and emerging solid-state LIDARs (e.g., the Livox series (livox_laser_simulation)). Users can select an appropriate LIDAR based on their specific requirements. The LIDARs are positioned at different locations on the platform (with the mechanical LIDAR placed in the center and the solid-state LIDAR positioned at the front) to achieve maximum environmental coverage, generating environmental scan data at 10 Hz.
2. **IMU**: The robot model is equipped with a 9-axis IMU sensor providing system acceleration, angular velocity, and magnetic field direction information. During simulation, the frequency of the IMU can be adjusted as needed and is set to 200 Hz in our system.
In the simulation platform, the LIDAR and IMU sensor data are augmented with appropriate noise to simulate real-world conditions. The noise can be adjusted or removed according to experimental requirements. In contrast to practical testing and validation of SLAM algorithms, the simulation platform obtains ground truth using the Gazebo built-in model plugin (libgazebo_ros_p3d.so), instead of simulated data from GPS or other global positioning devices. This plugin computes the position of the robot's body frame relative to a fixed reference frame in the simulated world. It publishes the robot's three-dimensional coordinates (position information) and quaternion orientation information through the ROS interface in standard message formats at a fixed frequency (set to the maximum output frequency of all sensors in our platform, 200 Hz). This approach ensures that the ground truth is absolutely accurate and error-free.
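As a concrete illustration, the sketch below logs the plugin's Odometry output as TUM-style trajectory lines that can later serve as the evaluation reference. It is a minimal example; the topic name `/ground_truth/state` is an assumed placeholder that depends on how the plugin is configured in the robot description.

```python
# Minimal sketch: logging the ground-truth odometry published by
# libgazebo_ros_p3d.so. The topic name "/ground_truth/state" is an
# assumption: it is set by the plugin configuration in the .xacro file.
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    # TUM-format line: timestamp tx ty tz qx qy qz qw
    print("%.6f %f %f %f %f %f %f %f" %
          (msg.header.stamp.to_sec(), p.x, p.y, p.z, q.x, q.y, q.z, q.w))

if __name__ == "__main__":
    rospy.init_node("ground_truth_logger")
    rospy.Subscriber("/ground_truth/state", Odometry, callback)
    rospy.spin()
```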
### _Robot Motion Control_
Effective motion control is crucial for the successful simulation of intelligent robots. In this letter, we adopt the differential drive model [15] as the robot motion control mechanism, designing and implementing both manual control and autonomous tracking modes to generate control signals, ensuring the robot moves precisely according to user intentions. Differential drive control is a widely used control strategy in wheeled robots, based on the principle of independently adjusting the speeds of the robot's wheels on either side. By controlling the rotational speeds of the wheels on each side, the robot's path and direction can be altered. The manual control mode relies on proactive human input, providing a simple and flexible way to achieve basic robot motion and control, suitable for various experimental scenarios. Autonomous tracking control, on the other hand, sacrifices some flexibility but enables more precise and smooth robot control and path navigation, optimizing the robot's performance in complex environments. In the following sections, we will elaborate on the principles and specific implementations of these two motion control methods.
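For reference, the kinematics underlying this control strategy can be sketched in a few lines. The drive wheel radius follows the 4 cm value of the model above, while the track width is an assumed placeholder, not a value quoted in this letter.

```python
# Differential drive kinematics sketch: convert a body twist (v, w)
# into left/right wheel angular velocities and back.
# R_WHEEL = drive wheel radius (4 cm per the model above);
# TRACK = distance between the drive wheels (assumed placeholder).
R_WHEEL = 0.04   # m
TRACK = 0.30     # m, assumption

def twist_to_wheels(v, w):
    """v: linear velocity [m/s], w: angular velocity [rad/s]."""
    v_r = v + 0.5 * TRACK * w   # right wheel linear speed
    v_l = v - 0.5 * TRACK * w   # left wheel linear speed
    return v_l / R_WHEEL, v_r / R_WHEEL   # wheel angular speeds [rad/s]

def wheels_to_twist(wl, wr):
    """Inverse mapping from wheel angular speeds to the body twist."""
    v_l, v_r = wl * R_WHEEL, wr * R_WHEEL
    return 0.5 * (v_l + v_r), (v_r - v_l) / TRACK
```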
#### III-B1 Manual Control of the Robot
The manual control of the robot is achieved by defining a mapping relationship between keyboard keys and robot actions, resulting in an intuitive control interface. This allows the operator to adjust the robot's direction and speed through simple keyboard operations. Subsequently, velocity and orientation information is published in the Twist message format, which is received by the robot's motion chassis and converted into specific wheel speed control signals. This ensures that the robot can move precisely according to the operator's intentions.
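A minimal sketch of such a key-to-Twist mapping is shown below; the bindings, speeds, and the `/cmd_vel` topic name are illustrative choices rather than the exact configuration of our platform.

```python
# Minimal keyboard teleoperation sketch: map key presses to Twist
# commands. "/cmd_vel" is the conventional command topic of the Gazebo
# differential drive plugin, but it is configurable.
import sys, termios, tty
import rospy
from geometry_msgs.msg import Twist

BINDINGS = {"w": (0.2, 0.0), "s": (-0.2, 0.0),   # forward / back [m/s]
            "a": (0.0, 0.5), "d": (0.0, -0.5)}   # turn left / right [rad/s]

def get_key():
    # Read a single raw keystroke from the terminal.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

if __name__ == "__main__":
    rospy.init_node("manual_control")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    while not rospy.is_shutdown():
        key = get_key()
        if key == "\x03":   # Ctrl-C terminates the node
            break
        v, w = BINDINGS.get(key, (0.0, 0.0))
        msg = Twist()
        msg.linear.x, msg.angular.z = v, w
        pub.publish(msg)
```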
#### III-B2 Autonomous Tracking of the Robot
Autonomous tracking of the robot combines the B-spline path generation algorithm [16] and the Pure Pursuit tracking algorithm [17] to enhance control accuracy. In the path generation module, using the model insertion function of the Gazebo simulation tool, key discrete points on the path are first visually marked, which defines the shape of the curve. These discrete points are then used to generate a two-dimensional cubic B-spline curve that ensures smoothness and continuity of the path, with 5000 points sampled for subsequent use in the autonomous tracking module. In the tracking module, the system obtains the robot's current position and orientation by subscribing to odometry messages in real time. This information, combined with the target path points in the generated trajectory, allows the system to calculate the angular difference and distance between the current position and the target point. Based on the system's predefined linear velocity (set to 0.2 or 0.4 m/s as needed in our platform), the system computes the angular velocity required for directional
Fig. 1: System architecture of the MSSP simulation platform. This platform comprises two core components: the robot system simulation (shown in the left pink rectangular box) and the SLAM algorithm evaluation (shown in the right yellow rectangular box).
Fig. 2: System architecture of the robot simulation model. The left image shows the overall structure and detailed dimensions of the robot model, while the right one presents detailed information on the sensor suite and motion chassis.
control and publishes these control signals to the robot's motion control chassis to guide the robot towards the target path points. When the robot's distance to the target path point falls below a threshold (set to 0.01 m in the platform, adjustable based on practical needs), the system updates the target point and continues the tracking operation. This precise control mechanism ensures that the robot can follow the predetermined B-spline curve accurately.
Figure 3 illustrates the detailed process of autonomous tracking. In the top image, the red points are the user-defined discrete path points, and the yellow point indicates the end of the path. In the bottom image, the red curve represents the generated target path, and the pure pursuit tracking algorithm achieves autonomous tracking by continuously steering the robot (Robot Position) towards the target point (Target Point) and updating the target point persistently.
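The following sketch reproduces the essence of this pipeline with SciPy's spline routines and a simplified pursuit rule matching the description above (heading-error-proportional angular velocity, fixed linear velocity, 0.01 m target-switch threshold); the angular gain is an assumed illustrative value.

```python
# Sketch of the autonomous tracking pipeline: fit a cubic B-spline
# through the user-placed waypoints, sample 5000 path points, and steer
# towards the current target point with a simple pursuit rule.
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_path(waypoints, n_samples=5000):
    """waypoints: (N, 2) array of user-defined discrete points."""
    tck, _ = splprep(waypoints.T, s=0.0, k=3)   # cubic B-spline fit
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.stack([x, y], axis=1)

def pursuit_step(pose, path, idx, v=0.2, goal_tol=0.01, k_ang=2.0):
    """pose: (x, y, yaw); returns (linear vel, angular vel, target index)."""
    x, y, yaw = pose
    if np.hypot(*(path[idx] - (x, y))) < goal_tol and idx < len(path) - 1:
        idx += 1                                 # advance the target point
    tx, ty = path[idx]
    err = np.arctan2(ty - y, tx - x) - yaw       # heading error to target
    err = (err + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
    return v, k_ang * err, idx
```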
## III Simulation Environment
The generation and import of the simulation environment in Gazebo are achieved through .world files. These files are written in XML format and define various elements in the simulation world, including collections of robots and objects (such as buildings, tables, chairs, and trees), as well as global parameter information (such as sky, lighting, and physical properties). Users can customize the creation of simulation environments according to their specific needs or directly import publicly available environments provided by the official Gazebo repository or the open-source community to facilitate convenient development.
Users can flexibly create or configure the desired simulation environment through a graphical user interface (GUI) or programming. Configuring and creating virtual environments through Gazebo's graphical user interface (GUI) is convenient and intuitive. Using this approach, developers can visually select and arrange predefined models from Gazebo's model library, and customize scenes by combining these models in specific ways. Gazebo's built-in model library covers a wide range of predefined models, including environmental elements like buildings, trees, roads, and bridges. These predefined model files are organized in the Simulation Description Format (SDF), allowing developers to precisely define various aspects of the models or create specific models by writing or modifying SDF files according to their needs. For highly customized models, such as complex environmental structures, developers can create the 3D models using software like Blender or Maya, and then import them into Gazebo for further development.
Alternatively, directly editing .world files provides a higher level of customization. In .world files, developers can manually add or modify environmental features, and meticulously configure attributes of simulation objects such as position, size, material, and dynamic behaviors. Both methods empower developers to customize or generate virtual environments according to their specific requirements, freeing them from the constraints of datasets and real-world scenes.
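As an example of the programmatic route, a minimal .world file can be generated by filling an SDF template with model includes; the model names and poses below are purely illustrative.

```python
# Sketch: programmatically generating a minimal Gazebo .world file that
# includes a sun, a ground plane, and predefined models from the Gazebo
# model library. Model names and poses are illustrative placeholders.
WORLD_TEMPLATE = """<?xml version="1.0"?>
<sdf version="1.6">
  <world name="custom_world">
    <include><uri>model://sun</uri></include>
    <include><uri>model://ground_plane</uri></include>
    {models}
  </world>
</sdf>
"""
MODEL_TEMPLATE = ("<include><uri>model://{name}</uri>"
                  "<pose>{x} {y} 0 0 0 {yaw}</pose></include>")

models = [("oak_tree", 3.0, 1.0, 0.0), ("table", -2.0, 4.0, 1.57)]
body = "\n    ".join(MODEL_TEMPLATE.format(name=n, x=x, y=y, yaw=yaw)
                     for n, x, y, yaw in models)
with open("custom.world", "w") as f:
    f.write(WORLD_TEMPLATE.format(models=body))
```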
Gazebo provides a variety of official simulation world examples and comes equipped with numerous predefined model files, making it easy for developers to configure simulation environments according to their preferences. Additionally, the open-source community has contributed a wealth of simulated world environments. We select 10 environments with distinct features suitable for algorithm validation as scenarios for subsequent SLAM algorithm evaluation. Because the performance of different SLAM algorithms varies between indoor and outdoor environments, we categorize the candidate scenarios into indoor and outdoor types to comprehensively assess algorithm performance across different scene types.
Furthermore, localization and mapping in non-ideal scenarios pose significant challenges in LIDAR SLAM research, such as occlusion in dynamic scenes, feature scarcity in unstructured environments, and algorithm degradation in indoor long corridors. To support algorithm validation under such conditions, the aforementioned 10 environments include 5 non-ideal virtual simulation environments, covering degraded scenarios, dynamic scenes, and unstructured environments. Detailed information on the type and features of each scene is provided in Table I, and schematic diagrams of the scenes are depicted in Figure 4.
Fig. 3: B-spline Path Generation and Pure Pursuit Algorithm for Robot Autonomous Tracking
## IV Experiment
We conduct extensive validation experiments on mainstream LIDAR SLAM methods using the simulation platform we designed. The algorithm simulation and evaluation are conducted within the ROS system. We implemented our simulation platform and ran the algorithms on a desktop with an Intel i7-13700F CPU (16 cores, 4.1 GHz), 32 GB RAM, and an NVIDIA GeForce RTX 3060 GPU. Because different LIDAR scanning patterns cause significant variations in algorithm performance, we design the experiments in two groups: mechanical LIDAR experiments and solid-state LIDAR experiments. Multiple sequences are selected and recorded from the world environments of Section III, covering various scene types (indoor and outdoor, structured and challenging), both LIDAR modalities (mechanical and solid-state), and multiple control modes (manual control and autonomous tracking). Each sequence includes LIDAR and IMU sensor data, as well as ground truth information provided by Gazebo plugins. Basic information for each sequence can be found in the experimental results tables (Table II, Table IV).
### _Experiments on Mechanical LIDARs_
Table II presents detailed information about the test paths and evaluation results for the A-LOAM [27], LeGO-LOAM [28], LIO-SAM [10], FAST-LIO2 [11], Faster-LIO [29], Voxel-Map [30], and Point-LIO [31] algorithms. These paths encompass various environmental features used for testing SLAM algorithms, including indoor and outdoor, structured, dynamic, and degraded scenarios. These algorithms and scenarios are sufficient for demonstrating the extensive applicability and practical value of our platform. In our experiments, we use absolute pose error (APE) to quantify localization accuracy. The calculation of localization error is based on the EVO tool, an open-source trajectory error evaluation toolbox [32].
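For reproducibility, the APE computation can be sketched with evo's Python API as follows, assuming both trajectories are exported in TUM format; the file names are placeholders.

```python
# Sketch of the APE computation with the evo toolbox, assuming the
# estimated trajectory and the plugin-recorded ground truth are saved
# in TUM format (timestamp tx ty tz qx qy qz qw).
from evo.core import metrics, sync
from evo.tools import file_interface

traj_ref = file_interface.read_tum_trajectory_file("ground_truth.txt")
traj_est = file_interface.read_tum_trajectory_file("estimate.txt")

# Associate poses by timestamp, then align the estimate to the
# reference (SE(3) Umeyama alignment) before computing errors.
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
traj_est.align(traj_ref)

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("APE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))
```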
As shown in Table II, overall performance indicates that Fast-LIO2, Faster-LIO, and Voxel-Map perform notably well. Fast-LIO2 builds upon Fast-LIO [33] by leveraging an incremental kd-Tree structure to efficiently represent large-scale, dense point cloud maps. This approach allows Fast-LIO2 to directly register raw LIDAR points into the map without the need for manually designed feature extraction modules. It can utilize more detailed environmental features, thereby enhancing the algorithm's localization and mapping performance. Faster-LIO introduces a sparse incremental voxel data structure instead of a tree structure to organize point clouds. This change significantly improves SLAM computational efficiency while maintaining system accuracy. In contrast to traditional point cloud maps, Voxel-Map proposes a precise and adaptive representation method for probabilistic voxel maps. By modeling uncertainty in plane features due to laser point measurements and pose estimation, Voxel-Map achieves accurate registration of LIDAR frames, thereby enhancing SLAM algorithm localization performance.
It is noteworthy that the sequences Warehouse and District depict dynamic scenes, each containing a small number of moving pedestrians or personnel. Mainstream SLAM systems and point cloud registration methods typically assume
Fig. 4: Specific Schematic Diagrams of 10 Simulated Environments, from top left to bottom right, respectively corresponding to museum, factory, hospital, neighborhood, courtyard, corridor, district, warehouse, farmland, and desert scene.
static environments [34]. However, the presence of dynamic objects and significant occlusions can lead to feature tracking loss and consequently degrade pose estimation accuracy [35]. Moreover, accumulated scan data in point cloud maps may inadvertently capture unwanted paths due to dynamic objects, leading to \"ghosting effects\". These \"ghosts\" are treated as obstacles in the map, severely affecting subsequent localization and navigation performance of intelligent robots [36]. Figure 5 illustrates maps constructed by the Faster-LIO algorithm, highlighting \"ghosting\" effects caused by moving personnel (indicated by yellow dashed boxes). Given the rich environmental information and minimal occlusion effects from dynamic objects in sequences Warehouse-large and District-medium, their impact on SLAM algorithm localization accuracy is not significant.
### _Experiments on Solid-state LIDARs_
Compared to mechanical LIDARs, solid-state LIDAR offers advantages in cost and portability. However, characteristics such as limited field of view and non-repetitive scanning introduce challenges that can degrade LIDAR SLAM algorithms based on solid-state LIDARs and make them more susceptible to occlusion from dynamic objects [37]. We design a series of experiments to evaluate the localization and mapping performance of mainstream SLAM algorithms that support solid-state LIDARs in various environments. Our simulation platform offers seven LIDAR models from the Livox series for users to choose from, including Hap, Avia, Horizon, Mid-40, Mid-70, Mid-360, and Tele. Considering product maturity and compatibility with publicly available algorithms, three LIDAR models with different parameters and performance characteristics from the Livox series are selected for algorithm performance evaluation. Table III provides detailed parameter information of the LIDARs used in the solid-state LIDAR experiments.
Table IV presents detailed information about the test paths for solid-state LIDARs and the evaluation results for Fast-LIO2, Faster-LIO, Voxel-Map, and Point-LIO. Figure 6 compares the trajectories estimated by the solid-state LIDAR SLAM algorithms with the ground truth paths. Based on the results in Table IV and Figure 6, it can be observed that, compared to mechanical LIDARs, the localization accuracy of SLAM algorithms based on solid-state LIDAR is generally lower across all paths. In particular, drift is more likely to occur at narrow corners, as depicted in Figure 6 for the District and Hospital sequences, which degrades the positioning performance of SLAM algorithms in such scenarios. Additionally, in unstructured environments (e.g., Desert), the inherent lack of structural features further degrades localization performance. The limited FoV of solid-state LIDAR exacerbates the drift of the algorithms in these types of environments.
It is noteworthy that in the Corridor sequence, which depicts a degraded long corridor environment, the Fast-LIO2 algorithm exhibits significant drift. Figure 7 shows the localization trajectories of various SLAM algorithms in Corridor sequence, along with the detailed trajectory output by Fast-LIO2. From the upper part of Figure 7, it can be observed that Fast-LIO2 experiences significant drift at narrow corners 1, 2, and 3 in the Corridor sequence, leading to a sharp decline in algorithm performance. By analyzing the execution process, it is determined that the drifts at 1, 2, and 3 occur at approximately 315s, 385s, and 438s, respectively, as shown in the lower part of Figure 7.
To determine whether the errors at these corners are caused by IMU pose prediction or LIDAR point cloud registration, we plot the relative positioning error (RPE) of the IMU prior predictions and LIDAR posterior updates relative to the ground truth, as shown in Figure 8. From Figure 8, it is clear that the specific times of significant system drift correspond with the results in Figure 7. Additionally, in the detailed magnification of Figure 8, the error in the updated trajectory
Fig. 6: (a)–(h) respectively show the comparison between the localization results and ground truth path in District, Neighborhood, Factory, Desert, Museum, Hospital, Warehouse, and Courtyard sequence.
always appears before the prediction error, indicating that the drift in the system pose at these points is initially caused by point cloud registration. The subsequent predictions based on these updated poses then exacerbate the cumulative error.
Additionally, to further verify and illustrate the causes of the LIDAR registration errors and their impact on positioning results, we plot, for each frame, the total error between the nearest-neighbor plane normals estimated by Fast-LIO2 at the LIDAR points and the ground truth, as shown in Figure 9. The ground-truth normals are obtained by fitting the simulated error-free point clouds. As shown in Figure 9, there are significant spikes in the plane fitting errors at 315s, 385s, and 438s. This indicates that at these points the plane fitting errors are substantial, leading to distortions in the LIDAR point cloud registration. Consequently, this causes drift in the algorithm at these locations, affecting its overall positioning performance.
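A sketch of this normal-error computation is given below: the ground-truth normal is fitted to the simulated noise-free neighborhood by SVD, and the per-point error is the sign-invariant angle to the estimated normal. This is an illustrative reconstruction of the analysis, not the exact evaluation code.

```python
# Sketch: angle between an estimated plane normal and a ground-truth
# normal fitted to the simulated, noise-free neighborhood via SVD.
import numpy as np

def fit_plane_normal(points):
    """points: (k, 3) nearest-neighbor coordinates; returns unit normal."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value spans
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def normal_angle_error(n_est, n_gt):
    """Angle [rad] between two normals, ignoring their sign ambiguity."""
    n_est = n_est / np.linalg.norm(n_est)
    n_gt = n_gt / np.linalg.norm(n_gt)
    return np.arccos(np.clip(abs(n_est @ n_gt), -1.0, 1.0))
```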
## V Extended Research
In this paper, we have elaborated on a multi-scenario adaptable intelligent robot simulation platform based on LIDAR-inertial fusion. This platform integrates various sensors and simulation environments, providing a versatile tool for research on robotic systems and algorithms. The platform demonstrates significant scalability, and through continuous technological upgrades and feature expansions in the future, it will further enhance its performance and application scope. It supports a wide range of needs from basic research to complex applications, laying a solid foundation for future developments. The scalability is mainly reflected in the following four aspects:
### _Sensor Expansion_
The current platform integrates various types of LIDAR and IMU sensors. In the future, this platform can add visual sensors such as cameras and infrared sensors to enhance system performance and adaptability in complex scenarios (e.g., degraded, low-light, or no-light environments) [38], providing support for tasks like autonomous driving and
Fig. 8: Comparison of RPE errors between IMU predictions and LIDAR updates relative to the ground truth.
Fig. 7: Estimated path of various SLAM algorithms and detailed localization trajectory of Fast-LIO2 in corridor sequence.
Fig. 9: Error Variation between the LIDAR point nearest neighbor plane normals estimated by Fast-LIO2 and the ground truth at each frame.
search and rescue missions. Additionally, integrating GPS or other global positioning devices would allow assessing the impact of inherent sensor errors on algorithm evaluation. Furthermore, underwater sensors such as sonar and DVL (Doppler Velocity Log) could be added to the simulation platform to support marine exploration and underwater infrastructure development [39, 40].
### _Environment Expansion_
The current platform offers a collection of simulated environments, which can be expanded in the future to enhance diversity and realism. The platform supports custom design and customization to generate a wider range of virtual scenarios, including underwater and marine environments, harsh weather conditions like rain and snow, and even simulations of Martian or space environments. Additionally, leveraging digital twin technology [41] to generate highly realistic virtual environments based on real-world scenes allows for the replication of dynamic changes observed in the real world. This provides a more solid foundation for the practical deployment of robot technologies.
### _Carrier Morphology Expansion_
Currently, the platform provides models of wheeled robots. In the future, this platform can expand support to various forms such as autonomous vehicles and unmanned aerial vehicles (UAVs). This will enrich the platform's applications in urban and highway environments as well as low-altitude scenarios. Additionally, the platform will include specialized robot models such as robotic dogs and legged robots to adapt to complex terrains and diverse environments, including mountains, ruins, and dense forests, for exploration and operations.
### _Platform Functionality Expansion_
Currently, the platform's functionalities are primarily focused on evaluating localization accuracy. In the future, the platform can also add functionality for evaluating mapping accuracy. A significant advantage of simulation environments lies in providing accurate ground-truth maps, which serve as an ideal benchmark for evaluating mapping algorithms. Detailed testing and evaluation of the mapping effectiveness of algorithms including SLAM, NeRF (Neural Radiance Fields) [42], and 3DGS (3D Gaussian Splatting) [43] can then be conducted through the integration or development of mapping evaluation methods.
## VI Conclusions
This paper presents a versatile multi-scenario adaptable intelligent robot simulation platform based on LIDAR-Inertial fusion. The platform includes an intelligent robot model equipped with a sensor suite, capable of freely moving in the simulation environment through manual control or autonomous tracking. The platform supports the convenient import and custom creation of various simulation environments, including structured, unstructured, dynamic, and degraded environments. Additionally, the platform provides ground truth information with absolute accuracy, facilitating detailed analysis and evaluation of various SLAM algorithms. Experiments conducted with mechanical and solid-state LIDARs demonstrate the extensive adaptability and practicality of our platform. Furthermore, the platform exhibits strong extensibility in sensor simulation, environment creation, and map evaluation.
In the future, we plan to develop a GPU-based solid-state LIDAR simulation plugin to enhance the simulation efficiency of the platform in large-scale virtual scenarios.
## References
* [1] Valerio Magnago, Marco Andreetto, Stefano Divan, Daniele Fontanelli, and Luigi Palopoli. Ruling the control authority of a service robot based on information precision. In _2018 IEEE International Conference on Robotics and Automation (ICRA)_, pages 7204-7210. IEEE, 2018.
* [2] Rakesh Shrestha, Fei-Peng Tian, Wei Feng, Ping Tan, and Richard Vaughan. Learned map prediction for enhanced mobile robot exploration. In _2019 International Conference on Robotics and Automation (ICRA)_, pages 1197-1204. IEEE, 2019.
* [3] Enrico Bellocchio, Francesco Crocetti, Gabriele Costante, Mario Luca Fravolini, and Paolo Valigi. A novel vision-based weakly supervised framework for autonomous yield estimation in agricultural applications. _Engineering Applications of Artificial Intelligence_, 109:104615, 2022.
* [4] Chong Wu, Zeyu Gong, Bo Tao, Ke Tan, Zhenfeng Gu, and Zhou-Ping Yin. Rf-slam: Uhf-rfd based simultaneous tags mapping and robot localization algorithm for smart warehouse position service. _IEEE Transactions on Industrial Informatics_, 19(12):11765-11775, 2023.
* [5] Qin Zou, Qin Sun, Long Chen, Bu Nie, and Qingquian Li. A comparative analysis of lidar slam-based indoor navigation for autonomous vehicles. _IEEE Transactions on Intelligent Transportation Systems_, 23(7):6907-6921, 2021.
* [6] Xu Liu, Guilherme V. Nardari, Fernando Cladera, Yuezhan Tao, Alex Zhou, Thomas Donnelly, Chao Qu, Steven W. Chen, Roseli A. F. Romero, Camillo J. Taylor, and Vijay Kumar. Large-scale autonomous flight with real-time semantic slam under dense forest canopy. _IEEE Robotics and Automation Letters_, 7(2):5512-5519, 2022.
* [7] Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael. Autonomous cave surveying with an aerial robot. _IEEE Transactions on Robotics_, 38(2):1016-1032, 2022.
* [8] Guorui Li, Tuck-Whye Wong, Benjamin Shih, Chunyu Guo, Luwen Wang, Jiaqi Liu, Tao Wang, Xiaobo Liu, Jiayao Yan, Baosheng Wu, Fajun Yu, Yursai Chen, Yiming Liang, Yaoting Xue, Chengjun Wang, Shunping He, Li Wen, Michael T. Tolley, A-Man Zhang, Cecilia Laschi, and Tiefeng Li. Bioinspired soft robots for deep-sea exploration. _Nature Communications_, 14(1):7097, November 2023.
* [9] Yongchang Zhang, Pengchun Li, Liale Quan, Longuqi Li, Guangyu Zhang, and Dekai Zhou. Progress, challenges, and prospects of soft robotics for space applications. _Advanced Intelligent Systems_, 5(3):2200071, 2023.
* [10] Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, and Daniele Rus. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In _2020 IEEE/RSJ international conference on intelligent robots and systems (IROS)_, pages 5135-5142. IEEE, 2020.
* [11] Wei Xu, Yixi Cai, Dongjiao He, Jiarong Lin, and Fu Zhang. Fast-lio2: Fast direct lidar-inertial odometry. _IEEE Transactions on Robotics_, 38(4):2053-2073, 2022.
* [12] Kamak Ebadi, Lars Bernreiter, Harel Biggie, Gavin Catt, Yun Chang, Arghya Chatterjee, Christopher E Denniston, Simon-Pierre Deschenes, Kyle Harlow, Shehryar Khattak, et al. Present and future of slam in extreme environments: The darpa subt challenge. _IEEE Transactions on Robotics_, 2023.
* [13] Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J Bett, Daniel Jones, Veerle AI Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail, and Andrea Munafo. Aurora, a multi-sensor dataset for robotic ocean exploration. _The International Journal of Robotics Research_, 41(5):461-469, 2022.
* [14] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, Andrew Y Ng, et al. Ros: an open-source robot operating system. In _ICRA workshop on open source software_, volume 3, page 5. Kobe, Japan, 2009.
* [15] J. Borenstein, H. R. Everett, L. Feng, and D. Wehe. Mobile robot positioning: Sensors and techniques. _Journal of Robotic Systems_, 14(4):231-249, 1997.
* [16] Kaihuai Qin. General matrix representations for b-splines. In _Proceedings Pacific Graphics '98. Sixth Pacific Conference on Computer Graphics and Applications (Cat. No.98EX208)_, pages 37-43, 1998.
* [17] Craig Coulter. Implementation of the pure pursuit path tracking algorithm. 1992.
* [18] Prabibit Kaur, Zichuan Liu, and Weisong Shi. Simulators for mobile social robots: State-of-the-art and challenges. In _2022 Fifth International Conference on Connected and Autonomous Driving (MetroCAD)_, pages 47-56, 2022.
* [19] Erdogan. Dataset-of-gazebo-worlds-models-and-maps. [https://github.com/mlherd/Dataset-of-Gazebo-Worlds-Models-and-Maps](https://github.com/mlherd/Dataset-of-Gazebo-Worlds-Models-and-Maps), 2021.
* [20] Alexandre Abadie. iot-lab. [https://github.com/iot-lab/](https://github.com/iot-lab/), 2022.
* [21] Belal ibrahim. dynamic_logistics_warehouse. [https://github.com/belal-ibrahim/dynamic_logistics_warehouse](https://github.com/belal-ibrahim/dynamic_logistics_warehouse), 2021.
* [22] Gennaro Raiola, Enrico Mingo Hoffman, Michele Focchi, Nikos Tsagarakis, and Claudio Semini. A simple yet effective whole-body locomotion framework for quadruped robots. _Frontiers in Robotics and AI_, 7:528-473, 2020.
* [23] Gennaro Raiola, Michele Focchi, and Enrico Mingo Hoffman. Wolf: the whole-body locomotion framework for quadruped robots. _arXiv preprint arXiv:2205.06526_, 2022.
* [24] Quatela, A and Roberto, G. Delybot. [https://github.com/gmelk/bejytBot](https://github.com/gmelk/bejytBot), 2023.
* [25] Alireza Ahmadi, Lorenzo Nardi, Nived Chebrolu, and Cyrill Stachniss. Visual servoing-based navigation for monitoring row-crop fields. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_, pages 4920-4926. IEEE, 2020.
* [26] Miguel A Gonzalez-Santantamara. ros2_rover, July 2021.
* [27] Ji Zhang, Sanjiv Singh, et al. Loam: Lidar odometry and mapping in real-time. In _Robotics: Science and systems_, volume 2, pages 1-9. Berkeley, CA, 2014.
* [28] Tixiao Shan and Brendan Englot. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 4758-4765. IEEE, 2018.
* [29] Chunge Bai, Tao Xiao, Yajie Chen, Haoqian Wang, Fang Zhang, and Xiang Gao. Faster-lio: Lightweight tightly coupled lidar-inertial odometry using parallel sparse incremental voxels. _IEEE Robotics and Automation Letters_, 7(2):4861-4868, 2022.
* [30] Chongjian Yuan, Wei Xu, Xiyuan Liu, Xiaoping Hong, and Fu Zhang. Efficient and probabilistic adaptive voxel mapping for accurate online lidar odometry. _IEEE Robotics and Automation Letters_, 7(3):8518-8525, 2022.
* [31] Dongjiao He, Wei Xu, Nan Chen, Fanze Kong, Chongjian Yuan, and Fu Zhang. Point-lio: Robust high-bandwidth light detection and ranging inertial odometry. _Advanced Intelligent Systems_, 5(7):2200459, 2023.
* [32] Henri Rebecq, Timo Horstschafer, Guillermo Gallego, and Davide Scaramuza. Evo: A geometric approach to event-based 6-dof parallel tracking and mapping in real time. _IEEE Robotics and Automation Letters_, 2(2):593-600, 2016.
* [33] Wei Xu and Fu Zhang. Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter. _IEEE Robotics and Automation Letters_, 6(2):3317-3324, 2021.
* [34] Chenglong Qian, Zhaohong Xiang, Zhoruan Wu, and Hongbin Sun. Rf-lio: Removal-first tightly-coupled lidar inertial odometry in high dynamic environments. _arXiv preprint arXiv:2206.09463_, 2022.
* [35] Patrick Pfreundschuh, Hubertus F.C. Hendrikx, Victor Reijgwart, Renaud Dubé, Roland Siegwart, and Andrei Cramariuc. Dynamic object aware lidar slam based on automatic generation of training data. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 11641-11647. IEEE, 2021.
* [36] Hyungtae Lim, Sungwon Hwang, and Hyun Myung. Erasor: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3d point cloud map building. _IEEE Robotics and Automation Letters_, 6(2):2272-2279, 2021.
* [37] Jiarong Lin and Fu Zhang. Loam livox: A fast, robust, high-precision lidar odometry and mapping package for lidars of small fov. In _2020 IEEE international conference on robotics and automation (ICRA)_, pages 3126-3131. IEEE, 2020.
* [38] Liang Qin, Chang Wu, Xiaotong Kong, Yuan You, and Zhiqi Zhao. Bvt-slam: A binocular visible-thermal sensors slam system in low-light environments. _IEEE Sensors Journal_, 24(7):11599-11609, 2024.
* [39] Easton Potokar, Spencer Ashford, Michael Kaess, and Joshua G Mangelson. Holoocean: An underwater robotics simulator. In _2022 International Conference on Robotics and Automation (ICRA)_, pages 3040-3046. IEEE, 2022.
* [40] Peng Chen, Cedric Jamet, Zhihua Mao, and Delu Pan. Ole: A novel oceanic lidar emulator. _IEEE Transactions on Geoscience and Remote Sensing_, 59(11):9730-9744, 2021.
* [41] Michael Grieves. Digital twin: Manufacturing excellence through virtual factory replication. 30 2015.
* [42] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. _Communications of the ACM_, 65(1):99-106, 2021.
* [43] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. _ACM Trans. Graph._, 42(4):139-1, 2023. | This letter presents a multi-scenario adaptable intelligent robot simulation platform based on LIDAR-inertial fusion, with three main features: (1 The platform includes an versatile robot model that can be freely controlled through manual control or autonomous tracking. This model is equipped with various types of LIDAR and Inertial Measurement Unit (IMU), providing ground truth information with absolute accuracy. (2 The platform provides a collection of simulation environments with diverse characteristic information and supports developers in customizing and modifying environments according to their needs. (3 The platform supports evaluation of localization performance for SLAM frameworks. Ground truth with absolute accuracy eliminates the inherent errors of global positioning sensors present in real experiments, facilitating detailed analysis and evaluation of the algorithms. By utilizing the simulation platform, developers can overcome the limitations of real environments and datasets, enabling fine-grained analysis and evaluation of mainstream SLAM algorithms in various environments. Experiments conducted in different environments and with different LIDARs demonstrate the wide applicability and practicality of our simulation platform. The implementation of the simulation platform is open-sourced on Github. | Summarize the following text. | 219 |
arxiv-format/2210_06891v4.md | # Experimental Design for Multi-Channel
Imaging via Task-Driven Feature Selection
Stefano B. Blumberg\\({}^{1,2}\\), Paddy J. Slator\\({}^{2,3}\\), Daniel C. Alexander\\({}^{2}\\)
\\({}^{1}\\)Centre for Artificial Intelligence, Department of Computer Science, University College London
\\({}^{2}\\)Centre for Medical Image Computing, Department of Computer Science, University College London
\\({}^{3}\\)Cardiff University Brain Research Imaging Centre and School of Computer Science, Cardiff University
[email protected]
## 1 Introduction
Experimental design seeks a sampling scheme or design \(D=\{\textbf{d}^{1},\ldots,\textbf{d}^{C}\}\), where each \(\textbf{d}^{i}\), \(i=1,\ldots,C\), is a combination of experimental variables that are under the control of the experimenter, that provides data optimally informative for some criteria or task Antony (2003); Pukelsheim (2006). The experimental outcome (measured data) of design \(D\) is a matrix \(X_{D}\in\mathbb{R}^{n\times C}\) with \(C\) corresponding measurements from each of \(n\) samples. The optimal choice of design depends on the experimental task, which we express as a function \(\mathcal{T}\) that maps \(X_{D}\) to a corresponding matrix \(Y\) of labels. Experimental design optimization seeks the design that maximizes the ability to perform the task, subject to constraints of time or cost, i.e.
\\[D^{*}=\\operatorname*{arg\\,min}_{D}L(\\mathcal{T}(X_{D}),Y),\\ \\ \\text{subject to}\\ \\ |D|=C \\tag{1}\\]
where \\(L\\) is a loss function. Here we limit cost simply to the size \\(C\\) of \\(D\\); \\(\\mathcal{T}\\) can be any task, but often in imaging involves estimating/mapping model parameters e.g. via gradient-descent model-fitting in every pixel/voxel, as in Alexander (2008); Cercignani & Alexander (2006), or machine learning as in Gyori et al. (2022); Waterhouse & Stoyanov (2022).
In imaging, as illustrated in figure 1a, \(X_{D}\) is typically a collection of \(n\) pixels or voxels with \(C\) channels (e.g. RGB images have \(C=3\)). The choice of \(\textbf{d}^{i}\in D\) controls the contrast in channel \(i\) and is global to the whole channel. Compact (small \(C\)) but informative designs are often critical in reducing acquisition or development costs in real-world applications. Examples include acquiring magnetic resonance imaging (MRI) contrasts, e.g. to estimate and map microstructural tissue properties within the time a patient can stay still in a scanner Alexander (2008), or manufacturing affordable hyperspectral imaging devices including a few well-chosen spectral filters, e.g. for estimating tissue oxygenation Waterhouse & Stoyanov (2022).
Standard approaches for experimental design typically optimize \\(D\\) over a continuous space, for the task of model parameter estimation. For example, a classical approach still widely deployed uses the Fisher matrix Montgomery (2001), whilst more recent approaches use the paradigm of sequential Bayesian experimental design Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021). Both require a priori model choice, limiting consideration to model-based tasks, and even specific model-parameter choices or assumptions on their prior distribution. Moreover, such approaches rapidly become computationally intractable as the dimension of the optimization increases.
Here we suggest a new task-driven paradigm for experimental design for real-world imaging applications, illustrated in figure 1b, that does not require a priori model specification and replaces high-dimensional continuous search with a subsampling problem. First, the paradigm requires training data \\(X_{\\bar{D}}\\) with \\(\\bar{C}\\) channels/measurements acquired using a design \\(\\bar{D}\\) that densely samples the measurement space. Secondly the paradigm selects a subset of size \\(C\\ll\\bar{C}\\) image channels from \\(X_{\\bar{D}}\\) (optimizing the design and choosing \\(X_{D}\\subset X_{\\bar{D}}\\)), coupled with the training of a high-performing neural network that executes the task \\(\\mathcal{T}\\) driving the experimental design. Thus, the new paradigm replaces the optimization in equation 1 with:
\\[D^{*},\\mathcal{T}^{*}=\\operatorname*{arg\\,min}_{D,\\mathcal{T}}L(\\mathcal{T}(X _{D}),Y)\\;\\;\\text{subject to}\\;\\;D\\subset\\bar{D}. \\tag{2}\\]
In this paradigm, the task must be specified a priori, but may go beyond standard model-based tasks that drive classical/Bayesian experimental design, to include 'model free' tasks such as missing data reconstruction. The training data requires only a small number of subjects/samples, so it may use specialized hardware, lengthy acquisitions, or even simulations. In practice, such acquisitions are often made during early development phases of imaging technologies to explore the range of sensitivity, which informs the choice of, and often provides, \(\bar{D}\). The paradigm we propose formalizes the exploitation of such data in experimental design for downstream systems designed for wide deployment and directly supports the use of deep learning for \(\mathcal{T}\).
In the new paradigm, the experimental design problem becomes similar to supervised feature selection, where the \\(\\bar{C}\\) image channels of \\(X_{\\bar{D}}\\) are considered features. In supervised feature selection, state-of-the-art approaches Wojtas and Chen (2020); Lee et al. (2022) couple feature selection with task optimization, however the structure of the data in typical supervised feature selection problems differs from those in experimental design for imaging. Feature selection algorithms typically assume most features are uninformative and the task is to 'identify a small, highly discriminative subset' Kuncheva et al. (2020) e.g. genes associated with drug response from the entire genome. In experimental design for imaging, however, most channels individually offer similar amounts of information to support task performance, since they view the same scene/sample but with often-subtle differences in contrast (see e.g. figure 6). Design optimization seeks a compact combination that covers all important aspects.
Therefore we propose TADRED, a novel method for TAsk-DRiven Experimental Design in imaging. TADRED couples feature scoring and task execution in consecutive networks. The scoring and subsampling procedure enables efficient identification of subsets of complementarily informative
Figure 1: a) An example of experimental design for imaging. In remote sensing hyperspectral imaging (see table 3), each observed wavelength \\(\\mathbf{d}^{t}\\) is chosen by the experimenter. The outcome of each \\(\\mathbf{d}^{t}\\) is a grayscale image - a channel of the resultant data \\(X_{D}\\) (RGB has \\(3\\) channels). b) The new paradigm for experimental design illustrated for qMRI. First, obtain image data \\(X_{\\bar{D}}\\) with a large number of \\(\\bar{C}\\) channels. Next, train a user-chosen task network, which drives design optimization to select \\(C<\\bar{C}\\) channels – we propose TADRED for this. We consider three distinct example tasks in experiments.
channels jointly with training a high-performing network for the task. TADRED also gradually reduces the full set of samples stepwise to obtain the subsamples, which improves optimization.
Key contributions are:
1. A new coupled subsampling-task paradigm (feature selection) for experimental design in imaging.
2. TADRED: a novel approach for supervised feature selection tuned specifically for experimental design in imaging. TADRED performs task-based image channel selection.
3. A demonstration of our approach on six datasets/tasks in both clinically-relevant MRI and remote sensing and physiological applications in hyperspectral imaging. TADRED outperforms (i) Classical experimental design, (ii) Recent application-specific published results, (iii) State-of-the-art approaches in supervised feature selection.
## 2 Related Work
**Approaches in Experimental Design** A typical task in experimental design is to optimize the design \(D\) for estimating model parameters. The most widely used classical approach in imaging uses the Fisher information matrix Pukelsheim (2006). However, for non-linear models, the optimization requires pre-specification of parameter values of interest, leading to circularity, e.g. the standard design for the VERDICT model, with primary application in prostate cancer detection and classification Panagiotaki et al. (2015a) (used as a baseline in table 1), is computed by optimizing the Fisher matrix for one specific combination of parameter values, despite aiming to highlight contrast in those parameters throughout the entire prostate. Approaches in the sequential Bayesian experimental design paradigm Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021) reduce this circularity by optimizing over combinations or ranges of parameter values. Recently, Blau et al. (2022) also implemented an experimental design optimization in a discrete space and obtained state-of-the-art performance and deployment time, by using reinforcement learning to map the history of designs and outcomes to the next design. However, the tasks driving experimental design in imaging are often 'model free' supervised tasks such as missing data reconstruction (tables 2, 3) to recover missing image channels. Classical Fisher-matrix experimental design or sequential Bayesian techniques do not apply in such problems. Furthermore, the sequential Bayesian techniques have been deployed on only small-scale experiments with simulated data, e.g. a simple localization problem for two sources. For example, experiments in Blau et al. (2022) have \(C\leq 2\) and \(D\in\mathbb{R}^{\text{dim}},\text{dim}\leq 6\). In contrast, e.g. the real-world experiment in table 1 has \(C\in\{110,55,28,14\}\) and \(D\in\mathbb{R}^{7\cdot C}\). Preliminary experiments suggest the application of these approaches to such high-dimensional problems is not computationally tractable with the published code/methods. These issues motivate the reformulation of the experimental design paradigm and the introduction of TADRED. Appendix-E is a broader review of experimental design for quantitative MRI (qMRI) and hyperspectral imaging.
**Supervised Feature Selection** operates either at the instance level e.g. identifying different salient parts of different images; or at the population level by selecting across all the instances. In imaging, each combination of acquisition parameters \\(\\mathbf{d}^{i}\\in D\\) is global across all image pixels/voxels, so channel-selection for experimental design must be population-wide. Recursive feature elimination (RFE) / backward selection Guyon et al. (2002); Scikit-Learn (2023); Kohavi and John (1997) are frameworks that seek the most informative set of features among a superset to inform a model or task. They work by eliminating the least informative features stepwise to reach a prescribed feature-set size. 'Feature Importance Ranking for Deep Learning' (FIRDL) Wojtas and Chen (2020), 'Self-Supervision Enhanced Feature Selection with Correlated Gates' (SSEFS) Lee et al. (2022) are considered state-of-the-art in feature selection, outperforming both classical (e.g. RFE) and recent approaches outlined in appendix-E. Both techniques are specifically designed to 'identify a small, highly discriminative' subset Kuncheva et al. (2020) of features from a larger group of mostly uninformative features. SSEFS, in a first step, uses a probabilistic approach to search for this subset, whilst also exploiting the presence of correlated subsets for enhanced performance. A second step then trains a network on the chosen subset to execute the task. FIRDL instead has a complex optimization procedure involving exploration-exploitation stochastic local search. SSEFS and FIRDL are detailed in appendix A and are baselines in later experiments.
In contrast to typical feature selection problems, most candidate choices in experimental design are informative: few, if any, features are uninformative so no single small discriminative set exists. SSEFS's first step seeks groups of correlated features, which is less useful in experiment design, as most image channels correlate strongly (examples in figure 5). FIRDL incorporates global information by performing multiple evaluations on different feature combinations. However, FIRDL's search for a discriminative subset is inappropriate in the experimental design application; its multiple evaluations of the task-execution network are redundant and result in covariate shift and overfitting.
Nevertheless, TADRED builds upon the basic principles of task-driven feature selection, which is the foundation of FIRDL and SSEFS's success. TADRED adopts the same dual-network architecture, but with a different optimization procedure tailored to the experimental design problem. Specifically, TADRED implements a novel combination of the dual selection/task network optimization within the paradigm of RFE/backwards selection. As such, it adopts a comparatively simple scoring procedure, which avoids the complicated and suboptimal joint optimization FIRDL/SSEFS require to search for a distinctively discriminative subset. TADRED's end-to-end dual-network training avoids FIRDL's multiple evaluations on different feature combinations, and TADRED's passing of information through the optimization procedure improves on both SSEFS and FIRDL.
Finally, PROSUB Blumberg et al. (2022) (baseline in table 2) is a previous attempt to equate experimental design with feature selection and also uses RFE. It uses a customized neural architecture search at every step and was designed specifically to address a measurement-selection problem in qMRI (data in table 2) where it achieves state-of-the-art performance. However, the technique does not naturally generalize to other tasks, which is a key motivation for TADRED. TADRED avoids PROSUB's cumbersome neural architecture search and implements instead a novel four-phase procedure in each step, which keeps the gradient updates smooth and allows feature selection at each step. Also, beyond standard RFE, TADRED efficiently passes information from the optimization on larger feature sets to smaller sets by passing information on the network weights across the steps, unlike PROSUB. These advances combine to enhance substantially the performance, portability and generalizability of the algorithm across diverse experimental design problems.
## 3 TADRED: TAsk-DRiven Experimental Design for Imaging
TADRED presents a novel approach to supervised feature selection, tailored to the particularities of the experimental design problem in imaging, and aims to solve equation 2. Section 3.1 describes an outer loop of the procedure, which is inspired by classical paradigms Kohavi & John (1997); Guyon et al. (2002), that gradually eliminates elements from the densely-sampled design \(\bar{D}\) in \(t=1,\ldots,T\) steps to obtain designs \(\bar{D}=D_{1}\supset\cdots\supset D_{T}\). This corresponds to performing supervised feature selection for decreasing sizes \(\bar{C}=C_{1}>\cdots>C_{T}\), where \(\{C_{t}\}_{t=1}^{T}\) are chosen by the user a priori. Then section 3.2 outlines an inner loop for training with fixed \(1\leq t\leq T\). Inspired by recent supervised feature selection advances Imrie et al. (2022); Wojtas & Chen (2020), TADRED trains two coupled networks at each step: a scoring network \(\mathcal{S}_{t}\), which scores individual elements of \(X_{\bar{D}}\) for importance to inform the subsampling, and a task network \(\mathcal{T}_{t}\), which performs the task driving the design, i.e. estimates \(Y\) from the chosen feature subset \(X_{D_{t}}\subset X_{\bar{D}}\). The training procedure is split into four phases that allow feature selection at each step, and is inspired by Karras et al. (2018); Blumberg et al. (2022), which produced enhanced optimization. The full procedure is outlined in algorithm 2.
### 3.1 Outer Loop
Across steps \(t=1,\ldots,T\) we consider decreasing feature set sizes \(\bar{C}=C_{1}>C_{2}>\cdots>C_{T}\) and perform supervised feature selection at each step in an inner loop (see section 3.2). Reducing feature set sizes stepwise aids the optimization procedure compared to e.g. training on all features then subsampling all at once (see table 5). The procedure passes information from the optimization on larger feature sets to smaller sets. Finally, the stepwise procedure efficiently produces a set of optimized designs (as is typical in supervised feature selection, see e.g. Wojtas & Chen (2020) and also in Waterhouse & Stoyanov (2022)), which can be useful for post-hoc selection of design size to balance economy (small \(C\)) with task performance. Whilst iterative subsampling also increases computational time, this is comparable to other supervised feature selection approaches (appendix D).
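A high-level sketch of the outer loop is given below; `train_inner_loop` stands in for the four-phase procedure of section 3.2, and its exact interface is an assumption for illustration.

```python
# High-level sketch of TADRED's outer loop. Notation follows the text:
# X_full has C_bar channels, and sizes is the user-chosen schedule
# C_1 > C_2 > ... > C_T (with C_1 = C_bar).
import numpy as np

def tadred_outer_loop(X_full, Y, sizes, train_inner_loop):
    mask = np.ones(X_full.shape[1], dtype=bool)   # D_1 = D_bar (all channels)
    designs, networks = [], []
    state = None                                  # carries network weights
    for C_t in sizes:
        # Inner loop: jointly learns feature scores and a task network
        # T_t, prunes the mask down to C_t features, and is warm-started
        # from the previous step's weights (passed via `state`).
        mask, task_net, state = train_inner_loop(X_full, Y, mask, C_t, state)
        designs.append(mask.copy())               # one candidate design per C_t
        networks.append(task_net)
    return designs, networks
```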
### 3.2 Inner Loop: Four-Phase Deep Learning Training
At step \(1\leq t\leq T\) of the outer loop, the inner loop constructs (i) a binary mask \(\textbf{m}_{t}\in\{0,1\}^{\bar{C}},||\textbf{m}_{t}||_{0}=C_{t}\) to subsample the features; (ii) a weight vector for the features \(\tilde{\textbf{s}}_{t}\in\mathbb{R}_{+}^{\bar{C}}\); (iii) a trained network \(\mathcal{T}_{t}\) to perform the task, which corresponds to solving the optimization problem:
\\[\\begin{split}\\underset{\\textbf{m}_{t},\\ \\mathcal{T}_{t},\\ \\tilde{\\textbf{s}}_{t}}{\\text{minimize}}& L(\\mathcal{T}_{t}(X_{D_{t}} \\odot\\tilde{\\textbf{s}}_{t}),Y),\\ \\ \\text{subject to}\\ ||\\textbf{m}_{t}||_{0}=C_{t},\\\\ &\\text{where}\\ X_{D_{t}}=\\textbf{m}_{t}\\odot X_{\\bar{D}}+(\\textbf {1}_{\\bar{C}}-\\textbf{m}_{t})\\odot X_{\\bar{D}}^{\\text{ill}},\\end{split} \\tag{3}\\]
the \\(\\odot\\) operation is the element-wise (Hadamard) product, which follows broadcasting rules when inputs have mismatched dimensions, \\(||\\cdot||_{0}\\) is the \\(L^{0}\\) norm, \\(\\textbf{1}_{\\bar{C}}\\) is a vector of \\(\\bar{C}\\) ones, and the 'feature fill' \\(X_{\\bar{D}}^{\\text{fill}}\\in\\mathbb{R}^{\\bar{C}}\\) is a hyperparameter that fills the removed features (we take the data median, see appendix C.2). The weight vector \\(\\bar{\\textbf{s}}_{t}\\) contains feature scores, which the training procedure uses to remove low-scoring features by setting corresponding values of the mask \\(\\textbf{m}_{t}\\) to \\(0\\).
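For concreteness, the masking-with-fill operation in equation 3 can be sketched as below; the function is illustrative and works identically on NumPy arrays or PyTorch tensors, since only broadcasting arithmetic is used.

```python
def subsample_with_fill(X, mask, x_fill):
    """Equation 3: X_D_t = m_t * X + (1 - m_t) * X_fill.

    X      : (n, C_bar) batch of densely-sampled data.
    mask   : (C_bar,) values in [0, 1]; binary once training converges.
    x_fill : (C_bar,) per-feature fill values, e.g. the data median.
    Broadcasting applies the mask feature-wise across all n samples.
    """
    return mask * X + (1.0 - mask) * x_fill
```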
**Scoring, Subsampling, and Task Execution** The core of the training procedure uses the forward/backward pass in algorithm 1. The full procedure in algorithm 2 uses the forward/backward pass to update feature scoring gradually in tandem with improving label prediction.
The procedure aims to learn a meaningful sample-independent feature score to rank the features. In practice, deep-learning training is performed in batches and not across the whole data. Therefore we first learn a sample-dependent feature score \\(\\sigma(\\mathcal{S}_{t}(X_{\\bar{D}}))=\\tilde{s}\\in\\mathbb{R}_{+}^{n\\times\\bar{C}}\\), where \\(\\mathcal{S}_{t}\\) is a neural network and \\(\\sigma:\\mathbb{R}\\rightarrow[0,\\infty)\\) is an activation function to ensure positive scores (we take \\(\\sigma=2\\cdot\\text{sigmoid}\\), so that at initialization \\(\\sigma(0)=1\\)). We then compute a sample-independent score \\(\\bar{\\textbf{s}}_{t}\\in\\mathbb{R}_{+}^{\\bar{C}}\\) as an average of \\(\\tilde{s}\\) across the \\(n\\) samples in \\(X_{\\bar{D}}\\). We also compute a combined score that aids task execution
\\[\\textbf{s}=\\alpha\\odot\\tilde{s}+(1-\\alpha)\\odot\\bar{\\textbf{s}}_{t},\\ \\ \\alpha\\in[0,1], \\tag{4}\\]
which balances the current learned sample-dependent score with a fixed global estimate of the sample-independent score and allows smooth integration between the two. The mix parameter \\(\\alpha\\) is set in the optimization procedure to shift the balance from sample-dependent to sample-independent scores.
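A sketch of this score combination (PyTorch-style; `score_net` stands in for \\(\\mathcal{S}_{t}\\), and the \\(\\sigma=2\\cdot\\text{sigmoid}\\) activation follows the text):

```python
import torch

def combined_score(score_net, X, s_bar, alpha):
    """Equation 4: s = alpha * s_tilde + (1 - alpha) * s_bar.

    score_net(X): (n, C_bar) raw outputs of the scoring network S_t.
    s_bar       : (C_bar,) fixed sample-independent score, broadcast over n.
    alpha       : scalar in [0, 1]; scheduled during training so the
                  sample-independent term eventually dominates.
    """
    s_tilde = 2.0 * torch.sigmoid(score_net(X))  # positive, equals 1 at init
    return alpha * s_tilde + (1.0 - alpha) * s_bar
```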
We use a mask \\(\\textbf{m}_{t}\\in[0,1]^{\\bar{C}}\\) to subsample the features \\(X_{D_{t}}=\\textbf{m}_{t}\\odot X_{\\bar{D}}+(\\textbf{1}_{\\bar{C}}-\\textbf{m}_{t})\\odot X_{\\bar{D}}^{\\text{fill}}\\), and replace the removed features with default values \\(X_{\\bar{D}}^{\\text{fill}}\\) to retain the shape of the data structures throughout training. Rather than learning the mask \\(\\textbf{m}_{t}\\) end-to-end e.g. using a sparsity term/prior as in Lee et al. (2022), we modify elements of \\(\\textbf{m}_{t}\\) during our training procedure. This is important to enable the outer loop of the procedure to output candidate designs at each step.
We now estimate the target \\(Y\\) with \\(\\widehat{Y}=\\mathcal{T}_{t}(\\textbf{s}\\odot X_{D_{t}})\\) from the subsampled data weighted feature-wise by the score, then calculate the loss \\(L(\\widehat{Y},Y)\\). This weighting allows gradients to flow end-to-end.
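Combining the pieces above gives a sketch of this forward pass (cf. algorithm 1); `combined_score` and `subsample_with_fill` are the illustrative helpers from the previous sketches.

```python
def forward_pass(score_net, task_net, X, Y, mask, s_bar, alpha, x_fill, loss_fn):
    """Sketch of scoring, subsampling, and task execution in one pass."""
    s = combined_score(score_net, X, s_bar, alpha)  # equation 4
    X_sub = subsample_with_fill(X, mask, x_fill)    # equation 3
    Y_hat = task_net(s * X_sub)                     # score-weighted input
    return loss_fn(Y_hat, Y)  # backpropagating this loss updates both networks
```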
**Training Procedure** The key challenges in the design of the training procedure in the inner loop are how to (i) obtain meaningful global sample-independent scores \\(\\bar{\\textbf{s}}_{t}\\) from learnt sample-dependent scores \\(\\tilde{s}\\), and (ii) differentiate through a masking operation to compute \\(\\textbf{m}_{t}\\). TADRED's four-phase procedure, inspired by Karras et al. (2018); Blumberg et al. (2022), gradually modifies the neural network structure during deep learning training, moving from learning a simpler task (learning sample-dependent scores and retaining most features) to a more complex task (learning sample-independent scores and removing more features) by linear interpolation of network components. This improves optimization over directly learning the more difficult task. Thus we address (i) by first learning \\(\\tilde{s}\\) and then progressively reducing the final score to its average across samples, i.e. \\(\\textbf{s}=\\bar{\\textbf{s}}_{t}\\) (in algorithm 2 phase 2), and (ii) by progressively setting elements of \\(\\textbf{m}_{t}\\) to zero, i.e., during training, mask elements are real valued but gradually reduce to binary values (in phase 3).
The training procedure differs between the first outer loop step \\(t=1\\) and steps \\(t\\geq 2\\): at step \\(t=1\\) we train on all \\(\\bar{C}\\) features and have no information from previous steps, whereas at steps \\(t\\geq 2\\) we perform supervised feature selection for the user-chosen \\(C_{t}\\) (solving equation 3) and initialize training from step \\(t-1\\). We describe each step with reference to algorithm 2.
**Training for Step t = 1** In the first step (lines 1-4), we simply train \\(\\mathcal{S}_{1},\\mathcal{T}_{1}\\) on full information, i.e. on all features, for a total of \\(E\\) (chosen) epochs. At completion, we set the first sample-independent score \\(\\bar{\\textbf{s}}_{1}\\) (line 4) to be the mean of the sample-dependent scores \\(\\tilde{s}\\) across samples/batches. We found that training solely on a sample-dependent score results in faster optimization.
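A sketch of the averaging on line 4 of algorithm 2; the data loader and its batch format are illustrative.

```python
import torch

def mean_sample_score(score_net, loader):
    """Average sample-dependent scores over all samples to obtain s_bar_1."""
    total, count = None, 0
    with torch.no_grad():
        for X_batch, _ in loader:                        # (batch, C_bar) inputs
            s = 2.0 * torch.sigmoid(score_net(X_batch))  # sample-dependent scores
            total = s.sum(dim=0) if total is None else total + s.sum(dim=0)
            count += X_batch.shape[0]
    return total / count                                 # (C_bar,) global score
```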
**Training for Steps \\(t=2,\\ldots,T\\)** The four phases require choosing the number of epochs for each phase: \\(1\\leq E_{1}<E_{2}<E_{3}<E\\), where \\(E\\) is the total number of epochs. Training proceeds as follows:
**Phase 1)** Initialize \\(\\mathcal{S}_{t}\\) and \\(\\mathcal{T}_{t}\\) from \\(\\mathcal{S}_{t-1}\\) and \\(\\mathcal{T}_{t-1}\\), \\(\\bar{\\mathbf{s}}_{t}\\) to \\(\\bar{\\mathbf{s}}_{t-1}\\), \\(\\mathbf{m}_{t}\\) to \\(\\mathbf{m}_{t-1}\\), and \\(\\alpha=\\frac{1}{2}\\) to balance learning a new score for this step and using information from the learnt score from step \\(t-1\\). Run \\(E_{1}\\) epochs to refine scores and task execution with \\(\\alpha\\) and \\(\\mathbf{m}_{t}\\) fixed.
**Phase 2)** Update the sample-independent score \\(\\bar{\\mathbf{s}}_{t}\\) with the learnt score from phase 1 (line 11). Run \\(E_{2}-E_{1}\\) epochs progressively linearly modifying \\(\\alpha\\) (line 13), so training moves gradually from using sample-dependent scores to sample-independent.
**Phase 3)** Choose the \\(C_{t-1}-C_{t}\\) lowest-scored features to remove (lines 16, 17). Run \\(E_{3}-E_{2}\\) epochs linearly modifying the mask for subsampling (line 19). This alters the \\(C_{t-1}-C_{t}\\) elements of \\(\\mathbf{m}_{t}\\) corresponding to the lowest-scored features gradually to \\(0\\). Thus \\(||\\mathbf{m}_{t}||_{0}=C_{t-1}\\) goes to \\(||\\mathbf{m}_{t}||_{0}=C_{t}\\). Separating this phase from phase 2 increases the stability of the optimization, as modifying the mask and score simultaneously results in large gradients.
**Phase 4)** Train \\(\\mathcal{T}_{t}\\) for final refinement for \\(E-E_{3}\\) epochs with the score weights fixed and features chosen. At completion return \\(\\mathcal{T}_{t}\\), \\(\\mathbf{m}_{t},\\bar{\\mathbf{s}}_{t}\\).
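The interpolation schedules of phases 2 and 3 can be sketched as below. We follow equation 4's convention that \\(\\alpha\\) weights the sample-dependent term (so \\(\\alpha\\rightarrow 0\\) makes the score sample-independent); the epoch counts are the \\(E_{1},E_{2},E_{3},E\\) of appendix A, and the helper is illustrative rather than the released code.

```python
import numpy as np

def phase_schedules(E1, E2, E3, E):
    """Per-epoch schedules for one outer-loop step t >= 2.

    alpha: equation 4 mix; fixed at 1/2 in phase 1, annealed to 0 in
           phase 2, then held (score becomes sample-independent).
    gamma: weight on the mask entries selected for removal; annealed
           from 1 to 0 in phase 3, then held (mask becomes binary).
    """
    alpha = np.concatenate([np.full(E1, 0.5),
                            np.linspace(0.5, 0.0, E2 - E1),
                            np.zeros(E - E2)])
    gamma = np.concatenate([np.ones(E2),
                            np.linspace(1.0, 0.0, E3 - E2),
                            np.zeros(E - E3)])
    return alpha, gamma  # e.g. phase_schedules(25, 35, 45, 60)
```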
**Implementation Details and Hyperparameters** TADRED's hyperparameters are fixed across experiments and different application areas. They are detailed in appendix A.
## 4 Experiments and Results
This section demonstrates the benefits of TADRED in multiple scenarios, with example applications in qMRI and hyperspectral imaging. First, in table 1, we consider the standard experimental design task of model parameter estimation and outperform classical Fisher-matrix approaches. Within the new paradigm, we also show improvements over recent supervised feature selection approaches. We then show TADRED's efficacy in a 'model-free' experimental design scenario: reconstruction of a densely sampled data set from a sparse subset, where Fisher-matrix or recent Bayesian experimental design cannot operate and TADRED outperforms the best published results in an MRI challenge in table 2. In figure 2 we consider a reconstruction task followed by a traditional model-parameter estimation task to estimate multiple clinically-relevant downstream quantities; TADRED outperforms recent supervised feature selection techniques in this task, which has immediate deployment potential. We then show the generalizability of TADRED by performing similar sets of experiments on hyperspectral images, outperforming both supervised feature selection baselines for earth remote sensing in table 3 and recent work in tissue oxygenation estimation in table 4. Tables 5, 6 show an ablation study and that TADRED is mostly robust to randomness in deep learning training.
Appendix C provides additional analysis. Appendix E provides details on experimental design in qMRI and hyperspectral imaging and how to implement our paradigm in real-world scenarios. Appendix F summarizes and visualizes the resultant densely-sampled data \\(X_{\\bar{D}}\\). Following standard practice in MR parameter estimation Alexander et al. (2019); Cercignani et al. (2018), and hyperspectral image filter design Waterhouse & Stoyanov (2022), data samples are individual pixels/voxels.
**Baselines and Comparisons** We compare TADRED with standard model-based approaches such as the classical Fisher-matrix. Within the subsampling-task paradigm we use i) recent application-specific published results optimized by the respective authors; ii) state-of-the-art supervised feature selection approaches FIRDL, SSEFS (see section 2) and random selection followed by deep learning training (denoted by 'random') to mimic random baselines used in experimental design papers. Each feature selection approach conducts an extensive hyperparameter search; for fairness, the same number of evaluations is used for each feature subset size \\(C\\). All details are in appendix A. As this requires multiple training runs (SSEFS in table 1 requires >400 runs), we examine the effect of the random seed on performance in table 6. We compare the computational costs of different approaches in appendix D.
**TADRED Outperforms Classical Experimental Design and Baselines in Model Parameter Estimation** A standard task in experimental design is selecting the design \\(D\\) to maximize the precision of model parameters. We evaluate strategies for this using the VERDICT-MRI model, which aids early detection and classification of prostate cancer Panagiotaki et al. (2015a). We sample parameters \\(\\mathbf{\\theta}_{i}\\) for voxel \\(i=1,\\ldots,n\\) from a biologically plausible range, add synthetic noise representative of clinical qMRI, and the task is to estimate \\(Y=\\{\\mathbf{\\theta}_{1},\\ldots,\\mathbf{\\theta}_{n}\\}\\) with performance metric MSE. The first baseline Panagiotaki et al. (2015b) uses classical Fisher-matrix experimental design (see section 2) to compute the design \\(D\\) with \\(C=20\\). This design produces a root-mean-square error of \\(15.0\\times 10^{-2}\\) in this experiment; the TADRED design with \\(C=20\\) has a corresponding error of \\(2.04\\times 10^{-2}\\). The supervised feature selection approaches in the new paradigm use a densely-sampled design \\(\\bar{D}\\), where \\(\\bar{C}=220\\) from Panagiotaki et al. (2015a), and use deep learning to estimate \\(\\mathbf{\\theta}_{i}\\). Appendix F.1 documents all designs, models, and data. Table 1 shows TADRED outperforms the feature selection baselines where \\(C=\\frac{\\bar{C}}{2},\\frac{\\bar{C}}{4},\\frac{\\bar{C}}{8},\\frac{\\bar{C}}{16}\\). Thus it can better estimate parameters shown to reduce unnecessary biopsies Singh et al. (2022) in shorter scan times, spurring wider deployment in clinical settings. Similar results on the well-known NODDI model are in appendix B.
**Best Performance on qMRI Challenge Data** The Multi-Diffusion Challenge Pizzolato et al. (2020) aimed to identify an informative subset of data, from which to reconstruct the original full dataset \\(X_{\\bar{D}}\\) (i.e. \\(Y=X_{\\bar{D}}\\)) which had \\(\\bar{C}=1344\\) measurements. This task provides a generic challenge that tests the ability of an experimental design or supervised feature selection algorithm to identify a
\\begin{table}
\\begin{tabular}{c c c c c}
 & \\(C=110\\) & 55 & 28 & 14 \\\\
\\hline
Random & 1.54 & 2.24 & 3.25 & 6.10 \\\\
SSEFS & 1.06 & 1.28 & 1.89 & 4.58 \\\\
FIRDL & 2.22 & 2.14 & 3.09 & 4.05 \\\\
\\hline
TADRED & **1.03** & **1.18** & **1.80** & **2.64** \\\\
\\end{tabular}
\\end{table}
Table 1: Performance comparison of feature selection approaches for VERDICT-MRI designs: MSE \\(\\times 10^{2}\\) between estimated model parameters and ground truth for various \\(C\\) and \\(\\bar{C}=220\\).
\\begin{table}
\\begin{tabular}{c c c c c}
 & \\(C=500\\) & 250 & 100 & 50 \\\\
\\hline
PROSUB & 0.49 & 0.61 & 0.89 & 1.35 \\\\
\\hline
TADRED & **0.22** & **0.43** & **0.88** & **1.34** \\\\
 & \\(C=40\\) & 30 & 20 & 10 \\\\
\\hline
PROSUB & 1.53 & 1.87 & 2.50 & 3.48 \\\\
\\hline
TADRED & **1.52** & **1.76** & **2.12** & **2.88** \\\\
\\end{tabular}
\\end{table}
Table 2: Performance comparison on MUDI: MSE between \\(\\bar{C}=1344\\) reconstructed MRI channels/measurements and \\(\\bar{C}\\) ground-truth measurements for various \\(C\\). PROSUB results from Blumberg et al. (2022) table 1.
Figure 2: Downstream MRI metrics (see appendix F.3) estimated from the full set of \\(\\bar{C}=288\\) channels/measurements on HCP data, and from the \\(\\bar{C}\\) measurements reconstructed from \\(C=18\\) sampled measurements. Left: MSE for various metrics; Right: Qualitative comparison, where arrows highlight closer agreement of TADRED's design with the gold standard than that of the best performing baseline.
subset with maximal information content. As discussed in section 2, neither classical Fisher-matrix nor Bayesian experimental design approaches can perform this task. Data are brain scans of five human subjects, acquired with a state-of-the-art technique that acquires multiple MRI modalities simultaneously in a high-dimensional space where \\(\\mathbf{d}^{i}\\in\\mathbb{R}^{6}\\). Thus experimental design is important, as sampling within a time budget realistic in clinical settings is difficult Slator et al. (2021). The first experiment follows Blumberg et al. (2022), which has the best published performance on the data; table 2 shows TADRED outperforms this approach. We also show TADRED outperforms the supervised feature selection baselines in appendix B. All details are in appendix F.2.
**Surpassing the Baselines in Estimation of Multiple Downstream Metrics** DTI, DKI, and MSDKI Basser et al. (1994); Jensen & Helpern (2010); Henriques (2018) are widely-used qMRI methods. They quantify tissue microstructure and show promise for extracting imaging biomarkers for many medical applications, such as mild brain trauma, epilepsy, stroke, and Alzheimer's disease Jensen & Helpern (2010); Ranzenberger & Snyder (2022); Tae et al. (2018). Reducing acquisition requirements (picking a small \\(C\\)) whilst obtaining more accurate quantification will enable their usage in a wider range of clinical application areas. We use publicly available, rich, high-resolution HCP data with \\(\\bar{C}=288\\) measurements from six human subjects, corresponding to \\(\\approx 30\\) minute scan times in the clinic, too long for general deployment. The task is to subsample to sizes \\(C=\\frac{\\bar{C}}{8},\\frac{\\bar{C}}{16}\\) and then reconstruct the data, after which the models are fitted using standard techniques. Further details on the models, data, and model fitting techniques are in appendix F.3. Quantitative and qualitative results are in figure 2 and appendix B and show TADRED outperforms the baselines on 17/18 comparisons on clinically useful downstream metrics. Furthermore, the downstream metrics produced by TADRED are visually closer to the gold standard than those from the best baseline, potentially enhancing the diagnosis of aberrations in tissue microstructure.
**Outperforming Baselines in Reconstructing Remote Sensing Ground Images** The JPL's Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) Thompson et al. (2017) remotely senses elements of the Earth's atmosphere and surface from aeroplanes, and has been used to examine the effect and rehabilitation of forests affected by large wildfires. Purdue University Agronomy Department obtained AVIRIS data to support soils research and we use this publicly available 'Indian Pine' data Baumgardner et al. (2015), obtained from two flights, which acquired ground images at \\(\\bar{C}=220\\) different wavelengths. Details are in appendix F.4. This experiment follows the experiment in table 2 and examines a sampling-reconstruction task where we investigate whether we can obtain the same quality data with fewer wavelengths, which would in practice require fewer sensors. Table 3 shows TADRED outperforms the supervised feature selection baselines with subsample sizes \\(C=\\frac{\\bar{C}}{2},\\frac{\\bar{C}}{4},\\frac{\\bar{C}}{8},\\frac{\\bar{C}}{16}\\). These improvements demonstrate the potential for using fewer filters in AVIRIS. In the development of next-generation airborne hyperspectral devices, TADRED may be used to choose the filters. Further results are in appendix B; promising additional applications are outlined in appendix E.
**Improving the Estimation of Oxygen Saturation** This experiment follows Waterhouse & Stoyanov (2022). Tissue oxygen saturation levels provide information regarding chemical and heat burns, along with the likelihood of healing. However, techniques such as spectrophotometers and pulse oximetry do not provide the spatial resolution to observe differences in blood saturation in neighboring tissue. Hyperspectral imaging is a non-invasive and real-time alternative to improve oxygenation estimation, yet application-specific spectral band selection is required to reduce the high cost of imaging sensors, allowing widespread clinical adoption. To address this, Waterhouse & Stoyanov (2022) adapted the model in Can & Ulgen (2019) for simulations and the objective is to estimate the pixel-wise abundance of oxyhemoglobin \\(HbO_{2}\\) and deoxyhemoglobin \\(Hb\\); and oxygen saturation \\(SO_{2}\\). Design
\\begin{table}
\\begin{tabular}{l c c c c}
 & \\(C=6\\) & 5 & 4 & 3 \\\\
\\hline
Baseline & 4.54 & 4.91 & 5.33 & 6.17 \\\\
\\hline
TADRED & **2.80** & **2.89** & **3.23** & **4.36** \\\\
\\hline
 & \\(C=6\\) & 5 & 4 & 3 \\\\
\\hline
Baseline & 4.45 & 4.43 & 5.10 & 6.36 \\\\
\\hline
TADRED & **2.76** & **2.94** & **3.46** & **5.64** \\\\
\\end{tabular}
\\end{table}
Table 4: Performance comparison RMSE \\(\\times 10^{2}\\), estimating abundance of \\(HbO_{2},Hb\\) (top), \\(SO_{2}\\) (bottom). Experimental settings and baseline from Waterhouse & Stoyanov (2022) figure 5.
\\begin{table}
\\begin{tabular}{l c c c c}
 & \\(C=110\\) & 55 & 28 & 14 \\\\
\\hline
Random & 1.81 & 2.60 & 3.99 & 8.27 \\\\
SSEFS & 2.03 & 4.49 & 5.77 & 10.8 \\\\
FIRDL & 8.10 & 9.87 & 10.6 & 10.3 \\\\
\\hline
TADRED & **0.87** & **1.82** & **2.84** & **5.80** \\\\
\\end{tabular}
\\end{table}
Table 3: Performance comparison of feature selection approaches for remote sensing AVIRIS hyperspectral data, MSE between \\(\\bar{C}=220\\) reconstructed and \\(\\bar{C}\\) ground-truth measurements.
elements \\(\\textbf{d}^{i}\\in\\bar{D}\\) are chosen from \\(4\\) filters of different widths applied to \\(87\\) (center) wavelengths, producing \\(\\bar{C}=348\\) measurements. Table 4 shows that TADRED directly outperforms all approaches and results published and optimized in Waterhouse & Stoyanov (2022) for estimating the abundance of \\(HbO_{2},Hb,SO_{2}\\), for feature sizes \\(C=6,5,4,3\\). This suggests that using TADRED during the development of clinically-viable hyperspectral devices may be beneficial to reduce costs.
**Component Analysis and the Effect of Randomness** We use the experimental settings in table 1. Table 5 examines the impact of removing TADRED's components on performance. First it considers TADRED without iteratively removing features in the optimization procedure, fixing \\(t=2\\) and \\(C_{1},C_{2}=\\bar{C},C\\), showing that iterative subsampling performs better than subsampling all features in one iteration. As the feature scoring is a key element of TADRED, we also show that removing the scoring network \\(\\mathcal{S}\\), whilst still learning a score, results in extremely poor performance, as training is destabilized when progressively setting the score from sample-dependent to sample-independent (recall equation 4). Table 6 shows how changing the random seed, which affects network initialization and data shuffling, impacts performance; TADRED performs favorably compared to alternative approaches and is mostly robust to the randomness inherent in deep learning.
## 5 Discussion
This paper proposes TADRED, a feature selection algorithm that enables a new subsampling paradigm for experimental design particularly in multi-channel imaging applications. We demonstrate substantial performance benefits over standard Fisher-matrix approaches at the heart of widely used quantitative MRI techniques, as well as strong potential in multiple hyperspectral-imaging applications. \"Standard\" data sets for testing TADRED do not exist, as its new paradigm is largely unexplored, but in the few available examples (dataset used in table 2 and hyperspectral datasets in tables 3, 4) TADRED strongly outperforms existing algorithms, even on datasets for which those baselines were specifically designed, and without problem-specific hyperparameter tuning.
TADRED combines the dual selection/task network training strategy in state-of-the-art feature selection algorithms (SSEFS and FIRDL) with an RFE framework better suited to identifying complementary subsets among many informative candidate features. Thus, TADRED outperforms SSEFS and FIRDL on the imaging experimental design problems we consider. In fact, random supervised feature selection often outperforms SSEFS when there are no informative/correlated feature subsets to identify, and FIRDL's complex optimization procedure is often not beneficial in this setting, where it underperforms simpler approaches. On the other hand, TADRED is likely to underperform SSEFS and FIRDL on typical applications in supervised feature selection where small sets of discriminative features reside among many uninformative features. One possible limitation is that TADRED's iterative subsampling in the paradigm of RFE and backward selection decreases the upper bound on performance, as the optimal feature sets for sizes \\(C_{t},C_{t-1}\\) may not be nested. Future work will consider alternative strategies. This iterative subsampling also increases computational time compared to random supervised feature selection. However, appendix D shows TADRED's computational time is comparable to SSEFS and FIRDL. Here, we consider all image channels to have equal cost, but in practice some measurements/channels may be more expensive than others; TADRED's formulation adapts naturally to more complex cost functions on the experimental design. Also, here we consider only tasks that treat each image pixel/voxel independently, which is typical in quantitative imaging Alexander et al. (2019); Cercignani et al. (2018), so we use only fully-connected networks (as do the baselines); again, TADRED's formulation adapts naturally to using e.g. a CNN for \\(\\mathcal{T}\\). TADRED has further applications to other imaging problems, e.g. autofocus for specialized equipment Lightley et al. (2022), and potentially beyond imaging to e.g. studies of cell populations Sinkoe & Hahn (2017).
\\begin{table}
\\begin{tabular}{c c c c c}
 & \\(C=110\\) & 55 & 28 & 14 \\\\
\\hline
Random & 0.11 & 0.23 & 0.37 & 0.76 \\\\
SSEFS & 0.01 & 0.02 & 0.05 & 0.18 \\\\
FIRDL & 0.44 & 0.34 & 0.37 & 0.44 \\\\
\\hline
TADRED & 0.01 & 0.02 & 0.01 & 0.12 \\\\
\\end{tabular}
\\end{table}
Table 6: Standard deviation of performance \\(\\times 10^{2}\\) across 10 random seed settings.
\\begin{table}
\\begin{tabular}{c c c c c}
 & \\(C=110\\) & 55 & 28 & 14 \\\\
\\hline
w/o Scoring Network \\(\\mathcal{S}\\) & 7.23 & 10.7 & 11.5 & 11.5 \\\\
w/o iterative subsampling & **1.03** & 1.19 & 1.83 & 2.80 \\\\
\\hline
TADRED & **1.03** & **1.18** & **1.80** & **2.64** \\\\
\\end{tabular}
\\end{table}
Table 5: Ablation study on TADRED’s components.
## Reproducibility Statement
We provide the code: Code Link, which contains the entire source code for our algorithm TADRED. The code also contains the script to create the simulations used in tables 1, 7, and to download and preprocess the data for the results presented in tables 2, 3 and figure 2. Further details on all data and preprocessing are in appendix F. We also provide detailed information on the implementation of TADRED and the baselines in appendix A.
## Acknowledgements
HPC: Tristan Clark, James O'Connor, Edward Martin; Ahmed Abdelkarim, Daniel Beechey, George Blumberg, Razvan Caramalauau, Amy Chapman, Alice Cheng, Luca Franceschi, G-Research (for a previous grant), Fredrik Helltrom, Jessica Hoang, Chen Jin, Jean Kaddour, Marcus Keil, Johannes Kirschner, Marcela Konanova, Eve Levy and Michael Salvato, Hongxiang Lin, Nina Montana-Brown, Luca Morreale, MUDI Organizers, Raymond Ojinnaka, Gabriel Oon, Brooks Paige, David Perez-Suarez, Stefan Piatek, Reviewers, Oliver Slumbers, Dennis Soemers, Danail Stoyanov, Shinichi Tamura, Dale Waterhouse, Tom Young, An Zhao, Yukhan Zhou. Funding: EPSRC grants M020533 R006032 R014019, Microsoft scholarship, NIHR UCLH Biomedical Research Centre, Research Initiation Project of Zhejiang Lab (No.2021ND0PI02). Data were provided [in part] by the Human Connectome Project, MGH-USC Consortium (Principal Investigators: Bruce R. Rosen, Arthur W. Toga and Van Wedeen; U01MH093765) funded by the NIH Blueprint Initiative for Neuroscience Research grant; the National Institutes of Health grant P41EB015896; and the Instrumentation Grants S10RR023043, 1S10RR023401, 1S10RR019307.
## References
* Abid et al. (2019) Abubakar Abid, Muhammed Fatih Balin, and James Zou. Concrete autoencoders: Differentiable feature selection and reconstruction. _In: International Conference on Machine Learning (ICML)_, 2019.
* Alexander (2008) Daniel C. Alexander. A general framework for experiment design in diffusion MRI and its application in measuring direct tissue-microstructure features. _Magnetic resonance in medicine_, 60(2):439-448, 2008.
* Alexander et al. (2019) Daniel C. Alexander, Tim B. Dyrby, Markus Nilsson, and Hui Zhang. Imaging brain microstructure with diffusion MRI: practicality and applications. _NMR in Biomedicine_, 32(4):e3841, 2019.
* Alfaro et al. (2018) Fidel Alfaro-Almagro, Mark Jenkinson, Neal K. Bangerter, Jesper L. R. Andersson, Ludovica Griffanti, Gwenaelle Douaud, Stamatios N. Sotiropoulos, Saad Jbabdi, Moises Hernandez-Fernandez, Emmanuel Vallee, Diego Vidaurre, Matthew Webster, Paul McCarthy, Christopher Rorden, Alessandro Daducci, Daniel C. Alexander, Hui Zhang, Iulius Dragonu, Paul M. Matthews, Karla L. Miller, and Stephen M. Smith. Image processing and quality control for the first 10,000 brain imaging datasets from UK biobank. _NeuroImage_, 166:400-424, 2018.
* Annadani et al. (2023) Yashas Annadani, Panagiotis Tigas, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, and Stefan Bauer. Differentiable multi-target causal bayesian experimental design. _In: International Conference on Machine Learning (ICML)_, 2023.
* Antony (2003) Jiju Antony. _Design of Experiments for Engineers and Scientists_. Oxford: Butterworth-Heinemann, 2003.
* Arad and Ben-Shahar (2017) Boaz Arad and Ohad Ben-Shahar. Filter selection for hyperspectral estimation. _In: International Conference on Computer Vision (ICCV)_, 2017.
* Arbour et al. (2022) David Arbour, Drew Dimmery, Tung Mai, and Anup Rao. Online balanced experimental design. _In: International Conference of Machine Learning (ICML)_, 2022.
* Basser and Pierpaoli (1996) Peter J. Basser and Carlo Pierpaoli. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. _Journal of Magnetic Resonance_, 111:209-219, 1996.
* Basser et al. (1994) Peter J. Basser, James Mattiello, and Denis LeBihan. MR diffusion tensor spectroscopy and imaging. _Biophysical journal_, 66(1):259-267, 1994.
* Baumgardner et al. (2015) Marion F. Baumgardner, Larry L. Biehl, and David A. Landgrebe. 220 band AVIRIS hyperspectral image data set: June 12, 1992 indian pine test site 3. _Purdue University Research Repository doi:10.4231/R7RX99IC_, 2015.
* Baumgardner et al. (2022) Marion F. Baumgardner, Larry L. Biehl, and David A. Landgrebe. Aviris hyperspectral image data set. [https://purr.purdue.edu/publications/1947/1](https://purr.purdue.edu/publications/1947/1), 2022.
* Blau et al. (2022) Tom Blau, Edwin V. Bonilla, Iadine Chades, and Amir Dezfouli. Optimizing sequential experimental design with deep reinforcement learning. _In: International Conference on Machine Learning (ICML)_, 2022.
* Blumberg et al. (2022) Stefano B. Blumberg et al. Progressive subsampling for oversampled data - application to quantitative MRI. 2022.
* Breiman (2001) Leo Breiman. Random forests. _Machine Learning_, 45:5-32, 2001.
* Camilleri et al. (2021) Romain Camilleri, Kevin Jamieson, and Julian Katz-Samuels. High-dimensional experimental design and kernel bandit. 2021.
* Can and Ulgen (2019) Osman Melih Can and Yekta Ulgen. Modeling diffuse reflectance spectra of donated blood with their hematological parameters. _Clinical and Preclinical Optical Diagnostics II_, 2019.
* Castiglia et al. (2023) Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, and Stacy Patterson. LESS-VFL: Communication-efficient feature selection for vertical federated learning. _In: International Conference on Machine Learning (ICML)_, 2023.
* Cercignani and Alexander (2006) Mara Cercignani and Daniel C. Alexander. Optimal acquisition schemes for in vivo quantitative magnetization transfer MRI. _Magnetic Resonance in Medicine_, 56(4):803-810, 2006.
* Cercignani et al. (2018) Mara Cercignani, Nicholas G. Dowell, and Paul S. Tofts. _Quantitative MRI of the Brain: Principles of Physical Measurement_. CRC Press, second edition, 2018.
* Chen et al. (2017) Jianbo Chen, Mitchell Stern, Martin J. Wainwright, and Michael I. Jordan. Kernel feature selection via conditional covariance minimization. _In: Neural Information Processing Systems (NIPS)_, 2017.
* Code Link. Code for this paper by Stefano B. Blumberg. [https://github.com/sbb-gh/experimental-design-multichannel](https://github.com/sbb-gh/experimental-design-multichannel).
* Cohen et al. (2023) David Cohen, Tal Shnitzer, Yuval Kluger, and Ronen Talmon. Few-sample feature selection via feature manifold learning. _In: International Conference on Machine Learning (ICML)_, 2023.
* Connolly et al. (2023) Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, and Christopher Frye. Task-specific experimental design for treatment effect estimation. _In: International Conference on Machine Learning (ICML)_, 2023.
* Covert et al. (2023) Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, and Su-In Lee. Learning to maximize mutual information for dynamic feature selection. _In: International Conference on Machine Learning (ICML)_, 2023.
* Doudchenko et al. (2021) Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sebastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, and Guido Imbens. Synthetic design: An optimization approach to experimental design with synthetic controls. _In: Neural Information Processing Systems (NIPS)_, 2021.
* Van Essen et al. (2013) David C. Van Essen, Stephen M. Smith, Deanna M. Barch, Timothy E. J. Behrens, Essa Yacoub, Kamil Ugurbil, and WU-Minn HCP Consortium. The WU-Minn Human Connectome Project: an overview. _Neuroimage_, 80:62-79, 2013.
* Fabian et al. (2022) Zalan Fabian, Berk Tinaz, and Mahdi Soltanolkotabi. HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction. _In: Neural Information Processing Systems (NIPS)_, 2022.
* Ferizi et al. (2017) Uran Ferizi, Benoit Scherrer, Torben Schneider, Mohammad Alipoor, Odin Euffacio, Rutger H. J. Fick, Rachid Deriche, Markus Nilsson, Ana K. Loya-Olivas, Mariano Rivera, Dirk H. J. Poot, Alonso Ramirez-Manzanares, Jose L. Marroquin, Ariel Rokem, Christian Potter, Robert F. Dougherty, Ken Sakaie, Claudia Wheeler-Kingshott, Simon K. Warfield, Thomas Witzel, Lawrence L. Wald, Jose G Raya, and Daniel C. Alexander. Diffusion MRI microstructure models with in vivo human brain connectome data: results from a multi-group comparison. _NMR in biomedicine_, 30(9), 2017.
* Fick et al. (2019) Rutger Fick, Demian Wassermann, and Rachid Deriche. The Dmipy toolbox: Diffusion MRI multi-compartment modeling and microstructure recovery made easy. _Frontiers in Neuroinformatics_, 13(64), 2019.
* Fontaine et al. (2021) Xavier Fontaine, Pierre Perrault, Michal Valko, and Vianney Perchet. Online A-Optimal design and active linear regression. _In: International Conference on Machine Learning (ICML)_, 2021.
* Foster et al. (2021) Adam Foster, Desi R. Ivanova, Ilyas Malik, and Tom Rainforth. Deep adaptive design: Amortizing sequential bayesian experimental design. _In: Neural Information Processing Systems (NIPS)_, 2021.
* Garyfallidis et al. (2014) Eleftherios Garyfallidis, Matthew Brett, Bagrat Amirbekian, Ariel Rokem, Stefan van der Walt, Maxime Descoteaux, Ian Nimmo-Smith, and Dipy Contributors. DIPY, a library for the analysis of diffusion MRI data. _Frontiers in Neuroinformatics_, 8(8), 2014.
* Glynn et al. (2020) Peter W. Glynn, Ramesh Johari, and Mohammad Rasouli. Adaptive experimental design with temporal interference: A maximum likelihood approach. _In: Neural Information Processing Systems (NIPS)_, 2020.
* Grussu et al. (2017) Francesco Grussu, Torben Schneider, Carmen Tur, Richard L. Yates, Mohamed Tachrount, Andrada Ianus, Marios C. Yiannakas, Jia Newcombe, Hui Zhang, Daniel C. Alexander, Gabriele C. DeLuca, and Claudia A. M. Gandini Wheeler-Kingshott. Neurite dispersion: a new marker of multiple sclerosis spinal cord pathology? _Annals of Clinical and Translational Neurology_, 4(9), 2017.
* Grussu et al. (2021) Francesco Grussu, Stefano B. Blumberg, Marco Battiston, Lebina S. Kakkar, Hongxiang Lin, Andrada Ianus, Torben Schneider, Saurabh Singh, Roger Bourne, Shonit Punwani, David Atkinson, Claudia A. M. Gandini Wheeler-Kingshott, Eleftheria Panagiotaki, Thomy Mertzanidou, and Daniel C. Alexander. Feasibility of data-driven, model-free quantitative MRI protocol design: Application to brain and prostate diffusion-relaxation imaging. _Frontiers in Physics_, 9:615, 2021.
* Gudbjartsson & Patz (1995) Hakon Gudbjartsson and Samuel Patz. The Rician distribution of noisy MRI data. _Magnetic Resonance in Medicine_, 34(6):910-914, 1995.
* Guyon et al. (2002) Isabelle Guyon, Jason Weston, Stephen Barnhill, and Vladimir Vapnik. Gene selection for cancer classification using support vector machines. _Journal of Machine Learning Research_, 46(1):389-422, 2002.
* Gyori et al. (2022) Noemi G. Gyori, Marco Palombo, Christopher A. Clark, Hui Zhang, and Daniel C. Alexander. Training data distribution significantly impacts the estimation of tissue microstructure with machine learning. _Magnetic Resonance in Medicine_, 87(2):932-947, 2022.
* Hansen et al. (2022) Derek Hansen, Brian Manzo, and Jeffrey Regier. Normalizing flows for knockoff-free controlled feature selection. _In: Neural Information Processing Systems (NIPS)_, 2022.
* He et al. (2005) Xiaofei He, Deng Cai, and Partha Niyogi. Laplacian score for feature selection. _In: Neural Information Processing Systems (NIPS)_, 2005.
* Henriques (2018) Rafael Neto Henriques. Advanced methods for diffusion MRI data analysis and their application to the healthy ageing brain. _Ph.D Thesis_, 2018.
* Hutter et al. (2018) Jana Hutter, Paddy J. Slator, Daan Christiaens, Rui Pedro Teixeira, Thomas Roberts, Laurence Jackson, Anthony N. Price, Shaihan Malik, and Joseph V. Hajnal. Integrated and efficient diffusion-relaxometry using ZEBRA. _Scientific reports_, 8(1):1-13, 2018.
* Imrie et al. (2022) Fergus Imrie, Alexander Norcliffe, Pietro Lio, and Mihaela van der Schaar. Composite feature selection using deep ensembles. _In: Neural Information Processing Systems (NeurIPS)_, 2022.
* Ivanova et al. (2021) Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, and Tom Rainforth. Implicit deep adaptive design: Policy-based experimental design without likelihoods. _In: Neural Information Processing Systems (NeurIPS)_, 2021.
* Ivanova et al. (2023) Desi R. Ivanova, Joel Jennings, Tom Rainforth, Cheng Zhang, and Adam Foster. Co-bed: Information-theoretic contextual optimization via bayesian experimental design. _In: International Conference on Machine Learning (ICML)_, 2023.
* Jensen & Helpern (2010) Jens H. Jensen and Joseph A. Helpern. MRI quantification of non-Gaussian water diffusion by kurtosis analysis. _NMR in Biomedicine_, 23(7):698-710, 2010.
* Laboratory (JPL) (2023) Jet Propulsion Laboratory (JPL). AVIRIS web page. [https://aviris.jpl.nasa.gov/](https://aviris.jpl.nasa.gov/), 2023.
* Jiang et al. (2020) Shali Jiang, Henry Chai, Javier Gonzalez, and Roman Garnett. BINOCULARS for efficient, nonmyopic sequential experimental design. _In: International Conference on Machine Learning (ICML)_, 2020.
* Johnston et al. (2019) Edward W. Johnston, Elisenda Bonet-Carne, Uran Ferizi, Ben Yvernault, Hayley Pye, Dominic Patel, Joey Clemente, Wivijin Piga, Susan Heavey, Harbir S. Sidhu, Francesco Giganti, James O'Callaghan, Mristha Brizmohun Appayya, Alistair Grey, Alexandra Saborowska, Sebastien Ourselin, David Hawkes, Caroline M. Moore, Mark Emberton, Hashim U. Ahmed, Hayley Whitaker, Manuel Rodriguez-Justo, Alexander Freeman, David Atkinson, Daniel Alexander, Elftheria Panagiotaki, and Shonit Punwani. VERDICT MRI for prostate cancer: Intracellular volume fraction versus apparent diffusion coefficient. _Radiology_, 291(2):391-397, 2019.
* Kaddour et al. (2020) Jean Kaddour, Steindor Saemundsson, and Marc Peter Deisenroth. Probabilistic active meta-learning. _In: Neural Information Processing Systems (NeurIPS)_, 2020.
* Kamiya et al. (2020) Kouhei Kamiya, Masaaki Hori, and Shigeki Aoki. NODDI in clinical research. _Journal of Neuroscience Methods_, 346:108908, 2020. ISSN 0165-0270.
* Karim et al. (2022) Shahid Karim, Akeel Qadir, Umar Farooq, Muhammad Shakir, and Asif Laghari. Hyperspectral imaging: A review and trends towards medical imaging. _Current Medical Imaging_, 10, 2022.
* Karras et al. (2018) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. _In: International Conference on Learning Representations (ICLR)_, 2018.
* Khan et al. (2018) Muhammad Khan, Hamid Khan, Adeel Yousaf, Khurram Khurshid, and Asad Abbas. Modern trends in hyperspectral image analysis: A review. _IEEE Access_, 6:14118-14129, 2018.
* Kleinegesse & Gutmann (2020) Steven Kleinegesse and Michael U. Gutmann. Bayesian experimental design for implicit models by mutual information neural estimation. _In: International Conference on Machine Learning (ICML)_, 2020.
* Knoll et al. (2020) Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, et al. Advancing machine learning for mr image reconstruction with an open competition: Overview of the 2019 fastMRI challenge. _Magnetic resonance in medicine_, 84(6):3054-3070, 2020.
* Kohavi & John (1997) Ron Kohavi and George H. John. Wrappers for feature subset selection. _Artificial Intelligence_, 97(1):273-324, 1997.
* Kumagai et al. (2022) Atsutoshi Kumagai, Tomoharu Iwata, and Yasutoshi Ida and. Few-shot learning for feature selection with Hilbert-Schmidt independence criterion. _In: Neural Information Processing Systems (NeurIPS)_, 2022.
* Kuncheva et al. (2020) Ludmila I. Kuncheva, Clare E. Matthews, Alvar Arnaiz-Gonzalez, and Juan Jose Rodriguez Diez. Feature selection from high-dimensional data with very low sample size: A cautionary tale. _arXiv preprint arXiv:2008.12025_, 2020.
* Lee (2022) Changhee Lee. Code for self-supervision enhanced feature selection with correlated gates. _[https://github.com/ch18856/SEFS](https://github.com/ch18856/SEFS), git commit 21fe6d97cd98612e3d0eb5ce20d42d0e2e94eb5a_, 2022.
* Lee et al. (2022) Changhee Lee, Fergus Imrie, and Mihaela van der Schaar. Self-supervision enhanced feature selection with correlated gates. _International Conference on Learning Representations (ICLR)_, 2022.
* Li et al. (2016) Yifeng Li, Chih-Yu Chen, and Wyeth W. Wasserman. Deep feature selection: Theory and application to identify enhancers and promoters. _Journal of Computational Biology_, 23(5):322-336, 2016.
* Lightley et al. (2022) Jonathan Lightley, Frederik Gorlitz, Sunil Kumar, Ranjan Kalita, Arinbjorn Kolbeinsson, Edwin Garcia, Yuriy Alexandrov, Vicky Bousgouni, Riccardo Wysoczanski, Peter Barnes, Louise Donnelly, Chris Bakal, Christopher Dunsby, Mark A. A. Neil, Seth Flaxman, and Paul M. W. French. Robust deep learning optical autofocus system applied to automated multiwell plate single molecule localization microscopy. _Journal of Microscopy_, 288(2):130-141, 2022.
* Lindenbaum et al. (2021) Ofir Lindenbaum, Uri Shaham, Erez Peterfreund, Jonathan Svirsky, Nicolas Casey, and Yuval Kluger. Differentiable unsupervised feature selection based on a gated laplacian. _In: Neural Information Processing Systems (NeurIPS)_, 2021.
* Lu & Fei (2014) Guolan Lu and Baowei Fei. Medical hyperspectral imaging: a review. _Journal of Biomedical Optics_, 19(1), 2014.
* Lyle et al. (2023) Clare Lyle, Arash Mehrjou, Pascal Notin, Andrew Jesson, Stefan Bauer, Yarin Gal, and Patrick Schwab. DiscoBAX: Discovery of optimal intervention sets in genomic experiment design. _In: International Conference on Machine Learning (ICML)_, 2023.
* Malkomes et al. (2021) Gustavo Malkomes, Bolong Cheng, Eric H Lee, and Mike Mccourt. Beyond the pareto efficient frontier: Constraint active search for multiobjective experimental design. _In: International Conference on Machine Learning (ICML)_, 2021.
* Manolakis et al. (2016) Dimitris G. Manolakis, Ronald B. Lockwood, and Thomas W. Cooley. _Hyperspectral Imaging Remote Sensing: Physics, Sensors, and Algorithms_. Cambridge University Press, 2016.
* Mehrjou et al. (2022) Arash Mehrjou, Ashkan Soleymani, Andrew Jesson, Pascal Notin, Yarin Gal, Stefan Bauer, and Patrick Schwab. Genedisco: A benchmark for experimental design in drug discovery. _In: International Conference on Learning Representations (ICLR)_, 2022.
* Mehta et al. (2022) Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, and Willie Neiswanger. An experimental design perspective on model-based reinforcement learning. _In: International Conference on Learning Representations (ICLR)_, 2022.
* Montgomery (2001) Douglas C. Montgomery. _Design and analysis of experiments_. John Wiley & Sons, fifth edition, 2001.
* Muckley et al. (2021) Matthew J. Muckley, Benedikt Riemenschneider, Alaleh Radmanesh, Sooyoung Kim, Gukyeong Jeong, Jaeho Ko, et al. Results of the 2020 fastMRI challenge for machine learning MR image reconstruction. _IEEE transactions on medical imaging_, 40(9):2306-2317, 2021.
* Organizers (2022) MUDI Organizers. MUli-dimensional Dlffusion (MUDI) MRI challenge 2019 data. [https://www.developingbrain.co.uk/data/](https://www.developingbrain.co.uk/data/), 2022.
* Mutny & Krause (2022) Mojmir Mutny and Andreas Krause. Experimental design for linear functionals in reproducing kernel Hilbert spaces. _In: Neural Information Processing Systems (NeurIPS)_, 2022.
* Nandy et al. (2021) Preetam Nandy, Divya Venugopalan, Chun Lo, and Shaunak Chatterjee. A/B testing for recommender systems in a two-sided marketplace. _In: Neural Information Processing Systems (NeurIPS)_, 2021.
* Nandy et al. (2021)* Panagiotaki et al. (2015a) Eleftheria Panagiotaki, Rachel W. Chan, Nikolaos Dikaios, Hashim U. Ahmed, James O'Callaghan, Alex Freeman, David Atkinson, Shonit Punwani, David J. Hawkes, and Daniel C. Alexander. Microstructural characterization of normal and malignant human prostate tissue with vascular, extracellular, and restricted diffusion for cytometry in tumours magnetic resonance imaging. _Investigative Radiology_, 50(4):218-227, 2015a.
* Panagiotaki et al. (2015b) Eleftheria Panagiotaki, Andrada Ianus, Edward Johnston, Rachel W. Chan, Nicola Stevens, David Atkinson, Shonit Punwani, David J. Hawkes, and Daniel C. Alexander. Optimised VERDICT MRI protocol for prostate cancer characterisation. _In: International Society for Magnetic Resonance in Medicine (ISMRM)_, 2015b.
* Panagiotaki et al. (2014) Eleftheria Panagiotaki, Simon Walker-Samuel, Bernard Siow, Peter S. Johnson, Vineeth Rajkumar, Barbara R. Pedley, Mark F. Lythgoe, and Daniel C. Alexander. Noninvasive quantification of solid tumor microstructure using VERDICT MRI. _Cancer research_, 74(7):1902-1912, 2014.
* Peng et al. (2005) Hanchuan Peng, Fuhui Long, and Chris Ding. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 27(8):1226-1238, 2005.
* Pizzolato et al. (2020) Marco Pizzolato, Marco Palombo, Elisenda Bonet-Carne, Francesco Grussu, Andrada Ianus, Fabian Bogusz, Tomasz Pieciak, Lipeng Ning, Stefano B. Blumberg, Thomy Mertzanidou, Daniel C. Alexander, Maryam Afzali, Santiago Aja-Fernandez, Derek K. Jones, Carl-Fredrik Westin, Yogesh Rathi, Steven H. Baete, Lucilio Cordero-Grande, Thilo Ladner, Paddy J. Slator, Daan Christiaens, Jean-Philippe Thiran, Anthony N. Price, Farshid Sepehrband, Fan Zhang, and Jana Hutter. Acquiring and predicting MUli-dimensional DIFfusion (MUDI) data: an open challenge. _In: International Society for Magnetic Resonance in Medicine (ISMRM)_, 2020.
* Pukelsheim (2006) Friedrich Pukelsheim. _Optimal Design of Experiments_. Society for Industrial and Applied Mathematics, 2006.
* Quinzan et al. (2023) Francesco Quinzan, Ashkan Soleymani, Patrick Jaillet, Cristian R. Rojas, and Stefan Bauer. DRCFS: Doubly robust causal feature selection. _In: International Conference on Machine Learning (ICML)_, 2023.
* Ranzenberger and Snyder (2022) Logan R. Ranzenberger and Travis Snyder. _Diffusion Tensor Imaging_. StatPearls Publishing, fifth edition, 2022.
* Scikit-Learn (2023) Scikit-Learn. Scikit-learn recursive feature elimination (RFE). [https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html), 2023.
* Simchi-Levi and Wang (2023) David Simchi-Levi and Chonghuan Wang. Pricing experimental design: Causal effect, expected revenue and tail risk. _In: International Conference on Machine Learning (ICML)_, 2023.
* Simmonds and Green (1996) John J. Simmonds and Robert O. Green. Current status, performance and plans for the NASA airborne visible and infrared imaging spectrometer (AVIRIS). 1996. URL [https://www.osti.gov/biblio/379497](https://www.osti.gov/biblio/379497).
* Singh et al. (2022) Saurabh Singh, Harriet Rogers, Baris Kanber, Joey Clemente, Hayley Pye, Edward W. Johnston, Tom Parry, Alistair Grey, Eoin Dinneen, Greg Shaw, Susan Heavey, Urszula Stopka-Farooqui, Aiman Haider, Alex Freeman, Francesco Giganti, David Atkinson, Caroline M. Moore, Hayley C. Whitaker, Daniel C. Alexander, Eleftheria Panagiotaki, and Shonit Punwani. Avoiding unnecessary biopsy after multiparametric prostate MRI with VERDICT analysis: The INNOVATE study. _Radiology_, pp. 212536, 2022.
* Sinkoe and Hahn (2017) Andrew Sinkoe and Juergen Hahn. Optimal experimental design for parameter estimation of an IL-6 signaling model. _Processes_, 5(3), 2017.
* Slator et al. (2021) Paddy J. Slator, Marco Palombo, Karla L. Miller, Carl-Fredrik Westin, Frederik Laun, Daeun Kim, Justin P. Haldar, Dan Benjamini, Gregory Lemberskiy, Joao P. de Almeida Martins, and Jana Hutter. Combined diffusion-relaxometry microstructure imaging: Current status and future prospects. _Magnetic Resonance in Medicine_, 86(6):2987-3011, 2021.
* Sinkoe et al. (2015)* Sokar et al. (2022) Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, and Decebal Constantin Mocanu. Where to pay attention in sparse training for feature selection? _In: Neural Information Processing Systems (NeurIPS)_, 2022.
* Song et al. (2007) Le Song, Alexander J. Smola, Arthur Gretton, Justin Bedo, and Karsten M. Borgwardt. Supervised feature selection via dependence estimation. _In: International Conference on Machine learning (ICML)_, 2007.
* Song et al. (2012) Le Song, Alexander J. Smola, Arthur Gretton, Justin Bedo, and Karsten M. Borgwardt. Feature selection via dependence maximization. _Journal of Machine Learning Research_, 13(5):1393-1434, 2012.
* Stuart et al. (2019) Mary B. Stuart, Andrew J. S. McGonigle, and Jon R. Willmott. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. _Sensors_, 19(14):3071, 2019.
* Stuart et al. (2020) Mary B. Stuart, Leigh R. Stanger, Matthew J. Hobbs, Tom D. Pering, Daniel Thio, Andrew J.S. McGonigle, and Jon R. Willmott. Low-cost hyperspectral imaging system: Design and testing for laboratory-based environmental applications. _Sensors_, 20(11):3239, 2020.
* Tae et al. (2018) Woo Suk Tae, Byung Joo Ham, Sung Bom Pyun, Shin Hyuk Kang, and Byung Jo Kim. Current clinical applications of diffusion-tensor imaging in neurological disorders. _Journal of Clinical Neurology_, 14(2):129-140, 2018.
* Teshnizi et al. (2020) Ali Ahmadi Teshnizi, Saber Salehkaleybar, and Negar Kiyavash. Lazyiter: A fast algorithm for counting markov equivalent dags and designing experiments. _In: International Conference on Machine Learning (ICML)_, 2020.
* Thompson et al. (2017) David R. Thompson, Joseph W. Boardman, Michael L. Eastwood, and Robert O. Green. A large airborne survey of earth's visible-infrared spectral dimensionality. _Optics Express_, 25(8):9186-9195, 2017.
* Tibshirani (1996) Robert Tibshirani. Regression shrinkage and selection via the lasso. _Journal of the Royal Statistics Society. Series B (Methodological)_, pp. 267-288, 1996.
* Tigas et al. (2022) Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Scholkopf, Yarin Gal, and Stefan Bauer. Interventions, where and how? Experimental design for causal models at scale. _In: Neural Information Processing Systems (NeurIPS)_, 2022.
* Tigas et al. (2023) Panagiotis Tigas, Yashas Annadani, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, and Stefan Bauer. Differentiable multi-target causal bayesian experimental design. _In: International Conference on Machine Learning (ICML)_, 2023.
* Waterhouse & Stoyanov (2022) Dale J. Waterhouse and Danail Stoyanov. Optimized spectral filter design enables more accurate estimation of oxygen saturation in spectral imaging. _Biomedical Optics Express_, 13(4):2156-2173, 2022.
* Wojtas (2021) Maksymilian A. Wojtas. Code for feature importance ranking for deep learning, git commit 836096edb9822e509cadc67bcd2e7cc5fa2324cc. [https://github.com/maksym33/FeatureImportanceDL](https://github.com/maksym33/FeatureImportanceDL), 2021.
* Wojtas & Chen (2020) Maksymilian A. Wojtas and Ke Chen. Feature importance ranking for deep learning. _In: Neural Information Processing System (NeurIPS)_, 2020.
* Wu et al. (2019) Renjie Wu, Yuqi Li, Xijiong Xie, and Zhijie Lin. Optimized multi-spectral filter arrays for spectral reconstruction. _Sensors_, 19(13), 2019.
* Yamada et al. (2020) Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, and Yuval Kluger. Feature selection using stochastic gates. _In: International Conference on Machine Learning (ICML)_, 2020.
* Yaman et al. (2022) Burhaneddin Yaman, Seyed Amir Hossein Hosseini, and Mehmet Akcakaya. Zero-shot self-supervised learning for MRI reconstruction. _In: International Conference on Learning Representations (ICLR)_, 2022.
* Zaballa and Hui (2023) Vincent D. Zaballa and Elli E. Hui. Stochastic gradient bayesian optimal experimental designs for simulation-based inference. _In: Differentiable Almost Everything Workshop of the International Conference of Machine Learning (ICML)_, 2023.
* Zbontar et al. (2018) Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, et al. fastMRI: An open dataset and benchmarks for accelerated mri. _arXiv preprint arXiv:1811.08839_, 2018.
* Zhang et al. (2012) Hui Zhang, Torben Schneider, Claudia A. Wheeler-Kingshott, and Daniel C Alexander. NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain. _NeuroImage_, 61(4):1000-1016, 2012.
* Zhang et al. (2022) Junzhe Zhang, Jin Tian, and Elias Bareinboim. Partial counterfactual identification from observational and experimental data. _In: International Conference on Machine Learning (ICML)_, 2022.
* Zheng et al. (2020) Sue Zheng, David Hayden, Jason Pacheco, and John W Fisher III. Sequential bayesian experimental design with variable cost structure. _In: Neural Information Processing Systems (NeurIPS)_, 2020.
## Appendix Structure
The appendices are structured as follows:
1. Appendix A provides comprehensive details on all approaches utilized in this paper, including the specific hyperparameters employed.
2. Appendix B are supplementary experimental results.
3. Appendix C offers further experimental analysis of our method, TADRED.
4. Appendix D compares the computational cost of all the approaches used in this paper and outlines the computational resources employed.
5. Appendix E is a comprehensive description of prior work related to our problem.
6. Appendix F details all data for each experiment, along with specifics of each task.
## Appendix A Key Approaches, Hyperparameters, and Settings
This section describes the different supervised feature selection approaches used in this paper and details the choice of parameter settings within each.
### General Experimental Settings
For every experiment comparing TADRED with baselines, we split the data into training, validation/development, and test sets. This is described in detail in section F. Following Lee et al. (2022) (the SSEFS paper), we conducted an extensive hyperparameter search for each approach (using the validation set), for different experimental settings and subsample values \\(C\\). This is described in detail below for every approach. For fairness, we use the same number of evaluations on the validation set for each feature set size \\(C\\), i.e. the same number of trials for model selection. The best model was then applied to the test set and we reported the performance.
Other general hyperparameters are: batch size \\(1500\\), learning rate \\(10^{-4}\\) (\\(10^{-5}\\) for the experiment in figure 2), ADAM optimizer, and default network weight initialization. The default option for early stopping used 20 epochs for patience (i.e. training stops if validation performance does not improve in 20 epochs).
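A minimal sketch of this default early-stopping rule (the helper name is illustrative):

```python
def should_stop(val_losses, patience=20):
    """Stop when validation loss has not improved for `patience` epochs."""
    best_epoch = val_losses.index(min(val_losses))  # first epoch of best loss
    return len(val_losses) - best_epoch - 1 >= patience
```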
### TADRED - Task-Driven Experimental Design for Multi-Channel Imaging
This subsection details the hyperparameters for the method we present in this paper: TADRED for TAsk-DRiven Experimental Design in imaging, as outlined in section 3. Figure 3 provides a graphical representation of TADRED's structure and computational graph.
We conducted a brief search for TADRED-specific hyperparameters and fixed these hyperparameters across all experiments. We set the numbers of epochs in the four-phase inner loop training procedure as \\(E_{1}=25,E_{2}=E_{1}+10,E_{3}=E_{2}+10\\). Following other baselines, we do not fix the total number of training epochs \\(E\\), but keep training beyond \\(e=E_{3}\\) in algorithm 2 phase 4 until early stopping criteria (on the validation set) are met.
For fairness, when comparing TADRED with other supervised feature selection baselines, we chose a simple set of hyperparameters for the feature set sizes \\(\\{C_{t}\\}_{t=1}^{T}\\) and number of outer loop steps \\(T\\). Here we fixed \\(T=5\\) and \\(C_{1},C_{2},C_{3},C_{4},C_{5}=\\bar{C},\\frac{\\bar{C}}{2},\\frac{\\bar{C}}{4},\\frac{\\bar{C}}{8},\\frac{\\bar{C}}{16}\\). When comparing TADRED against two recent application-specific published results optimized by the authors of Blumberg et al. (2022); Waterhouse & Stoyanov (2022), we followed standard practice and performed a brief hyperparameter search on the validation set. We used \\(C_{1},\\ldots,C_{9}=\\{1344,500,250,100,50,40,30,20,10\\},T=9\\) in table 2 and \\(C_{1},\\ldots,C_{19}=\\{348\\}\\) + [250::45::-50] + [45::8::-5] + [8,6,5,4,3,2], \\(T=19\\) (notation is [start::stop::step]) in table 4.
We perform a grid search to find the optimal network architecture hyperparameters for each task. The Scoring Network \\(\\mathcal{S}\\) and Task Network \\(\\mathcal{T}\\) have the same number of hidden layers \\(\\in\\{1,2,3\\}\\) and number of units \\(\\in\\{30,100,300,1000,3000\\}\\), and for each combination we obtain task performance on the feature set sizes \\(C_{1}>C_{2}>\\cdots>C_{T}\\). The best performing network on the validation set is deployed on the test data.
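A sketch of this grid (illustrative; each configuration is trained and the best validation performer is kept):

```python
import itertools

# Depth and width grid shared by the scoring network S and task network T.
ARCH_GRID = list(itertools.product([1, 2, 3],                    # hidden layers
                                   [30, 100, 300, 1000, 3000]))  # units per layer
assert len(ARCH_GRID) == 15  # 15 configurations evaluated per task
```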
### Random Supervised Feature Selection
This baseline is inspired by the random design baselines used in experimental design papers e.g. Foster et al. (2021); Ivanova et al. (2021). For a particular design size, \\(C\\), we repeat the following process: i) randomly select \\(C\\) features/channels; ii) perform grid search on the task network (mapping subsampled data \\(X_{D}\\) to target \\(Y\\)), with number of hidden layers \\(\\in\\{1,2,3\\}\\), number of units \\(\\in\\{30,100,300,1000,3000\\}\\); iii) train until early stopping criteria specified on the validation set are met; iv) evaluate the best trained model on the test set.
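A compact sketch of this baseline is given below (our own illustrative code on synthetic data, using scikit-learn's MLPRegressor as a stand-in for the task network; the grid is truncated for brevity):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, c_bar, c = 2000, 220, 28                        # samples, C_bar, C
X = rng.normal(size=(n, c_bar))                    # stand-in for densely-sampled data
Y = X[:, :5] @ rng.normal(size=(5, 3))             # target driven by a few channels

chosen = rng.choice(c_bar, size=c, replace=False)  # i) randomly select C channels

best = (np.inf, None)
for layers in (1, 2, 3):
    for units in (30, 100):                        # ii) grid search (truncated)
        net = MLPRegressor(hidden_layer_sizes=(units,) * layers,
                           early_stopping=True,    # iii) stop on validation split
                           random_state=0, max_iter=500)
        net.fit(X[:1500, chosen], Y[:1500])
        mse = np.mean((net.predict(X[1500:, chosen]) - Y[1500:]) ** 2)
        if mse < best[0]:
            best = (mse, (layers, units))

print(f"best config {best[1]}, held-out MSE {best[0]:.4f}")  # iv) evaluate
```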
### Self-Supervision Enhanced Feature Selection with Correlated Gates (SSEFS) Lee et al. (2022)
This approach has a lengthy hyperparameter search detailed in Appendix B of Lee et al. (2022), which consists of a three-phase procedure and four neural networks. Note that this requires multiple training steps, e.g. obtaining the results in table 1 requires >400 runs. SSEFS exploits task performance, self-supervision, additional unlabeled data, and correlated feature subsets. It scores the features and then trains a task-based network (analogous to \(\mathcal{T}\) in TADRED) on the subsampled data. We use the official repository Lee (2022) and verified our implementation by replicating results in the paper Lee et al. (2022). The full optimization procedure follows Lee et al. (2022) and is split into i) a self-supervision phase, ii) a supervision phase, and iii) training on the selected features only.

Figure 3: TADRED’s structure. During training TADRED concurrently performs feature scoring, feature subsampling, and task execution, and we progressively set the score to be sample-independent by setting \(\alpha\) to \(1\). We score features with \(\bar{\mathbf{s}}_{t}\in\mathbb{R}^{C}\) and remove features with low scores by setting the corresponding values of the mask to \(0\); in this example, feature 2 is removed.
The self-supervision phase finds the optimal encoder network hyperparameters. We follow Appendix B of Lee et al. (2022) and perform grid search. As for the other approaches in this paper, the encoder network, feature vector estimator network, and gate vector estimator network all have the same number of hidden layers \(\in\{1,2,3\}\) and number of units (including the hidden dimension) \(\in\{30,100,300,1000,3000\}\). Directly following Lee et al. (2022) table S.1, the other hyperparameters are \(\alpha\in\{0.01,0.1,1.0,10,100\}\), \(\pi\in\{0.2,0.4,0.6,0.8\}\). The self-supervisory dataset is the input data \(X_{\bar{D}}\). On the best validation performance (with early stopping), this returns a trained encoder network, cached for the supervision phase.
The supervision phase scores the features. The pretrained encoder is loaded from the previous phase. We then perform grid search, where the predictor network has number of hidden layers \(\in\{1,2,3\}\), number of units \(\in\{30,100,300,1000,3000\}\), and, following Lee et al. (2022) table S.1, \(\beta\in\{0.01,0.1,1.0,10,100\}\). On the best validation performance with early stopping, the process returns a score for all features.
The final phase is repeated for a different number of subset sizes \\(C\\). We extract the \\(C\\) highest scored features from the previous phase and perform grid search on the task network (mapping subsampled data \\(X_{\\bar{D}}\\) to target \\(Y\\)), with number of hidden layers \\(\\in\\{1,2,3\\}\\), number of units \\(\\in\\{30,100,300,1000,3000\\}\\). Training is until early stopping on the validation set. The best trained model is evaluated on the test set.
### Feature Importance Ranking for Deep Learning (FIRDL) Wojtas and Chen (2020)
This approach has a three-stage procedure detailed in Appendix D of Wojtas and Chen (2020) and uses two neural networks. One of the networks scores the masks (analogous to mask \\(\\mathbf{m}\\) in TADRED), and the other trains a task network to perform a task on the subsampled data (analogous to \\(\\mathcal{T}\\) in TADRED). We use the official repository Wojtas (2021) and verified our implementation by replicating results in the paper Wojtas and Chen (2020). The following process is repeated for different feature subset sizes \\(C\\).
We perform a grid search to find the optimal hyperparameters. The operator network (analogous to the task network \(\mathcal{T}\) in this paper) and the selector network have the same number of hidden layers \(\in\{1,2,3\}\), number of units \(\in\{30,100,300,1000,3000\}\), with \(s_{p}=5,E_{1}=15000\). The joint training uses early stopping on the validation set, and returns an optimal feature set of size \(C\) and a trained operator network. The best performing operator network on the validation set is deployed on the test data.
## Appendix B Additional Results
This section provides additional results supporting the experiments presented in the main paper.
Table 7 contains results that repeat the experiment in table 1 using the NODDI model Zhang et al. (2012) instead of VERDICT. The first baseline uses the classical Fisher-matrix experimental design Alexander (2008) to compute the design \(D\) from Zhang et al. (2012), where \(C=99\). For the supervised feature selection approaches we use densely-sampled designs \(\bar{D}\) where \(\bar{C}=3612\) Ferizi et al. (2017). Similar to the results in table 1, table 7 shows TADRED outperforms classical experimental design with \(C\) set to 99 following the classical approach used in current practice. In addition, TADRED outperforms the supervised feature selection baselines where \(C=\frac{\bar{C}}{2},\frac{\bar{C}}{4},\frac{\bar{C}}{8},\frac{\bar{C}}{16}\). The optimized designs enable us to estimate the widely used NODDI parameters in shorter scan times, opening the potential for a wider range of clinical applications. All information on designs, models, and data is in section F.1.
Table 8 shows extra results within the experiment documented in figure 2. We consider an additional feature set subsample size \(C=36\), which extends the results in figure 2 and shows TADRED outperforms the baselines on 17/18 comparisons on clinically useful downstream metrics. This is beneficial because pressure on time in clinical MRI protocols is intense: many different MR contrasts are informative, but patient time in the scanner is limited. Therefore shorter acquisition protocols for these widely informative downstream metrics (parametric maps) enable their exploitation in a wider range of clinical studies and applications.
Table 9 shows additional results on the MUDI data in table 2. This experiment compares TADRED with the supervised feature selection baselines following settings in the original MUDI challenge. Evaluation uses the MSE metric as in the original challenge. Further details are in appendix F.2.
Table 10 shows additional results on the AVIRIS data presented in table 3. Here, we only use data from the north-to-south flight. Improvements of TADRED over the supervised feature selection baselines are similar to that in table 3.
## Appendix C Further Analysis
### Analyzing the Effect of Randomness on the Chosen Feature Set
Table 11 examines how changing the random seed, which affects network initialization and data shuffling, impacts the feature set chosen. Results show TADRED performs favorably compared to alternative approaches and mostly chooses the same features.
We examine how varying the size of the densely-sampled design \(\bar{D}\) (used to create \(X_{\bar{D}}\)) affects performance. Across \(10\) random seeds, we randomly sample the design from Panagiotaki et al. (2015b) to create a custom \(\bar{D}\) with \(\bar{C}\) elements. Training is on fixed network sizes for a single subsampling rate \(C_{2}=C=14\). We use \(10\%\) of the training data within the experimental settings of table 1. Results are in figure 4 and exemplify typical behavior: performance is reasonably stable for large \(\bar{C}\), but a phase change occurs as \(\bar{C}\) nears \(C\), where performance decreases rapidly as the set of samples to choose from becomes too sparse.
### TADRED Variant with Random Selection
We tested a modified training procedure that works in the 'same manner' as the original implementation of TADRED whilst the scores chosen are random. Across different modifications, in the settings of table 5, results (MSE) are more than 10% worse, even worse than 'w/o iterative subsampling'. Thus, although the 'gradual aspect' of TADRED's training procedure improves performance (in fact, line 2 of the ablation study in table 5 already demonstrates this: the 'less gradual' scenario without iterative subsampling decreases performance), the learning of the scoring network is working as intended and further improves results.
## Appendix D Computational Cost of Different Approaches and Infrastructure
It is difficult to compare the computational cost of TADRED against SSEFS and FIRDL. The official implementations, described in appendix A, use different machine learning frameworks and all use customized early stopping. In particular, SSEFS has a three-stage procedure (the first two stages are large hyperparameter searches) whose stages are completed consecutively; TADRED does not require this. TADRED and FIRDL each train two distinct networks, whilst SSEFS uses four; as such, training costs are somewhat comparable if network sizes are taken to be the same. Practical requirements in all cases were reasonable, and training for all methods for each experiment was performed within 24 hours. As an example, we compare the time to run the various methods for the results in table 6 per \(C\), with the network sizes fixed and no hyperparameter search over different network sizes. Training times are: random supervised feature selection (baseline) 505s; SSEFS (baseline) 1934s (using only a single run per seed for the first two stages; as previously noted, for other results in the paper this is much slower, as the method proposes a computationally expensive sequential hyperparameter search); FIRDL (baseline) 1756s; TADRED (the new approach) 1988s. The random supervised feature selection baseline is by far the most computationally economical, as we expect, because it uses no iterative search. TADRED's computational cost is similar to the two state-of-the-art supervised feature selection baselines, SSEFS and FIRDL.
\begin{table}
\begin{tabular}{c c c c c c} & & \(C=110\) & 55 & 28 & 14 \\ \hline SSEFS & data mean & 1.06 & 1.28 & 1.89 & 4.58 \\ FIRDL & zeros & 2.22 & 2.14 & 3.09 & 4.05 \\ \hline TADRED & data median & 1.03 & 1.19 & 1.80 & 2.55 \\ TADRED & data mean & 1.03 & 1.20 & 1.79 & 2.80 \\ TADRED & zeros & 1.03 & 1.19 & 1.79 & 2.51 \\ \end{tabular}
\end{table}
Table 12: Comparison of the choice of \(X_{\bar{D}}^{\text{fill}}\). Experimental settings follow table 1.
Figure 4: Analyzing the performance on different densely-sampled designs \\(\\bar{D}\\) where \\(|\\bar{D}|=\\bar{C}\\) and \\(C=14\\). Settings follow table 1.
\\begin{table}
\\begin{tabular}{c c c c c} & \\(C=110\\) & 55 & 28 & 14 \\\\ \\hline Random & 32.8 & 15.2 & 6.82 & 2.87 \\\\ SSEFS & 81.2 & 71.4 & 62.2 & 75.9 \\\\ FIRDL & 34.3 & 18.1 & 48.8 & 41.0 \\\\ \\hline TADRED & 74.3 & 82.1 & 84.6 & 59.1 \\\\ \\end{tabular}
\\end{table}
Table 11: Mean Jaccard index between chosen measurements across 10 random seeds; experimental settings as in the table 1 VERDICT simulations.
Exploratory analysis and development was conducted on a mid-range (as of 2023) machine with an AMD Ryzen Threadripper 2950X CPU and a single Titan V GPU. All experimental results reported in this paper were computed on low-to-mid range (as of 2023) graphics processing units (GPUs): GTX 1080 Ti, Titan Xp, Titan X, Titan V, RTX 2080 Ti. We ran jobs on a high-performance computing cluster shared with other users, allowing multiple jobs to run in parallel.
## Appendix E Extended Related Work
This section provides further information to section 2, detailing previous work related to our problem.
**Classical and Other Recent Supervised Feature Selection Approaches** Supervised feature selection approaches are either i) 'filter methods', which select features using some proxy metric independent of the final task, ii) 'wrapper methods', which use the task to evaluate feature set performance, or iii) 'embedded methods', which couple the feature selection with the task training. The embedded methods FIRDL Wojtas & Chen (2020) and SSEFS Lee et al. (2022) are state-of-the-art, outperforming classical approaches, e.g. recursive feature elimination (RFE)-original Guyon et al. (2002), BAHSIC Song et al. (2007; 2012), mRMR Peng et al. (2005), CCM Chen et al. (2017), RF Breiman (2001), DFS Li et al. (2016), LASSO Tibshirani (1996), L-Score He et al. (2005), and the recent deep learning-based CE Abid et al. (2019), STG Yamada et al. (2020), DUFS Lindenbaum et al. (2021). More recent approaches extend the supervised feature selection paradigm to limit the false discovery rate Hansen et al. (2022), few-shot learning Kumagai et al. (2022), discovering groups of predictive features Imrie et al. (2022), the unsupervised setting Sokar et al. (2022), few-sample classification problems Cohen et al. (2023), dynamic feature selection Covert et al. (2023), federated learning Castiglia et al. (2023), and identifying high-dimensional causal features Quinzan et al. (2023). They are not designed for the standard regression-based supervised feature selection problem considered in this paper.
**Other Recent Experimental Design Approaches** Techniques for experimental design have been developed for causal modeling Tigas et al. (2022); Zhang et al. (2022); Teshnizi et al. (2020), linear models Fontaine et al. (2021); Mutny & Krause (2022), online learning Arbour et al. (2022), active learning Kaddour et al. (2020), drug discovery Mehriou et al. (2022), reinforcement learning Mehta et al. (2022), A/B testing Nandy et al. (2021), panel-data settings Doudchenko et al. (2021), bandit problems Camilleri et al. (2021), balancing competing objectives with uncertainty Malkomes et al. (2021), temporal treatment and control Glynn et al. (2020), causal discovery when interventions can be costly or risky Tigas et al. (2023), designing pricing experiments Simchi-Levi & Wang (2023), contextual optimization for Bayesian experimental design Ivanova et al. (2023), genomics Lyle et al. (2023), treatment effects in large randomized trials Connolly et al. (2023), and learning causal models with Bayesian approaches Amadani et al. (2023). These are not applicable to the problem setting we consider. Approaches Zheng et al. (2020); Kleinegesse & Gutmann (2020); Jiang et al. (2020) are older sequential experimental design approaches, whilst Zaballa & Hui (2023) is contemporary to this work; they face the same issues as Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021) (discussed in section 2), which focus on estimating model parameters and are mostly demonstrated on small-scale problems that do not scale up to the high-dimensional problems we face in experimental design for image-channel selection.
**Experimental Design in qMRI** In qMRI the design \\(D\\) is known as an 'acquisition scheme'. One standard task is 'parameter mapping', first estimating biologically-informative model parameters by voxel-wise model fitting, to then obtain downstream metrics Alexander et al. (2019). This provides information that is not visible directly from the images, such as microstructural properties of tissue. However, acquisition time (corresponding to \\(C=|D|\\)) is limited by factors of cost and the ability of (often sick) subjects to remain motionless in the noisy and claustrophobic environment of the scanner. Thus experimental design can be crucial to support the most accurate image-driven diagnosis, prognosis, or treatment choices. Many clinical scenarios use \\(D\\) based on intuition loosely guided by understanding of the physical systems under examination, but this can lead to highly suboptimal designs particularly for complex models. However, some studies optimize the design using the Fisher information matrix, e.g. Alexander (2008); Cercignani & Alexander (2006).
Lengthy MRI acquisitions corresponding to \(\bar{D}\), which enable our new experimental design paradigm, are easily made on a few subjects, but are not feasible in routine patient imaging. However, such lengthy acquisitions are often made in the design phase of quantitative imaging techniques, e.g. as in Ferizi et al. (2017). We note also that several distinct experimental design problems arise in MRI.
Here we focus on estimating per-voxel parameter values, but others, e.g. Zbontar et al. (2018); Knoll et al. (2020); Muckley et al. (2021); Fabian et al. (2022); Yaman et al. (2022), focus on how best to subsample the k-space. Our approach is complementary to and may be combined with those: they expedite the acquisition of each individual channel, whereas we identify a compact/economical set of channels.
**Experimental Design in Hyperspectral Imaging** Hyperspectral imaging (a.k.a. imaging spectroscopy) obtains pixel-wise information of an object-of-interest across multiple wavelengths of the electromagnetic spectrum from specialized hardware Manolakis et al. (2016). This produces an 'image cube': a 2D image with \(C\) channels, where, as in qMRI, image channels correspond to the measurements. Experimental design involves choosing a design consisting of wavelengths and/or filters Arad & Ben-Shahar (2017); Waterhouse & Stoyanov (2022); Wu et al. (2019), which controls the image channels, and where most current practice uses uniform spacing for the wavelengths Thompson et al. (2017). For our paradigm, expensive devices can acquire large numbers of images with different spectral sensitivity simultaneously to provide training data for the design of much cheaper deployable devices Stuart et al. (2020). Recovering high-quality information from the few wavelengths chosen for particular applications by experimental design reduces acquisition cost, increases acquisition speed, avoids misalignment, reduces storage requirements, and speeds up clinical adoption. Hyperspectral imaging has many applications Khan et al. (2018), from multiple modalities in medical imaging Lu & Fei (2014); Karim et al. (2022), to remote sensing Baumgardner et al. (2015) and environmental monitoring Stuart et al. (2019).
## Appendix F Data and Task Details for Each Experiment

### Simulations with the VERDICT and NODDI Biophysical Models
This section describes the VERDICT and NODDI models and the experimental settings used in tables 1 and 7. Exact code to perform the simulations is in Code Link.
The VERDICT (Vascular, Extracellular and Restricted Diffusion for Cytometry in Tumors) model Panagiotaki et al. (2014) maps histological features of solid-cancer tumors, particularly for early detection and classification of prostate cancer Panagiotaki et al. (2015); Johnston et al. (2019); Singh et al. (2022). The VERDICT model includes the parameters: \(f_{I}\) the intra-cellular volume fraction, \(f_{V}\) the vascular volume fraction, \(D_{v}\) the vascular perpendicular diffusivity, \(R\) the mean cell radius, and \(\mathbf{n}\), a 3D vector defining the mean local vascular orientation.
The NODDI (Neurite Orientation Dispersion and Density Imaging) model Zhang et al. (2012) maps parameters of the cellular composition of brain tissue and is widely used in neuroimaging studies in neuroscience, such as the UK Biobank study Alfaro-Almagro et al. (2018), and in neurology, e.g. in Alzheimer's disease Kamiya et al. (2020) and multiple sclerosis Grussu et al. (2017). The NODDI model includes the tissue parameters: \(f_{ic}\) the intra-cellular volume fraction, \(f_{iso}\) the isotropic volume fraction, the orientation dispersion index (ODI) that reflects the level of variation in neurite orientation, and \(\mathbf{n}\), a 3D vector defining the mean local fiber orientation.
To conduct the simulations of the VERDICT and NODDI models, we employ the widely-used, open-source dmipy toolbox Fick et al. (2019). The code is available: Code Link. In each case, data simulation uses a known, fixed acquisition scheme, i.e. experimental design, in combination with a set of ground truth model parameters. We chose the ground truth model parameters \(\{\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{n}\}\) for voxels/samples \(i=1,\ldots,n\) by uniformly sampling parameter combinations from the bounds given in table 14. We chose these bounds as they approximate the physically feasible limits of the parameters.
The VERDICT data has numbers of samples \(n=1000K,100K,100K\) in the train, validation, and test splits, with target data \(Y\in\mathbb{R}^{n\times 8}\), \(\mathbf{\theta}_{i}\in\mathbb{R}^{8},i=1,\ldots,n\). The classical experimental design approach yields an acquisition scheme derived from the Fisher information matrix Panagiotaki et al. (2015), and here \(X_{D}\in\mathbb{R}^{n\times 20},C=20\). The approaches in supervised feature selection (including TADRED) also use a densely-sampled empirical acquisition scheme, designed specifically for the VERDICT protocol from Panagiotaki et al. (2015), and here \(X_{\bar{D}}\in\mathbb{R}^{n\times 220}\) with \(\bar{C}=220\) measurements.
The NODDI data has numbers of samples \(n=100K,10K,10K\) in the train, validation, and test splits, with target data \(Y\in\mathbb{R}^{n\times 7},\mathbf{\theta}_{i}\in\mathbb{R}^{7},i=1,\ldots,n\). The classical experimental design approach yields an acquisition scheme derived from the Fisher information matrix Zhang et al. (2012), and so \(X_{D}\in\mathbb{R}^{n\times 99},C=99\). The approaches in supervised feature selection use a densely-sampled empirical acquisition scheme from an extremely rich acquisition from Ferizi et al. (2017). This was designed for the ISBI 2015 White Matter Challenge, which aimed to collect the richest possible data to rank biophysical models, and required a single subject to remain motionless for two uncomfortable back-to-back 4-hour scans. Here \(X_{\bar{D}}\in\mathbb{R}^{n\times 3612}\) with \(\bar{C}=3612\) measurements.
We added Rician noise to all simulated signals, which is standard for MRI data Gudbjartsson & Patz (1995). The signal-to-noise ratio of the unweighted signal is \(50\), which is representative of clinical qMRI.
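As an illustration of this noise model, Rician noise at a given SNR can be generated as follows (our own sketch, assuming a unit-amplitude unweighted signal; the actual simulations use dmipy and the linked code):

```python
import numpy as np

def add_rician_noise(signal, snr=50.0, s0=1.0, rng=None):
    """Add Rician noise: magnitude of a complex signal with Gaussian noise.

    The Gaussian noise level is set by the unweighted signal s0 via
    sigma = s0 / snr; the noise-free signal is placed in the real channel.
    """
    rng = rng or np.random.default_rng()
    sigma = s0 / snr
    real = signal + rng.normal(0.0, sigma, size=signal.shape)
    imag = rng.normal(0.0, sigma, size=signal.shape)
    return np.sqrt(real**2 + imag**2)

noisy = add_rician_noise(np.linspace(1.0, 0.1, 10))
```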
### Multi-Diffusion (MUDI) Challenge Data
Data used in tables 2 and 9 are images from five in-vivo human subjects; they are publicly available MUDI Organizers (2022) and were acquired with the state-of-the-art ZEBRA sequence Hutter et al. (2018). This diffusion-relaxation MRI dataset has a 6D acquisition parameter space \(\mathbf{d}^{i}\in\mathbb{R}^{6}\):
\begin{table}
\begin{tabular}{c c c} \multicolumn{3}{c}{VERDICT Model} \\ \hline Parameter & Minimum & Maximum \\ \hline \(f_{I}\) & 0.01 & 0.99 \\ \(f_{V}\) & 0.01 & 0.99 \\ \(D_{v}\) (\(\mu\)m\({}^{2}\) ms\({}^{-1}\)) & 3.05 & 10 \\ \(R\) (\(\mu\)m) & 0.01 & 20 \\ \(\mathbf{n}\) & [-1 -1 -1] & [1 1 1] \\ \end{tabular}
\quad
\begin{tabular}{c c c} \multicolumn{3}{c}{NODDI Model} \\ \hline Parameter & Minimum & Maximum \\ \hline \(f_{ic}\) & 0.01 & 0.99 \\ \(f_{iso}\) & 0.01 & 0.99 \\ ODI & 0.01 & 0.99 \\ \(\mathbf{n}\) & [-1 -1 -1] & [1 1 1] \\ \end{tabular}
\end{table}
Table 14: Parameter ranges for simulating synthetic VERDICT and NODDI model data.
echo time (TE), inversion time (TI), b-value, and b-vector directions in 3 dimensions: \(b_{x},b_{y},b_{z}\). The data have 2.5mm isotropic resolution and field-of-view \(220\times 230\times 140\)mm, resulting in 5 3D brain volumes (i.e. images) with \(\bar{C}=1344\) measurements/channels, which here are unique diffusion-, \(T2^{*}\)-, and \(T1\)-weighted contrasts. More information is in Hutter et al. (2018); Pizzolato et al. (2020). Each subject has an associated brain mask; after removing outlier voxels this resulted in \(104520,110420,105743,132470,105045\) voxels for the respective subjects \(11,12,13,14,15\). For the experiment in table 2 we follow Blumberg et al. (2022) and perform 5-fold cross validation on the 5 subjects. For the experiment in table 9, we followed the original challenge Pizzolato et al. (2020) and took subjects \(11,12,13\) as the training and validation set, and subjects \(14,15\) as the unseen test set, where \(90\%\) and \(10\%\) of the training/validation-set voxels are used for training and validation, respectively.
### Human Connectome Project (HCP) Test-Retest Data
This section describes the data and model fitting procedure used in figure 2 and table 8.
This section utilizes WU-Minn Human Connectome Project (HCP) diffusion data, which is publicly available at www.humanconnectome.org (Test Retest Data Release, release date: Mar 01, 2017) Essen et al. (2013). The data comprises \\(\\bar{C}=288\\) volumes (i.e. measurements/channels), with 18 b=0 s mm\\({}^{-2}\\) (i.e. non-diffusion weighted) volumes, 90 gradient directions for b=1000 s mm\\({}^{-2}\\), 90 directions for b=2000 s mm\\({}^{-2}\\), and 90 directions for b=3000 s mm\\({}^{-2}\\). We used 3 scans for training (ID numbers \\(103818\\_1,105923\\_1,111312\\_1\\)), one scan for validation (\\(114823\\_1\\)) and one scan for testing \\((115320\\_1)\\), which produced numbers of samples \\(n=708724+791369+681650=2181743,774149,674404\\) for the respective splits. We used only voxels inside the provided brain mask and normalized the data voxelwise with a standard technique in MRI, by dividing all measurements by the mean signal in each voxel's b=0 values. Undefined voxels were then removed.
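The normalization step amounts to the following sketch (our own illustration; we assume here, hypothetically, that the 18 b=0 volumes occupy known column indices of a voxels-by-volumes array):

```python
import numpy as np

def normalize_by_b0(X, b0_idx):
    """Divide each voxel's measurements by its mean b=0 signal.

    X: (n_voxels, n_volumes) array; b0_idx: column indices of b=0 volumes.
    Voxels whose b=0 mean is non-finite or non-positive are removed.
    """
    b0_mean = X[:, b0_idx].mean(axis=1, keepdims=True)
    keep = np.isfinite(b0_mean[:, 0]) & (b0_mean[:, 0] > 0)
    return X[keep] / b0_mean[keep], keep

X = np.abs(np.random.default_rng(0).normal(1.0, 0.1, size=(100, 288)))
X_norm, keep = normalize_by_b0(X, b0_idx=np.arange(18))
```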
Diffusion tensor imaging (DTI) Basser et al. (1994), diffusion kurtosis imaging (DKI) Jensen & Helpern (2010), and mean signal DKI (MSDKI) Henriques (2018) are widely-used qMRI methods. Like NODDI and VERDICT, they use diffusion MRI to sensitize the image intensity to the Brownian motion of water molecules within the tissue, providing a window on tissue microstructure. However, whereas NODDI and VERDICT are designed specifically for application to brain tissue and cancer tumors, respectively, DTI and DKI are more general-purpose techniques that provide indices of diffusivity (e.g. mean diffusivity, MD), diffusion anisotropy (e.g. fractional anisotropy Basser & Pierpaoli (1996), FA), and the deviation from Gaussianity, or kurtosis (e.g. mean kurtosis Jensen & Helpern (2010), MK), that can inform on tissue integrity or pathology. Mean signal diffusion kurtosis imaging (MSDKI) is a simplified version of DKI that quantifies kurtosis using a simpler model that is easier to fit Henriques (2018). These techniques show promise for extracting imaging biomarkers for a wide variety of medical applications, such as mild brain trauma, epilepsy, stroke, and Alzheimer's disease Jensen & Helpern (2010); Ranzenberger & Snyder (2022); Tae et al. (2018).
To fit the DTI, DKI, and MSDKI biophysical models to the data and obtain the downstream metrics (parameter maps), we employ the widely-used, open-source DIPY library Garyfallidis et al. (2014). We followed standard practice for model fitting in MRI and used the least-squares optimization approach with default fitting settings. To remove outliers, values were clamped to DTI FA \(\in[0,1]\); DTI MD, AD, RD \(\in[0,0.003]\); DKI MK, AK, RK \(\in[0,3]\); MSDKI MSD \(\in[0,0.003]\); MSDKI MSK \(\in[0,3]\). Code for model fitting is in Code Link.
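The clamping can be expressed compactly, as in this sketch (our own code; the dictionary keys are illustrative names for the fitted parameter maps):

```python
import numpy as np

# Ranges used to clamp fitted parameter maps; the 0.003 bounds are
# consistent with diffusivities expressed in mm^2/s.
CLAMP_RANGES = {
    "DTI_FA": (0.0, 1.0),
    "DTI_MD": (0.0, 0.003), "DTI_AD": (0.0, 0.003), "DTI_RD": (0.0, 0.003),
    "DKI_MK": (0.0, 3.0), "DKI_AK": (0.0, 3.0), "DKI_RK": (0.0, 3.0),
    "MSDKI_MSD": (0.0, 0.003), "MSDKI_MSK": (0.0, 3.0),
}

def clamp_maps(maps):
    """maps: dict mapping metric name -> array of fitted values per voxel."""
    return {k: np.clip(v, *CLAMP_RANGES[k]) for k, v in maps.items()}
```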
The results in figure 2 and table 8 are all scaled by: DTI-FA \(\times 10^{2}\), DTI-MD \(\times 10^{9}\), DTI-AD \(\times 10^{9}\), DTI-RD \(\times 10^{9}\), DKI-MK \(\times 10^{2}\), DKI-AK \(\times 10^{2}\), DKI-RK \(\times 10^{2}\), MSDKI-MSD \(\times 10^{9}\), MSDKI-MSK \(\times 10^{2}\).
### Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) Data and Task
This section describes the data and task considered in tables 3, 10.
The Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) is a highly-specialized hyperspectral device for earth remote sensing commissioned by the Jet Propulsion Laboratory (JPL). It obtains acquisitions from adjacent spectral channels (bands) between the wavelengths 400nm and 2500nm. It has been flown on four different aircraft and deployed worldwide, for purposes such as examining the effect on and rehabilitation of forests affected by large wildfires, the effect of climate change, and other applications in atmospheric studies and snow hydrology. More information is available in Jet Propulsion Laboratory (JPL) (2023); Simmonds & Green (1996); Thompson et al. (2017) and on the webpage [https://aviris.jpl.nasa.gov](https://aviris.jpl.nasa.gov).
Data used were obtained in June 1992, when the Purdue University Agronomy Department commissioned AVIRIS to obtain two ground images of the 'Indian Pine' area to support soils research Baumgardner et al. (2015), from two flight lines: east-to-west and north-to-south. This is publicly available Baumgardner et al. (2022). The data are two 'image cubes' corresponding to a 2 mi\({}^{2}\) area with 20 m\({}^{2}\) pixel size and \(\bar{C}=220\) channels.
Footnote 2: [https://www.jpl.nasa.gov/](https://www.jpl.nasa.gov/)
Data from the north-to-south flight are used for training and validation. This consists of 1644292 pixels, of which 90%/10% were used for training/validation. Data from the east-to-west flight were used as test data, which consists of 1134672 pixels. We removed outliers from both images (details in Code Link) and then normalized the images channel-wise so that the 99th percentile is \(255\) (the maximum in standard images).
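The channel-wise normalization corresponds to the following sketch (our own illustration on random data):

```python
import numpy as np

def normalize_channels(cube, percentile=99.0, target=255.0):
    """Scale each channel so its `percentile` value maps to `target`.

    cube: (n_pixels, n_channels) flattened hyperspectral image.
    """
    p = np.percentile(cube, percentile, axis=0)  # per-channel 99th percentile
    p = np.where(p > 0, p, 1.0)                  # guard degenerate channels
    return cube * (target / p)

cube = np.abs(np.random.default_rng(0).normal(size=(1000, 220)))
normalized = normalize_channels(cube)
```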
The objective is to examine whether our supervised feature selection approaches can reconstruct the entire image, typical of the ground cover over Indiana (the location of 'Indian Pine'), from a subset of wavelengths.
### Estimation of Oxygen Saturation Data and Task
This experiment and its data follow directly from Waterhouse & Stoyanov (2022). Data were generated from the code presented in Waterhouse & Stoyanov (2022), with assistance from its author.
Figure 6: 2D brain slices from 3D MRI scans for different measurements/features/channels (values of \(C\)) for the Multi-Diffusion (MUDI) challenge (top) and HCP (middle) data. Bottom: different wavelengths for the 'Indian Pine' remote sensing hyperspectral data, north-to-south flight.
# EOSDB: The Database for Nuclear EoS
Chikako Ishizuka
Research Institute for Science and Technology, Tokyo University of Science, Yamazaki 2641, Noda, Chiba 278-8510, Japan [email protected]
Takuma Suda
Research Center for the Early Universe, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan [email protected]
Hideyuki Suzuki
Faculty of Science and Technology, Tokyo University of Science, Yamazaki 2641, Noda, Chiba 278-8510, Japan [email protected]
Akira Ohnishi
Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawa Oiwakecho, Sakyo-ku, Kyoto 606-8502, Japan [email protected]
Kohsuke Sumiyoshi
Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501, Japan [email protected]
Hiroshi Toki
Research Center for Nuclear Physics, Osaka University, Mihogaoka 10-1, Ibaraki, Osaka 567-0047, Japan [email protected]
## 1 Introduction
The nuclear equation of state (EoS) describes the properties of dense nuclear matter, whose density typically ranges over \(10^{9-15}\) g/cm\({}^{3}\). It plays an important role both in nuclear physics and astrophysics, because the EoS of dense matter is directly related to heavy nuclei as well as to dense matter in compact objects such as neutron stars and black holes formed after supernova explosions (SNe). For example, simulations of neutron star mergers and/or SNe have been performed to constrain nuclear models (Hotokezaka et al., 2013; Bauswein et al., 2012). Furthermore, in an effort to connect nuclear EoSs with hydrodynamical simulations of SNe, Typel et al. (2013) developed a database of EoSs for SNe (CompOSE, by the CompStar team) for finite-temperature systems. It provides EoS tables with the nuclear statistical equilibrium (NSE) approximation for the inhomogeneous phase below the nuclear saturation density.
From the observational point of view, X-ray observations of neutron stars can constrain nuclear EoSs through the determination of masses and radii using the Tolman-Oppenheimer-Volkoff equation, although these quantities are influenced by the model assumptions for neutron star atmospheres. Once the mass and radius of a neutron star are determined observationally, the EoS can be well constrained by the structural properties of neutron stars inferred from them. Therefore, the discoveries of massive neutron stars with \(\sim 2M_{\odot}\) (PSR J1614-2230 with \(1.97\pm 0.04M_{\odot}\) (Demorest et al., 2010) and PSR J0348+0432 with \(2.01\pm 0.04M_{\odot}\) (Antoniadis et al., 2013)) cast doubt on the existing EoSs derived from nuclear physics. Since the central densities of such stars become high enough, exotic constituents and exotic states, such as hyperons (baryons with strange quarks), meson condensation, and quarks, are expected to appear, but most of the proposed EoSs with exotic constituents cannot sustain massive neutron stars (Lattimer & Prakash, 2010).
Several ideas have been proposed to retain the consistency between astronomical observations and laboratory experiments. Miyatsu et al. (2012) consider inter-baryon interactions that suppress the appearance of hyperons. Adjusting hyperon-nucleon or hyperon-hyperon interactions is another way to make the EoS stiff. Weissenborn et al. (2012) proposed an EoS that is stiff enough to support massive neutron stars, using the meson octet and singlet coupling contents and the meson-hyperon coupling strengths as fitting parameters. Sulaksono & Agrawal (2012) improved this idea to satisfy nuclear experimental results by adjusting the strengths of these interactions, which are experimentally unknown at present. Masuda et al. (2013) assumed a crossover transition from the nuclear phase to the quark phase to support \(2M_{\odot}\) neutron stars. On the other hand, Tsubakihara & Ohnishi (2013) argued that three-body interactions should be investigated to realize stiffer EoSs. More investigations are ongoing with ab initio calculations of nuclear EoSs (Takano et al., 2010; Aoki et al., 2012). The magnetic field may also change the effective EoS, because some isolated pulsars have magnetic fields (\(10^{14-15}\) G) about \(10^{3}\) times stronger than those of typical neutron stars. Potekhin & Yakovlev (2012) and Cheoun et al. (2013) argue that such strong fields can be responsible for stiff EoSs even without any other additional interactions. Thus, there exist hundreds of published EoSs from the nuclear physics community with state-of-the-art input physics taken into account.
Contrary to the intensive exploration of the EoS by the nuclear physics community, the EoSs adopted in astrophysical contexts are very limited. Thanks to many efforts to provide EoSs for astrophysics, improved nuclear EoSs have become available, in addition to the present standard EoSs, e.g., polytropic EoSs, the non-relativistic Lattimer-Swesty EoS (Lattimer & Swesty, 1991), which advanced the first nuclear EoS for astronomical use (Hillebrandt et al., 1984), and the relativistic Shen EoS (Shen et al., 1998). A crucial aspect for the astrophysics application of nuclear EoSs has been the incompressibility \(K\!=\!9\rho_{0}^{2}\,\partial^{2}E/\partial\rho_{B}^{2}|_{\rho_{B}=\rho_{0}}\) at the saturation density \(\rho_{B}\!=\!\rho_{0}\!\simeq\!0.16\) fm\({}^{-3}\), which is related to the recoil of compressed matter. The incompressibility of nuclei could be a key to determining the maximum mass of neutron stars. It has been studied using nuclear compression modes, such as the Iso-Scalar Giant Monopole Resonance (ISGMR) and the Iso-Scalar Giant Dipole Resonance (ISGDR). The recommended value of the nuclear matter incompressibility is about 230 - 270 MeV, estimated from \({}^{208}\)Pb and \({}^{144}\)Sm data using different types of interactions (Colo' & van Giai, 2004). At present, these properties of symmetric nuclear matter have been well determined both in nuclear theories and experiments. It is well known that non-relativistic models give smaller incompressibility, while relativistic models give larger incompressibility; this model dependence comes from the treatment of the nuclear surface, which is always involved in the experimental data. Lattimer & Swesty (1991) use three choices of the incompressibility that give a reasonable constraint on nuclear EoSs. Shen et al. (1998), on the other hand, provided the first EoS in tabular format with relativity taken into consideration, applicable to astronomical conditions where relativity becomes important. This is also the first EoS table for astrophysics applications considering experimental information on neutron-rich and heavy nuclei, which corresponds to the charge asymmetry and matter-like properties of the nuclear many-body system.
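For a tabulated \(E(\rho_{\rm B})\), such as those distributed through the EOSDB, the incompressibility can be evaluated numerically. The following minimal Python sketch (our own illustration; the schematic parabolic EoS is constructed purely for the test and is not taken from any compiled dataset) applies the definition above:

```python
import numpy as np

def incompressibility(rho, energy, rho0=0.16):
    """K = 9 rho0^2 d^2E/drho^2 at rho0, for tabulated E(rho) [MeV, fm^-3]."""
    d2e = np.gradient(np.gradient(energy, rho), rho)  # numerical 2nd derivative
    return 9.0 * rho0**2 * np.interp(rho0, rho, d2e)

# Schematic EoS with K = 240 MeV built in: E = E0 + (K/18) ((rho-rho0)/rho0)^2
rho = np.linspace(0.08, 0.24, 81)
energy = -16.0 + (240.0 / 18.0) * ((rho - 0.16) / 0.16) ** 2
print(incompressibility(rho, energy))  # recovers ~240 MeV
```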
In order for an EoS to be realistic, three requirements must be fulfilled: the saturation point, the incompressibility, and the symmetry energy. Symmetric nuclear matter has its minimum energy (per nucleon) \(E=E_{0}\simeq-16\) MeV at the saturation density. This is the so-called "nuclear saturation property". In addition to these criteria, the symmetry energy and the slope of the symmetry energy with respect to baryon density have drawn much attention in the last decade. The symmetry energy is also a dominant component of the bulk nuclear property and has a great impact on the understanding of pure neutron matter, because these quantities can indirectly constrain the properties of pure neutron matter, which are difficult to access due to the lack of direct experimental approaches. It is to be noted that the symmetry energy can be considered, to zeroth order, the energy difference between symmetric nuclear matter and pure neutron matter. Thanks to the efforts to estimate the symmetry energy, several experiments are planned to constrain the EoSs at very low densities. The symmetry energy is expected to be determined experimentally as a function of baryon density for \((1/3\,\)-\(\,1)\rho_{0}\) at MSU, \((1\,\)-\(\,2)\rho_{0}\) at RIKEN, and \((2\,\)-\(\,3)\rho_{0}\) at GSI, using the mass formula, isobaric analogue states (IAS), heavy-ion collisions (HIC), and the neutron skin thickness. The evaluated values of the symmetry energy \(E_{\rm sym}\) and its slope \(L\) are \(31\pm 3\) MeV and \(54\pm 13\) MeV, respectively (Chen et al., 2010). As for the determination of pure neutron matter EoSs at extremely low densities, they can be directly measured with cold-atom experiments on dilute Fermi gases.
These experimental constraints on nuclear properties generally involve model dependence in their analysis procedures. In particular, the symmetry energy is very difficult to measure directly with current experimental techniques, because of the mutual dependence between experimental analyses and nuclear models at high densities.
To overcome this difficulty in an appropriate way, we have constructed a database for nuclear EoSs (EOSDB), assembling from the literature as many data as possible that address the four criteria discussed above. This enables us to integrate the pieces of information about the constraints on nuclear EoSs available in the literature, because few papers satisfy all these criteria, owing to the various concerns regarding the properties of nuclear matter. In assembling the data, we pay much attention to the model dependences ascribable to the adopted EoSs. In particular, it is difficult to remove the model dependences caused by the symmetry energy, since it can only be derived from raw experimental data using theoretical models. The new database system helps to check the properties of each dataset through comparisons of different EoSs in a unified scheme. The EOSDB provides EoSs together with the nuclear saturation properties, the symmetry energy properties, and information related to the mass and radius of neutron stars. The database is also designed for analyzing model dependence by compiling the theoretical models and assumptions adopted in each EoS.
The paper is organized as follows. The details of the EOSDB are described in §2. §3 describes its usage. An example of a model analysis using the EOSDB is given in §4. A summary and discussion follow in §5.
## 2 Characteristics of the EOSDB
The basic structure of the EOSDB is common with the SAGA database (Suda et al., 2008; Suda et al., 2011), a database for observed metal-poor stars. The EOSDB is operated by MySQL and CGI. The retrieval and data plotting systems are constructed with Perl and JavaScript. We prepared libraries for the compilation of the EoSs: the list of major journals, the basic physics constants used in the calculations, classifications of the constituents of dense matter, methods, thermodynamical variables, and physical quantities related to symmetry energies. A unit record is defined by the data for one interaction available in an individual paper. Each record contains data compiled according to the format defined by the library. The most important quantities in the database are those related to the basic EoS properties, such as thermodynamical quantities, the symmetry energy \(E_{\rm sym}\), its slope \(L\), and the incompressibility \(K\), as functions of baryon density for various models.
The selection criteria of papers are as follows with the decreasing order of priority:
* Articles containing data distributed to the public
* Articles often cited as a standard EoS (e.g., Lattimer & Swesty, Shen, Akmal-Pandharipande-Ravenhall)
* Articles containing constraints on the EoSs
Most of the compiled data are derived from theoretical models, although some experimental and observational constraints are also included. Table 1 gives the list of compiled papers, selected from hundreds of candidate papers dealing with EoSs published in the last decade. It is to be noted that the number of candidate papers has increased drastically after the discovery of the massive neutron stars.
In the current version of the database, all of the records are based on models at \(T\!=\!0\) MeV for symmetric nuclear matter and/or pure neutron matter. This allows us to focus on the most basic features of hadronic matter and to see the differences among models. If the basic parameters of an EoS cannot be derived from theoretical models at exactly zero temperature, we adopt those for a finite temperature, using as low a temperature as possible.
As of Aug. 2014, 36 data sets have been compiled (see Table 1). The data ID and the reference ID in the first and second columns, respectively, specify a record in the database. The IDs can be found on the web site when using the data search and plot system (see §3). The other columns in the tables describe the quantities related to the saturation properties of the data. Constituents denote the components considered in the data. The quantities in each bottom line, \(\rho_{0}\) [fm\({}^{-3}\)], \(E_{0}\) [MeV], \(E_{\rm sym}\) [MeV], \(L\) [MeV], \(K\) [MeV], are the saturation density, binding energy, symmetry energy, slope of the symmetry energy with respect to baryon density at \(\rho_{0}\), and incompressibility, respectively. The details of these quantities are described in the following subsections. The energy away from the saturation point, and the other quantities such as pressure and entropy, can be downloaded from the online database.
All the data compiled in the database are also stored in text format and are accessible through the web site so that users can inspect individual data in more detail. The text data include the following information.
1. Bibliographic information
2. Instructions on how to handle the tabulated data, if any
3. Physics constants used in each EoS
4. Assumed constituents and conditions
5. Theoretical/Experimental/Observational methods to derive each EoS and its strong/weak points and comments.
6. Saturation density
7. Saturation energy
8. Symmetry Energy properties
9. Maximum cold neutron star mass (if calculated data exist)
10. Source of data (whether tabulated data or numerical codes are publicly distributed)
The EOSDB web site also provides the link to the original papers in which full information should be available.
The data for nuclear EoSs are taken from open EoS tables and/or compiled using software named GSYS 1 to read viewgraphs. At present, most of the data in the EOSDB have been scanned from viewgraphs in the papers using GSYS. These data possibly include systematic errors, because the work relies on manual operation of the software.
Footnote 1: distributed by the JCPRG at [http://www.jcprg.org/gsys/gsys-e.html](http://www.jcprg.org/gsys/gsys-e.html)
Table 2 presents physical quantities registered in the database. In the following, we give their details along with our categories and items. As explained in the previous section, these quantities are essential to characterize and constrain nuclear models.
### Bibliography and Attribute
The identifier of a compiled paper is one of the primary keys of the database. We include bibliographic data in the database and label them with a specific ID (Data ID), as shown in the top columns of Table 1. We also give identifiers to the data in individual papers (Reference ID). The reference ID is given in the following format: [Surname of the first author][Journal code][Year]_[Comment]. The comment in the ID is added only if two or more EoSs are compiled from a single paper.
Compiled data are classified according to the approach taken in the paper to deduce a constraint on the EoS: theory, experiment, or both. Observational determinations of the mass and radius of neutron stars are classified as an experimental approach in our database.
### Constituents, Methods, Physics Constants
Constituents and methods adopted in the papers are helpful when users try to reproduce the original data by themselves. The constituents of nuclear matter that we registered are given in Table 2. The symbols \(N,Y,\alpha,A,Q\) and \(L\) correspond to nucleons, hyperons, \(\alpha\)-particles, nuclei, quarks, and leptons, respectively. Other exotic particles can be added to the library if needed. Note that the \(L\) in the list of constituents is different from the symmetry energy slope \(L\); they are treated as different quantities in the database. The constituents of an EoS are important information in the database, since they directly affect its energy and/or pressure. Users should check the components of a system when they compare different sets of EoSs.
The database contains information about the theoretical frameworks, approximations, and assumptions adopted in the papers, selected from the data in the library. This is useful in surveying the model dependence of each physical quantity, as demonstrated in §4. We selected around 40 representative major theoretical frameworks, models, and approximations published in the last decade and registered them as method codes in the library. If two or more models or approximations are used in one record (for example, the relativistic mean field and the random phase approximation), they are all compiled ("RMF" and "RPA" in this case). In a future update, users will be able to use the methods as a query option in the search and plot system.
### EoS for \\(Y_{\\rm C}=0\\) and \\(0.5\\)
In the EOSDB, thermodynamical quantities for the charge ratios of 0.0 and 0.5 are compiled as functions of baryon density. The charge ratio is defined as follows,
\\[Y_{\\rm C}=(\\sum_{i=n,p,\\ldots}\\!\\!Q_{i}\\rho_{i}+\\sum_{j=e^{\\pm},\\mu^{\\pm}}\\!\\! Q_{j}\\rho_{j})/\\rho_{\\rm B}, \\tag{1}\\]
where \\(Q_{i}\\) and \\(\\rho_{i}\\) is the charge and number density of particle \\(i\\), respectively.
For \\(Y_{\\rm C}=0.0\\), the database for pure neutron matter is expected to play an essential role in imposing strong constraints on nuclear EoSs. However, the direct determinations of EoSs for pure neutron matter is extremely difficult except for low depinsities where \\(Y_{\\rm C}=0.0\\) can be reproduced in laboratory environment. From the theoretical point of view, the predictions for nuclear EoSs for pure neutron matter by different groups do not agree with each other, even for the same theoretical frameworks. This has been a controversy among theorists and brought about a motivation to compare different EoSs. It is to be noted that the EOSDB has two cases for \\(Y_{\\rm C}=0.0\\). One is a pure neutron matter consisting of neutrons, and another is a neutron star matter consisting of neutrons and a small number of protons and electrons. It is not meaningful to compare EoSs with different composition even if the data are provided for \\(Y_{\\rm C}=0.0\\).
If leptons are included in a system, users should consider the contribution of leptons and take special care when comparing with other EoSs. Most of the published data related to neutron stars contain leptons. The reason we include leptons in the definition of \(Y_{\rm C}\) is that some theoretical models do not provide any published data without leptons, which is the case in typical neutron star calculations. Whether or not leptons are included in a dataset is described in the text data provided on the EOSDB web site. Users can choose data without leptons only.
The dataset with \\(Y_{\\rm C}\\!=\\!0.5\\) needs a special attention when they are used. As discussed in SS1, we can investigate nuclear saturation properties from the EoS of symmetric nuclear matter. For \\(Y_{\\rm C}\\!=\\!0.5\\), we assume symmetric nuclear matter which consists of the equal number of neutrons and protons, or a charge symmetric system which includes hyperons (the other members of baryon octet). In the most of the datasets for \\(Y_{\\rm C}\\!=\\!0.5\\), available EoSs are basically for symmetric nuclear matter. We compile all the energy data with \\(Y_{\\rm C}\\!=\\!0.5\\) that provides nuclear saturation properties. This enables to evaluate the validity of compiled EoSs by verifying that the compiled energy satisfies the typical value of the saturation energy of \\(-16\\) MeV at the saturation density of \\(0.16\\) fm\\({}^{-3}\\). This check will be important for EoSs derived from theoretical models because it is still a big challenge for some ab initio models including lattice QCD to reproduce the saturation properties, starting from fundamental particle interactions. This is also demonstrated in SS4. The consistency of the saturation energy for phenomenological models, on the other hand, should be ensured because these models are determined to reproduce those properties.
### Symmetry Energy
The symmetry energy \\(E_{\\rm sym}\\)and its slope \\(L\\) are expressed as a parameter \\(a_{4}\\) that is a representative around the saturation density. The incompressibility \\(K\\) is also given in the similar expansion of the energy of symmetric nuclear matter. The EOSDB treats these properties as a function of baryon density, which enables to check the compiled EoSs explicitly. These constraints (\\(E_{\\rm sym}\\), \\(L\\), and \\(K\\)) are useful in the comparisons of the data taken from different papers.
The symmetry energy is defined in terms of a Taylor series expansion of the energy for nuclear matter as a function of the charge asymmetry \\(\\delta=1-2Y_{\\rm C}\\),
\\[E(\\rho_{\\rm B},\\delta) = E(\\rho_{\\rm B},0)+\\frac{1}{2!}
symmetry energy, i.e., energy difference between symmetric nuclear matter and pure neutron matter at arbitrary baryon densities.
The symmetry energy is available at any given baryon density from theoretical models in the literature, while the availability of experimental data is limited. The compiled data are also to be compared with experimental data. In current experimental setups, the symmetry energy is measured over a limited range of densities (\(\sim 1/3-1\) times \(\rho_{0}\)). However, ongoing and future experiments are expected to determine the symmetry energy at higher densities, up to \(\sim 3\rho_{0}\).
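The distinction between the quadratic coefficient in equation (2) and the simple energy difference can be made explicit numerically. In the sketch below (our own illustration with a schematic energy function at fixed \(\rho_{\rm B}\)), the finite-difference estimate recovers the quadratic coefficient, while the \(\delta=1\) difference also picks up the \(\mathcal{O}(\delta^{4})\) term:

```python
def esym_quadratic(e_of_delta, h=1e-3):
    """E_sym = (1/2) d^2E/d(delta)^2 at delta = 0, by central differences."""
    return 0.5 * (e_of_delta(h) - 2.0 * e_of_delta(0.0) + e_of_delta(-h)) / h**2

# Schematic E(delta) at fixed rho_B: quadratic term 32 MeV, quartic 1.5 MeV.
e = lambda d: -16.0 + 32.0 * d**2 + 1.5 * d**4

print(esym_quadratic(e))   # ~32.0 MeV: the symmetry energy of eq. (2)
print(e(1.0) - e(0.0))     # 33.5 MeV: pure-neutron minus symmetric matter
```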
## 3 Search and Plot System of the EOSDB
We have constructed a database subsystem for using the EOSDB. Users can access and select data based on various criteria, some of which are shown in Table 2. The selected data can be plotted in a viewgraph in the browser with user-specified axes, and the results can be downloaded from the server. All the data are linked to the text files and original papers where the required information is available.
### Data Search
Figure 1 shows a screen snapshot of the search and plot system. Search criteria are divided into three sections in the form of the system. The first section of the form provides criteria related to the axes of the diagram. The first line specifies the category of the search. In the current version, users can specify "Symmetry Energy" or "Thermodynamic Variables", which is intended to help specify the axes of the graph depending on the properties of EoSs; this option is under construction and will be activated in a future update. The next two lines are used to specify the quantities to draw in the graph. Users can select from the following quantities for each axis in the first column: baryon density \(\rho_{\rm B}\), symmetry energy \(E_{\rm sym}\), slope coefficient \(L\), incompressibility \(K\), energy, pressure, and entropy. Users can also enter one of these quantities in the text box in the second column. All these variables are given as functions of \(\rho_{\rm B}\) in the database; therefore, the default quantity for the X-axis is \(\rho_{\rm B}\). Those who are interested in the symmetry energy should choose \(E_{\rm sym}\), \(K\), or \(L\) for the Y-axis, while users interested in thermodynamic variables should choose energy, pressure, or entropy. The third column gives an option to specify the value of the charge ratio (\(Y_{\rm C}=0.0\) or \(0.5\)). This should be specified unless the graph axis is set to \(\rho_{\rm B}\). As described in §2.4, the option \(Y_{\rm C}=0.0\) means pure neutron matter or charge-neutral matter. It is recommended to check which condition is realized in the compiled data by tracing the links to the individual text data, as described in §2. The 4th and 5th columns set the range of values for each selected quantity, with the option in the 6th column of whether to include or exclude data that report the quantity with only an upper limit. In the 4th line, users can specify a required range in the data, if necessary, to select or remove data from plotting, e.g., by setting \(0\leq E/B\leq 500\) MeV.
The number of criteria can be extended to as many as desired by the user.
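The same range filtering can be reproduced offline on downloaded text data. A minimal sketch follows; the file name and column layout are assumptions for illustration, not the actual EOSDB schema:

```python
import numpy as np

# Hypothetical two-column EOSDB export: baryon density [fm^-3], E/B [MeV].
rho, e_per_b = np.loadtxt("eosdb_E0002_snm.dat", unpack=True, comments="#")

# Range criterion equivalent to setting 0 <= E/B <= 500 MeV in the web form.
mask = (e_per_b >= 0.0) & (e_per_b <= 500.0)
print(f"{mask.sum()} of {len(rho)} points pass the E/B criterion")
```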
The second section of the form is used to extract specific papers. Through these options, one can extract data by a specific author, journal, and range of publication years.
Retrieval options, such as the number of records to display in the resulting list and the ordering of the list, are set in the third section.
A screen snapshot of an example of a retrieved set of records is shown in Figure 2. The retrieved records are displayed in table format in the browser. The columns represent, from left to right, the checkbox to select data to be plotted, the reference ID, and the minimum and maximum values of the quantities selected as the X-axis and Y-axis of the plotted diagram, respectively. By following the links in the reference ID, one can trace the information on the data stored in text format, as listed in § 2. For selected data, the diagram is drawn in the web browser according to the chosen options, using the publicly available graphics software Gnuplot (see Figure 3). Graphs drawn in the browser are equipped with simple editing functions. The standard options are to change the labels, the position of the legend, and the scales and ranges of the graph. Users can also download the figures in various formats (png, eps, ps, and pdf, in color or in black and white). The plotted numerical data, as well as the script to reproduce the figure on the screen, can be downloaded from the server if one wishes to edit the graph in more detail. Numerical data are also accessible by tracing the link to each data set in the list. Users can upload their own data to the server to compare them with the plotted data.
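The server draws the figures with Gnuplot, but the downloaded numerical data can equally be plotted locally. A minimal sketch for comparing two downloaded datasets (the file names are assumptions, not actual EOSDB exports):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed file names for two downloaded EOSDB datasets (hypothetical).
for fname, label in [("eosdb_LS220.dat", "LS220"), ("eosdb_TM1.dat", "Shen (TM1)")]:
    rho, e = np.loadtxt(fname, unpack=True, comments="#")
    plt.plot(rho, e, label=label)

plt.xlabel(r"$\rho_{\rm B}$ [fm$^{-3}$]")
plt.ylabel(r"$E/B$ [MeV]")
plt.legend()
plt.savefig("eosdb_comparison.png")  # png/eps/pdf, as on the server
```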
It is recommended to refer to the detailed information when comparing plotted data within the system. In particular, the assumptions and methods adopted in theoretical models should be checked so that the comparison is based on appropriate conditions. In a future update of the system, this information will be added as a criterion for choosing the data.
## 4 Application to Model Analysis
We demonstrate how the data in the EOSDB can be used to analyze theoretical models. First, the EoSs widely used in the astrophysics community are examined using the symmetry energies registered in the EOSDB. Second, theoretical models are compared using the density dependence of the energy.
In Figure 4, we compare the energy per baryon (\(E/B\)) and the symmetry energy as functions of baryon density using two EoSs, Lattimer & Swesty's EoS (Lattimer & Swesty 1991, hereafter LS EoS) and Shen's EoS (Shen et al. 1998, hereafter Shen EoS), both of which are commonly used in astrophysical studies of neutron stars, supernovae and black holes. These data are also compared with experimental data in the bottom panel. Here, the datasets _LS180_, _LS220_, and _LS375_ denote the LS EoS with incompressibilities of 180, 220, and 375 MeV, respectively.
Each dataset is obtained by the following procedure. The LS EoS has been compiled using the analytic equations in Lattimer & Swesty (1991) that describe the energy as a function of baryon density at \(T\!=\!0\) MeV. It is to be noted that this EoS is only for uniform matter; this is due to a problem in computing an EoS with the distributed version of the program, which was supposed to give a table with nucleons, including leptons and photons, at low temperatures and low \(Y_{\rm C}\). The _LS375_ provides the best consistency with the parameter set for the Skyrme force, whose symmetry energy is consistent with experimental constraints. The Shen EoS data are taken from their EoS tables using the RMF parameter set _TM1_. In the tables, the inhomogeneous phase below \(\rho_{0}\) is treated by the Thomas-Fermi approximation. The EoSs with this parameter set give generally lower energies than those for uniform matter. In both types of EoSs, the symmetry energy is defined as the energy difference between pure neutron matter and symmetric nuclear matter. We compare the dataset "Niksic (2002)" with the LS and Shen EoSs in the bottom panel of Fig. 4. These data provide experimental constraints on the EoS and are compiled from Fuchs & Wolter (2006) using GSYS. We represent the constraint on the symmetry energy with error bars, while it is shown as a shaded area in the original figure. The constraint on the symmetry energy is obtained experimentally from \({}^{208}\)Pb and \(\alpha\) inelastic scattering data for the Iso-Vector Giant Dipole Resonance, using a density dependent relativistic mean field (DDRMF) parameter set, _DD-ME1_, and _PRA_ for excited modes. As shown in Table 1, the basic properties of these EoSs are (E\({}_{\rm sym}\), \(K\), M\({}_{\rm NS}^{\rm Max}\)) = (29.3 MeV, 220 MeV, 2.06\(M_{\odot}\)) for _LS220_ and (36.9 MeV, 281 MeV, 2.18\(M_{\odot}\)) for _TM1_. These are in good agreement with the recent experimental
Figure 1: Screen snapshot of the top page of the search and plot system for the EOSDB.
constraints on the value of \\(K\\) of 230 - 270 MeV.
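Since in both cases the symmetry energy is obtained as the energy difference between pure neutron matter and symmetric nuclear matter, that operation is easy to reproduce on two downloaded tables. A minimal Python sketch (the file names and density grids are assumptions, not actual EOSDB exports):

```python
import numpy as np

# Hypothetical downloaded tables: baryon density [fm^-3] vs E/B [MeV].
rho_pnm, e_pnm = np.loadtxt("eos_pnm.dat", unpack=True)   # Y_C = 0.0
rho_snm, e_snm = np.loadtxt("eos_snm.dat", unpack=True)   # Y_C = 0.5

# Interpolate PNM onto the SNM density grid and take the difference.
e_pnm_i = np.interp(rho_snm, rho_pnm, e_pnm)
e_sym = e_pnm_i - e_snm                                    # E_sym(rho_B)

print("E_sym near rho0:", np.interp(0.16, rho_snm, e_sym), "MeV")
```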
The large discrepancy between these two EoSs can be understood, at least speculatively, from the differences in their theoretical models. The major difference between the LS EoS and the Shen EoS is the condition assumed for the nuclear system. In the LS EoS, a modified Skyrme I force (SkI) (Vautherin et al. 1970) is used. The SkI can reproduce the properties of closed shell nuclei such as \({}^{16}\)O, \({}^{40,48}\)Ca, \({}^{90}\)Zr, and \({}^{208}\)Pb. They adjusted the incompressibility by adding a three-body interaction parameter; adding three-body interactions to Skyrme forces can be justified only if they reproduce experimental values. The dataset _TM1_, on the other hand, is produced by a parameter set of the relativistic mean field (RMF) that is adjusted to reproduce both the binding energies and the charge radii of proton-rich and neutron-rich nuclei as well as representative closed shell nuclei such as \({}^{8-20}\)C, \({}^{14-22}\)O, \({}^{28,34}\)Si, \({}^{40,48}\)Ca, \({}^{90}\)Zr, \({}^{116,124}\)Sn, and \({}^{184-214}\)Pb. The RMF models naturally involve relativistic effects, which are known to make an EoS stiffer than non-relativistic EoSs. The Shen EoS covers nuclear matter at high densities and various \(Y_{\rm C}\), which is useful in applications to astronomical phenomena such as supernovae and the formation of neutron stars.
It is also to be noted that there are limitations in the application of the Skyrme Hartree-Fock and RMF models. Both the Skyrme Hartree-Fock and the RMF models are used in experimental analyses of HIC data to constrain the symmetry energy and its slope with
Figure 2: Screen snapshot of the search result of the search and plot system of the EOSDB. In this case, the X-axis and Y-axis are set to \\(\\rho_{\\rm B}\\) and \\(E/B\\) (energy per baryon), respectively, for the category of thermodynamic variables. See text for the meanings of the columns in the table.
respect to \(\rho_{\rm B}\). The Skyrme Hartree-Fock models can describe various finite nuclei well at low energy, although they should be applied only below \(E/B<50\) MeV because it is difficult to determine a Skyrme parameter set that can reproduce both Pb and Sn at the same time (Stone et al., 2003). This appears to be in conflict with the fact that heavy ion collisions at high energies are required to derive the symmetry energy above \(\rho_{0}\). On the other hand, the RMF models can explain p-induced reactions even at high energies, but they show poor reproducibility of experimental data such as binding energies and charge radii for light nuclei. We should also note that the RMF includes only the direct interaction, and that the exchange interaction (the Fock term) might be necessary in a dense many-body system. Thus these major models have their advantages and disadvantages. However, Skyrme Hartree-Fock models have been widely used in analyses of the symmetry energy thanks to their plentiful variety. Some Skyrme forces have a peak in the symmetry energy at around the saturation density, while others show an almost monotonic increase of \(E_{\rm sym}\) as the density rises; that of the RMF models rises monotonically as a
Figure 3: Screen snapshot of the data plotting of the search and plot system of the EOSDB. Three sets of data were selected from the list shown in Fig. 2. See text for the editing functions provided by the system and the options to edit the diagram in the browser.
function of the density, in general. As shown in the bottom panel of Fig. 4, there is an increasing discrepancy of \\(E_{\\rm sym}\\) with increasing density between these two models. Experiments to constrain the symmetry energy are ongoing in such a high \\(\\rho_{\\rm B}\\) region.
The above discussion tells us why careful treatment of the saturation properties and the symmetry energy is needed. At around \(\rho_{\rm B}=0.1\) fm\({}^{-3}\), both the LS and Shen EoSs show reasonable agreement with the experimental constraint on the symmetry energy. Especially at around \(\rho_{0}\), _LS375_ satisfies the constraint. However, it has too large a value of the incompressibility compared with that constrained by experiments, i.e., 230 - 270 MeV. The other datasets, _LS220_ and _LS180_, also show reasonable agreement with the symmetry energies around the saturation density. However, they do not provide the best fit to the result with the SkI' force, which is fitted to reproduce various closed shell nuclei with an incompressibility of \(K=370\) MeV. In addition, the incompressibility is smaller for _LS220_ and _LS180_ than that constrained by experiments. The saturation properties of the Shen EoS are similar to those of the softer EoSs like _LS220_ and _LS180_. In the Shen EoS, the symmetry energy seems to be larger by a few MeV than the constraint around \(\rho_{0}\), even though it agrees well with the recent experimental constraint on the symmetry energy, 31 \(\pm\) 3 MeV at the saturation density \(\rho_{0}\). Its incompressibility of \(K=281\) MeV is also slightly large compared with 230 - 270 MeV. In conclusion, it is difficult for the EoSs widely accepted by the astrophysics community to satisfy the constraints on both the incompressibility and the symmetry energy simultaneously. This may be caused by the model dependence contained in the constraints themselves, because the experimental analyses have been performed with various models, in some cases outside the conditions for which those analyses should be applied.
Figure 5 demonstrates another advantage of using the EOSDB. It compares the energy as a function of baryon density for results from different theoretical models. We present models of the RMF, which is based on a phenomenological framework, and those of variational methods, which are based on ab initio calculations.
Along with the different characteristics of theoretical models, the compiled models of nuclear matter can be divided into two groups, as shown in Tables 3, 4 and 5. Each table has a list of models and characteristics, and the mass and radius of neutron stars together with the corresponding data ID and the reference ID in our database. In these tables, the symbol M\({}_{\rm NS}^{\rm Max}\) in the last column is the maximum mass of neutron stars in each model, and \(R\) is the radius at M\({}_{\rm NS}^{\rm Max}\). We should note that the radius could vary according to the treatment of the neutron star crust. We calculated the radii of E0002 and E0012 using the Shen EoS table for the crust. As for the other entries, the detailed information on the crust treatment can be found in the references (Krastev & Sammarruca, 2006; Schwenk, 2013; Bauswein & Janka, 2012; Ban et al., 2004).
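The neutron star masses and radii quoted in these tables follow from integrating the Tolman-Oppenheimer-Volkoff (TOV) equation with each EoS. A minimal sketch of that integration, assuming a simple polytropic toy EoS in geometrized units (G = c = 1, lengths in km) rather than any of the compiled EoSs:

```python
import numpy as np
from scipy.integrate import solve_ivp

MSUN_KM = 1.4766          # solar mass in geometrized units [km]
K, GAMMA = 100.0, 2.0     # toy polytrope P = K * eps^Gamma (assumed, not a real EoS)

def eps_of_P(P):
    """Invert the polytrope to get the energy density from the pressure."""
    return (np.maximum(P, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    """TOV structure equations: y = (pressure, enclosed mass)."""
    P, m = y
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dPdr, dmdr]

def surface(r, y):              # stop the integration where the pressure vanishes
    return y[0] - 1e-12
surface.terminal = True

def mass_radius(eps_c):
    """Integrate outward from a central energy density eps_c [km^-2]."""
    Pc = K * eps_c**GAMMA
    sol = solve_ivp(tov_rhs, [1e-6, 100.0], [Pc, 0.0],
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.y[1, -1] / MSUN_KM, sol.t[-1]   # (mass [M_sun], radius [km])

# Scan central densities and pick the maximum-mass configuration.
masses, radii = zip(*(mass_radius(ec) for ec in np.geomspace(1e-4, 1e-2, 40)))
i = int(np.argmax(masses))
print(f"M_max = {masses[i]:.2f} M_sun at R = {radii[i]:.1f} km")
```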
In Table 3, the first column denotes the adopted framework which is either relativistic or non-relativistic. The acronyms \"VM\", \"BHF\", and \"DBHF\" in the second column mean Variational Method, Brueckner-Hartree-Fock, and Dirac-Brueckner-Hartree-Fock, respectively.
Figure 4: Comparisons of the energy per baryon or the symmetry energy as functions of baryon density for different sets of nuclear EoSs. The upper and middle panels show the energy of pure neutron matter and symmetric nuclear matter, respectively. Note that the Shen EoS includes the inhomogeneous phase at densities lower than the saturation density, while the others are calculations for simple uniform matter. In the upper and lower panels, the red solid line is the Lattimer-Swesty EoS with an incompressibility of \(K=375\) MeV, while the green dashed line represents the Shen EoS with nucleons only. In the middle panel, the three options of the Lattimer-Swesty EoS are plotted as the red solid line (\(K=375\) MeV), the green dashed line (\(K=220\) MeV) and the blue dotted line (\(K=180\) MeV), in comparison with the Shen EoS. The blue error bar in the lower panel shows the experimental constraint.
The label "\(NN\)" in the third column denotes the nucleon-nucleon interaction, while "\(NNN\)" in the last column means the three-body interaction. In Tables 4 and 5, the first column denotes the adopted framework, which is either relativistic or non-relativistic. The acronyms "SHF", "RMF", "RHF+QMC" and "DDRMF" represent the methods of calculation and correspond to Skyrme Hartree-Fock, Relativistic Mean Field, Relativistic Hartree-Fock with the Quark-Meson Coupling model, and Density Dependent RMF, respectively.
In the left panels of Figure 5, we present three models within the RMF theory that take hyperons (\(Y\)) into account and have been compiled in the EOSDB. These three models have the following characteristics. The data labeled "H.Shen" (Shen et al. 2011b) contain only \(\Lambda\) as hyperons, using the RMF parameter set _TM1_. For the data labeled "Ishizuka" (Ishizuka et al. 2008), we use the repulsive 30 MeV case of the \(\Sigma N\) interaction model, which gives good agreement with the \({}^{28}\)Si(\(\pi^{-}\), \(K^{+}\)) reaction of the KEK E438 experiment (Maekawa et al. 2007). We calculated a new dataset to remove the contribution of leptons, but omitted the inhomogeneous phase for simplicity. These two EoSs are based on the same parameter set _TM1_ for the nuclear part. As shown in the bottom left panel, they show the same behavior as each other in the \(Y_{C}\) = 0.5 case. On the contrary, the difference between these models increases with density in the \(Y_{C}\) = 0 case (top left panel). This is because the Ishizuka EoS contains more neutral \(n\) and \(\Lambda\) than the Shen EoS, due to the inclusion of the other charged hyperons.
We display another EoS with hyperons to examine the possibility of distinguishing different constituents within multiple theoretical frameworks when an EoS determined from observational masses and radii of neutron stars is provided. As seen from the top left panel of Fig. 5, the discrepancy caused by the constituents is more significant than that caused by different parameters at \(\rho_{\rm B}\gtrsim 3\rho_{0}\). The data of "Miyatsu" (Miyatsu et al. 2012, private communication) are based on calculations using a relativistic Hartree-Fock method with a quark-meson coupling model. It also contains the full baryon octet, as does the Ishizuka EoS. The data of Miyatsu and Ishizuka show similar behavior below the saturation density, as shown in the left panels, while they do not agree with the result of the Shen EoS. This is because the inhomogeneous phase in the Shen EoS gives a lower energy of the system than the others, which are based on uniform matter calculations. In fact, the Shen EoS and the Ishizuka EoS are consistent with each other at densities higher than the saturation density in the case of nucleon matter. Another difference between Miyatsu and the other RMF EoSs is the Fock term at high densities, which is neglected in the RMF models. This effect gives an EoS stiff enough to support a massive neutron star, which is also consistent with the Shapiro delay observations (Demorest et al. 2010), as seen in the upper left panel.
In the right panels of Fig. 5, we compare two ab initio calculations. The data labeled "APR-Full" are taken from tables shown in Akmal et al. (1998). The APR EoS is representative of ab initio calculations based on the fundamental interactions of nuclear many-body systems. They use the variational method with two- and three-body interactions and a Lorentz boost for the relativistic correction. The data labeled "Kanzawa" (Kanzawa et al. 2009) follow the APR EoS scheme; these data are scanned from the viewgraphs. In both the case of pure neutron matter (the top right panel) and that of symmetric nuclear matter (the bottom right panel), these models show almost the same properties, except for a small gap. As for the APR data, we adopt the values before adjusting the binding energy to the empirical value of \(-16\) MeV at \(\rho_{\rm B}=0.16\) fm\({}^{-3}\), in order to include the many-body corrections and the other corrections to their EoSs separately. This information is necessary to estimate the influence of each component on the EoSs. On the other hand, the Kanzawa EoS is given with the adjustment, because only data with the correction are provided in the paper. The correction term in the Kanzawa EoS satisfies the same condition required in the APR EoS. From the comparison of these EoSs with and without the correction of the binding energy, we see that enough attention must be paid to the criteria of each dataset when collecting figures from tables in published articles. At present, such saturation properties calculated by ab initio models have been used as a fitting condition even in other ab initio calculations such as lattice QCD. Deriving the saturation properties from experiments has a mutual dependence on nuclear models, similar to the case of the symmetry energy. Such dependences are inevitable for ab initio calculations. This problem is expected to be resolved by the improvement of experimental techniques and high performance computers.
In summary, we demonstrated how differences among theoretical models, assumptions and constituents affect the basic properties of nuclear matter, using the visualization tools of the EOSDB. In addition, exploring the relationships between two physical quantities may lead us to find important diagnostics to constrain EoSs from different viewpoints. Moreover, the visualizations reveal the overall picture of the model dependence of various EoSs. In Fig. 5, it has been confirmed that the RMF models give stiffer EoSs than the non-relativistic ab initio EoSs, especially above the saturation density \(\rho_{\rm B}\sim 0.16\) fm\({}^{-3}\). It has also been confirmed that the incompressibility in the RMF models is larger than that in the ab initio models. From the comparison between the left and right panels, we find that the present models agree well with these patterns by checking the density dependence of the energy in symmetric nuclear matter and the curvature at \(\rho_{0}\).
Figure 5: Comparison of ab initio EoSs and phenomenological relativistic EoSs using the data downloaded from the EOSDB data retrieval system.
\\begin{table}
\\begin{tabular}{l l l l l} \\hline Data ID & Code & Constituents & & \\\\ & (Reference) & & & \\\\ \\(\\rho_{0}[{\\rm fm^{-3}}]\\) & \\(E_{0}\\) [MeV] & \\(E_{\\rm sym}\\) [MeV] & \\(L\\) [MeV] & \\(K\\) [MeV] \\\\ \\hline E0001 & GShenPRC2011\\_FSUgold2.1 & \\(n,p,\\alpha,A\\) & & \\\\ & (Shen et al. 2011a) & & & \\\\
0.148 & -16.30 & 32.59 & 60.5 & 230 \\\\ \\hline E0002 & HShenNPA1998 & \\(n,p,\\alpha,A\\) & & \\\\ & (Shen et al. 1998) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\\\ \\hline E0003 & HShenAPJS2011\\_N & \\(n,p,\\alpha,A\\) & & \\\\ & (Shen et al. 2011b) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\\\ \\hline E0004 & HShenAJPS2011\\_Y & \\(n,p,\\alpha,A,\\Lambda\\) & & \\\\ & (Shen et al. 2011b) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\ \hline E0005 & LattimerNPA1991\_LS180 & \(n,p,\alpha,A\) & & \\ & (Lattimer \& Swesty 1991) & & \\
0.155 & -16 & 29.3 & 74 & 180 \\ \hline E0006 & LattimerNPA1991\_LS220 & \(n,p,\alpha,A\) & & \\ & (Lattimer \& Swesty 1991) & & \\
0.155 & -16 & 29.3 & 74 & 220 \\ \hline E0007 & LattimerNPA1991\_LS375 & \(n,p,\alpha,A\) & & \\ & (Lattimer \& Swesty 1991) & & \\
0.155 & -16 & 29.3 & 74 & 375 \\ \hline E0008 & HempelNPA2010\_TMA & \(n,p,\alpha,A\) & & \\ & (Hempel \& Schaffner-Bielich 2010) & & \\
0.147 & -16.03 & 30.66 & 90.14 & 318 \\\\ \\hline E0009 & MiyatsuPLB2012 & \\(n,p,\\Lambda,\\Sigma^{0,\\pm},\\Xi^{\\pm}\\) & & \\\\ & (Miyatsu et al. 2012) & & & \\\\
0.15 & -15.7 & 32.5 & 88.7 & 280 \\\\ \\hline E0010 & KanzawaPTP2009 & \\(n,p\\) & & \\\\ & (Kanzawa et al. 2009) & & & \\\\
0.16 & -16.09 & 30.0 & — & 250 \\\\ \\hline \\end{tabular}
* The corrected values when we adjust the binding energy to -16 [MeV].
* Experimental constraint on EoSs.
* Observational constraint on EoSs.
* Data ready to be compiled as of 24 Sep. 2013.
\\end{table}
Table 1: List of Compiled data in the EOSDB
\\begin{table}
\\begin{tabular}{l l l l l} \\hline Data ID & Code & Constituents & & \\\\ & (Reference) & & & \\\\ \\(\\rho_{0}[{\\rm fm}^{-3}]\\) & \\(E_{0}\\) [MeV] & \\(E_{\\rm sym}\\) [MeV] & \\(L\\) [MeV] & \\(K\\) [MeV] \\\\ \\hline E0011 & FurusawaApJ2011 & \\(n,p,\\alpha,A\\) & & & \\\\ & (Furusawa et al. 2011) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\\\ \\hline E0012 & IshizukaJPG2008\\_SR30 & \\(n,p,\\alpha,A,\\Lambda,\\Sigma^{0,\\pm},\\Xi^{\\pm}\\) & & \\\\ & (Ishizuka et al. 2008) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\\\ \\hline E0013 & HillebrandtAA1984 & \\(n,p,A\\) & & \\\\ & (Hillebrandt et al. 1984) & & & \\\\
0.155 & -16.0 & 32.9 & — & 263 \\\\ \\hline E0014 & TimmesAPJS1999 & Helmholtz type EoS & & \\\\ & (Timmes \\& Arnett 1999) & & \\\\ — & — & — & — & — \\\\ \\hline E0015 & NewtonJPC2006 & \\(n,p\\) & & \\\\ & (Newton et al. 2006) & & & \\\\
0.16 & -15.78 & 30.03 & 45.78 & 216.7 \\\\ \\hline E0016 & AkmalPRC1998\\_AV18 & \\(n,p\\) & & \\\\ & (Akmal et al. 1998) & & & \\\\
0.16 & -14.59 & 32.60\\({}^{\\dagger}\\) & 57.6 & 266.0\\({}^{*}\\) \\\\ \\hline E0017 & AkmalPRC1998\\_AV18\\_3BF & \\(n,p\\) & & \\\\ & (Akmal et al. 1998) & & & \\\\
0.16 & -11.85 & 32.60\\({}^{\\dagger}\\) & 57.6 & 266.0\\({}^{*}\\) \\\\ \\hline E0018 & AkmalPRC1998\\_AV18\\_Boost & \\(n,p\\) & & \\\\ & (Akmal et al. 1998) & & & \\\\
0.16 & -12.54 & 32.60\\({}^{\\dagger}\\) & 57.6 & 266.0\\({}^{*}\\) \\\\ \\hline E0019 & AkmalPRC1998\\_AV18\\_3BF\\_Boost & \\(n,p\\) & & \\\\ & (Akmal et al. 1998) & & & \\\\
0.16 & -12.16 & 32.60\\({}^{\\dagger}\\) & 57.6 & 266.0\\({}^{*}\\) \\\\ \\hline E0020 & ZuoNPA2002 & \\(n,p\\) & & \\\\ & (Zuo et al. 2002) & & & \\\\
0.198 & -15.08 & — & — & 207 \\\\ \\hline E0021 & GrossNPA1999 & \\(n,p\\) & & \\\\ \\hline \\end{tabular} \\({}^{*}\\) The corrected values when we adjust the binding energy to -16 [MeV].
\\({}^{\\dagger}\\) Experimental constraint on EoSs.
\\({}^{\\ddagger}\\) Observational constrain on EoSs.
\\({}^{\\lx@sectionsign}\\) Data ready to compiled as of 24, Sep., 2013.
\\end{table}
Table 1: (Continued.)
\\begin{table}
\\begin{tabular}{l l l l l} \\hline Data ID & Code & Constituents & & \\\\ & (Reference) & & & \\\\ \\(\\rho_{0}\\)[fm\\({}^{-3}\\)] & \\(E_{0}\\) [MeV] & \\(E_{\\rm sym}\\) [MeV] & \\(L\\) [MeV] & \\(K\\) [MeV] \\\\ \\hline & (Gross-Boelting et al. 1999) & & & \\\\
0.185 & -16.15 & 34.36 (at 0.185 fm\({}^{-3}\)) & — & 230 \\ \hline E0022 & vanDalenNPA2004 & \(n\),\(p\) & & \\ & (van Dalen et al. 2004) & & \\
0.185 & -16.15 & 34.36 (at 0.185 fm\({}^{-3}\)) & — & 230 \\ \hline E0023 & TypelNPA1999 & \(n\),\(p\) & & \\ & (Typel \& Wolter 1999) & & \\
0.153 & -16.247 & 33.39 & — & 240 \\\\ \\hline E0024 & NiksicPRC2002\\({}^{\\dagger}\\) & \\(n\\),\\(p\\) & & \\\\ & (Nikšić et al. 2002) & & & \\\\
0.152 & -16.20 & 33.1 & 55 & 244.5 \\\\ \\hline E0025\\({}^{\\lx@sectionsign}\\) & SteinerPRC2005 & \\(n\\),\\(p\\) & & \\\\ & (Steiner \\& Li 2005) & & \\\\
0.16 & -16 & 31.6 & 107.4 & 211 \\\\ \\hline E0026\\({}^{\\lx@sectionsign}\\) & TsangPRC2012\\({}^{\\dagger}\\) & — & & \\\\ & (Tsang et al. 2012) & & & \\\\
0.16 & — & \\(30\\leq E_{sym}\\leq\\)34.4 & \\(45\\leq L\\leq 110\\) & — \\\\ \\hline E0027\\({}^{\\lx@sectionsign}\\) & DanielewiczRSEPSN2002\\({}^{\\dagger}\\) & — & & \\\\ & (Danielewicz 2002) & & & \\\\ — & — & — & 300 \\\\ \\hline E0028\\({}^{\\lx@sectionsign}\\) & SteinerPRL2012\\({}^{\\ddagger}\\) & only \\(n\\) & & \\\\ & (Steiner \\& Gandolfi 2012) & & \\\\ — & -16 & \\(32\\leq E_{sym}\\leq\\) 34 & \\(43\\leq L\\leq 52\\) & — \\\\ \\hline E0029\\({}^{\\lx@sectionsign}\\) & OnoPTPS2002 & \\(n\\),\\(p\\) & & \\\\ & (Ono 2002) & & & \\\\ — & -16.32 & 30.8 & — & 228.0 \\\\ \\hline E0030\\({}^{\\lx@sectionsign}\\) & FriedmanNPA1981 & \\(n\\),\\(p\\) & & \\\\ & (Friedman \\& Pandharipande 1981) & & \\\\
0.16 & — & — & 240 \\\\ \\hline E0031\\({}^{\\lx@sectionsign}\\) & CarlsonPRC2003 & — & & \\\\ & (Carlson et al. 2003) & & \\\\ \\hline \\end{tabular} \\({}^{\\star}\\) The corrected values when we adjust the binding energy to –16
[MeV].
\\({}^{\\dagger}\\) Experimental constraint on EoSs.
\\({}^{\\ddagger}\\) Observational constrain on EoSs.
\\({}^{\\lx@sectionsign}\\) Data ready to compiled as of 24, Sep., 2013.
\\end{table}
Table 1: (Continued.)
\\begin{table}
\\begin{tabular}{l l l l l} \\hline Data ID & Code & Constituents & & \\\\ & (Reference) & & & \\\\ \\(\\rho_{0}[{\\rm fm}^{-3}]\\) & \\(E_{0}\\) [MeV] & \\(E_{\\rm sym}\\) [MeV] & \\(L\\) [MeV] & \\(K\\) [MeV] \\\\ \\hline — & — & — & — & — \\\\ \\hline E0032\\({}^{\\lx@sectionsign}\\) & GezerlisPRC2010 & only \\(n\\) & & \\\\ & (Gezerlis \\& Carlson 2010) & & & \\\\ — & — & — & — & — \\\\ \\hline E0033\\({}^{\\lx@sectionsign}\\) & GandolfiPRC2009 & only \\(n\\) & & \\\\ & (Gandolfi et al. 2009) & & & \\\\ — & — & — & — & — \\\\ \\hline E0034\\({}^{\\lx@sectionsign}\\) & GandolfiPRL2007 & only \\(n\\) & & \\\\ & (Gandolfi et al. 2007) & & & \\\\ — & — & — & — & — \\\\ \\hline E0035\\({}^{\\lx@sectionsign}\\) & SchwenkPRL2005 & only \\(n\\) & & \\\\ & (Schwenk \\& Pethick 2005) & & \\\\
0.16 & — & — & — & — \\\\ \\hline E0036 & BotvinaNPA2010 & n, p, \\(\\alpha\\), A & & \\\\ & (Buyukcizmeci et al. 2013) & & & \\\\
0.145 & -16.3 & 36.9 & 110.8 & 281 \\\\ \\hline \\multicolumn{5}{l}{\\({}^{\\star}\\) The corrected values when we adjust the binding energy to -16} \\\\ \\multicolumn{5}{l}{[MeV].} \\\\ \\multicolumn{5}{l}{\\({}^{\\dagger}\\) Experimental constraint on EoSs.} \\\\ \\multicolumn{5}{l}{\\({}^{\\ddagger}\\) Observational constrain on EoSs.} \\\\ \\multicolumn{5}{l}{\\({}^{\\lx@sectionsign}\\) Data ready to compiled as of 24, Sep., 2013.} \\\\ \\end{tabular}
\\end{table}
Table 1: (Continued.)
\\begin{table}
\\begin{tabular}{l l l} \\hline Data table category & Item & Note \\\\ \\hline Bibliography & Title & \\\\ & Authors & \\\\ & Reference & \\\\ \\hline Attribute & Theory & For pure theoretical arguments \\\\ & Pure Experiment & Experimental constraints on EoSs \\\\ & Analysis & Theoretical analysis of experimental results \\\\ \\hline Constituents & N, Y, \\(\\alpha\\), A, Q, L & particles or nuclei \\\\ \\hline Method & Model & Theoretical framework \\\\ & Approximation & \\\\ \\hline Physics Constants & \\(\\hbar\\),\\(c\\), amu, etc. & \\\\ \\hline EoS for SNM\\({}^{*}\\) & \\(\\rho_{\\rm B}\\) & Baryon density \\\\ & \\(Y_{\\rm C}\\) & Charge Ratio \\(Y_{\\rm C}=0.5\\) \\\\ & \\(E/B\\) & Energy per baryon \\\\ & \\(P\\) & Pressure \\\\ & \\(S\\) & Entropy \\\\ \\hline EoS for PNM\\({}^{\\dagger}\\) & \\(\\rho_{\\rm B}\\) & Baryon density \\\\ & \\(Y_{\\rm C}\\) & Charge Ratio \\(Y_{\\rm C}=0.0\\) \\\\ & \\(E/B\\) & Energy per baryon \\\\ & \\(P\\) & Pressure \\\\ & \\(S\\) & Entropy \\\\ \\hline Symmetry Energy & \\(E_{\\rm sym}\\) & Symmetry energy \\\\ & \\(L\\) & \\(E_{\\rm sym}\\) slope to baryon density \\\\ & \\(K\\) & Incompressibility \\\\ \\hline \\end{tabular} \\({}^{*}\\) symmetric nuclear matter
\\({}^{\\dagger}\\) pure neutron matter
\\end{table}
Table 2: Data compiled in the EOSDB
\\begin{table}
\\begin{tabular}{l l l l l l l} \\hline \\multicolumn{2}{l}{Ab initio} & & & & & \\\\ \\hline Rel. / Non-rel. & Method & Interaction & Reference & Data ID & Comment \\\\ \\hline Non-rel. & VM & AV18 for NN & AkmalPRC1998 & E0016 & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (1.67M\\({}_{\\odot}\\), 8.2 [km]). \\\\ Non-rel. & VM & AV18 for NN & AkmalPRC1998\\_AV18\\_3BF & E0017 & UUX for NNN. (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.38M\\({}_{\\odot}\\), 10.08 [km]). \\\\ Non-rel. & VM & AV18 for NN & AkmalPRC1998\\_AV18\\_Boost & E0018 & Relativistic Correction included \\\\ & & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (1.80M\\({}_{\\odot}\\), 8.75 [km]) \\\\ Non-rel. & VM & AV18 for NN & AkmalPRC1998\\_AV18\\_3BF\\_Boost & E0019 & UUX for \\(NNN\\). \\\\ & & & & & Relativistic Correction included \\\\ & & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.20M\\({}_{\\odot}\\), 10.04 [km]) \\\\ & & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.30M\\({}_{\\odot}\\), 10.08 [km]) \\\\ & Non-rel. & VM & AV18 for NN & KanzawaPTP2009 & E0010 & UUX for NNN. M\\({}_{\\rm NS}^{\\rm Max}\\) = 2.2M\\({}_{\\odot}\\). \\\\ & Non-rel. & BHF & AV18 for NN & ZuoNPA2002 & E0020 & Tuscon-Melbourne for \\(NNN\\) \\\\ Rel. & DBHF & Bonn-A for NN & Gross1999 & E0021 & Symmetric Matter \\\\ & & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.24M\\({}_{\\odot}\\), 10.78 [km]) for (\\(n\\),\\(p\\),\\(e^{-}\\)) matter \\\\ Rel. & DBHF & Bonn-A for NN & vanDalenNPA2004 & E0022 & Asymmetric Matter \\\\ & & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.24M\\({}_{\\odot}\\), 10.78 [km]) for (\\(n\\),\\(p\\),\\(e^{-}\\)) matter \\\\ \\end{tabular}
\\end{table}
Table 3: Table for classification of ab initio theoretical models.
\\begin{table}
\\begin{tabular}{l l l l l l} \\hline \\multicolumn{2}{l}{Phenomenological} & & & & \\\\ \\hline Rel. / Non-rel. & Method & Interaction & Reference & Data ID & Comment \\\\ \\hline Non-rel. & SHF & SkI’ & LattimerNPA1991\\_LS180 & E0005 & \\(K\\)=180. \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (1.84M\\({}_{\\odot}\\), 10.2 [km]). \\\\ Non-rel. & SHF & SkI’ & LattimerNPA1991\\_LS220 & E0006 & \\(K\\)=220. \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.06M\\({}_{\\odot}\\), 10.85 [km]). \\\\ Non-rel. & SHF & SkI’ & LattimerNPA1991\\_LS375 & E0007 & \\(K\\)=375. \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.72M\\({}_{\\odot}\\), 12.53 [km]). \\\\ Non-rel. & 3Dim. SHF & SKa & HillebrandtA1984 & E0013 & Data shown only in Entry.html \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.21, 11.7 [km]). \\\\ Non-rel. & 3Dim. SHF & SkM\\({}^{*}\\) & NewtonJPC2006 & E0015 & — \\\\ \\hline \\end{tabular}
\\end{table}
Table 4: Table for classification of phenomenological theoretical models.
\\begin{table}
\\begin{tabular}{l l l l l l} \\hline \\multicolumn{2}{l}{Phenomenological} & & & \\\\ \\hline Rel. / Non-rel. & Method & Interaction & Reference & Data ID & Comment \\\\ \\hline Rel. & RMF & TM1(Only N) & HShenNPA1998 & E0002 & Thomas-Fermi apprx. \\\\ & & & & & for inhomo. phase. \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm MS}\\), R) = (2.18M\\({}_{\\odot}\\), 12.5 [km]). \\\\ Rel. & RMF & TM1(Only N) & HShenAPJS2011\\_N & E0003 & Different from E0002 at \\((T,Y_{p})=(0,0)\\). \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.18M\\({}_{\\odot}\\), 12.5 [km]). \\\\ Rel. & RMF & TM1(Only N) & FurusawaApJ2011 & E0011 & NSE for inhomo. phase \\\\ Rel. & RMF & TM1(Only N) & BotvinaNPA2010 & E0010 & NSE for inhomo. phase \\\\ Rel. & RMF & TM1(with Y) & HShenAPJS2011\\_Y & E0004 & Only \\(\\Lambda\\) included as hyperons. \\\\ & & & & M\\({}_{\\rm NS}^{\\rm Max}\\)= 1.75M\\({}_{\\odot}\\). \\\\ Rel. & RMF & TM1(with Y) & IshizukaJPG2008\\_SR30 & E0012 & Full Baryon Octet. \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = 1.63M\\({}_{\\odot}\\), 13.26 [km]). \\\\ Rel. & RMF & TMA & HempelNPA2010\\_TMA & E0008 & NSE for infomo. phase \\\\ & & & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.04M\\({}_{\\odot}\\), 12.43 [km]) \\\\ Rel. & RMF(RHF+QMC) & — & MiyatsuPLB2012 & E0009 & Full Baryon Octet. M\\({}_{\\rm NS}^{\\rm Max}\\)= 1.95M\\({}_{\\odot}\\). \\\\ Rel. & DD RMF & DD-TW & TypelNPA1999 & E0023 & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.2M\\({}_{\\odot}\\), 11.2 [km]). \\\\ Rel. & DD RMF & DD-ME1 & Nikš PRC2002 & E0024 & (M\\({}_{\\rm NS}^{\\rm Max}\\)= 2.47M\\({}_{\\odot}\\), 11.9 [km]). \\\\ Rel. & DD RMF & FSUgold & GShenPRC2011\\_FSUgold2.1 & E0001 & Adjusted to support 2.1M\\({}_{\\odot}\\) NS. \\\\ & & + Polytrope & & (M\\({}_{\\rm NS}^{\\rm Max}\\), R) = (2.1M\\({}_{\\odot}\\), 12.2 [km]) \\\\ \\hline \\end{tabular}
\\end{table}
Table 5: Table for classification of phenomenological theoretical models.
## 5 Summary and future prospects
We have constructed a database of nuclear EoSs (EOSDB), which is available online. The database includes information on experimental or theoretical details, the energy, pressure, entropy, symmetry energy, the derivative of the symmetry energy with respect to baryon density, and the incompressibility as functions of baryon density. These data are taken from published papers, with the help of software to scan viewgraphs. The search and plot system has been converted from the SAGA database, which deals with observed metal-poor stars in the Galaxy. The system enables the retrieval and plotting of data selected according to various criteria featuring nuclear saturation properties. Our sample includes 36 datasets, mainly for symmetric nuclear matter, pure neutron matter and its constraints (the symmetry energy) at \(T\!=\!0\) MeV. The list of the compiled data is presented in tabular format. A summary of the theoretical models compiled in our database, together with the derived maximum masses of neutron stars, is also presented according to the classification of theoretical models.
The EOSDB can help in examining various EoSs because the data are provided in a unified format. However, users should note that these data are based on different assumptions and models, which may cause problems when comparing EoSs without understanding their details. One of the future updates will add query options according to models and methods in the search and plot system. This will elucidate the model dependence of EoSs, and the physics behind them, qualitatively and quantitatively in a more efficient way. We will report in a forthcoming paper a benchmark test for various EoSs using the EOSDB. Another future update will be to try to receive data directly from the authors of the papers, instead of scanning viewgraphs, to guarantee the quality of the compiled data.
We demonstrated the model dependence of EoSs using the EOSDB and find that theoretical EoSs commonly used in astrophysics have difficulty in satisfying the experimental constraints on both the incompressibility and the symmetry energy simultaneously. This suggests that we need more sophisticated models to deal with nuclei; for example, a compound system of baryons, which spans a wide range of sizes and energies, should be treated in both static and dynamical contexts.
The EOSDB can be an even more powerful tool with the help of future observations of neutron stars. In this paper, we also summarize the theoretical masses and radii of cold neutron stars, although these could depend on the treatment of the neutron star crust. We will soon report the details of this dependence. It is currently very difficult to measure the mass and radius of a neutron star simultaneously; for isolated neutron stars or magnetars, the surface temperature and radius (and possibly the magnetic field) can be measured, while their masses cannot be determined in such systems. The mass can be determined, with uncertainties associated with the inclination angle of the orbital plane, for binary neutron stars. A possible case in which to measure both the mass and the radius will be X-ray binaries with weak magnetic fields. In this case, information on the distance (or redshift), the temperature, and possibly the surface gravity of the neutron star is required. Still, we should keep in mind that these values contain ambiguities in the absorption lines used, the assumption of blackbody radiation, and the atmosphere models. Therefore, to increase the quality of the observed parameters, we need more samples to compare, which will be achieved by future observations of neutron stars. ASTRO-H, which is scheduled for launch in 2015, will enable us to analyze the 4.1 keV absorption line of the neutron star X-ray transient 4U 1608-52 thanks to its high resolution spectra. The observations of this object were so far performed only with Tenma. The Neutron star Interior Composition ExploreR (NICER), which will be launched in 2016, enables rotation-resolved spectroscopy of the thermal and non-thermal emission of neutron stars. In the early 2020s, the Large Observatory for X-ray Timing (LOFT) is also proposed. The EOSDB has the potential to include these data in future updates. We are planning to compile more detailed information about neutron stars, such as binarity, mass, radius, and magnetic field, for the thousands of neutron stars that have already been observed in our Galaxy, which may give us an opportunity to discuss the EoSs and neutron stars from a new perspective.
The authors thank Ken'ichiro Nakazato, Matthias Hempel, Shun Furusawa, Nihal Buyukcizmeci, Igor Mishustin and Tsuyoshi Miyatsu for kindly providing data for their theoretical models. T. S. and C. I. thank Masayuki Y. Fujimoto for fruitful discussions on neutron star masses and radii, and Jirina R. Stone, Thomas Klaehn and Micaela Oestel for useful discussions on the database. C. I. has been supported by the Sasagawa Grants for Scientific Fellows of the Japan Science Society, F12-208. This work has been partially supported by Grants-in-Aid for Scientific Research on Innovative Areas (20105001, 24105001) and by Grants-in-Aid for Scientific Research (22540296, 23224004, 24244036, 26105515, 26104006) from the Japan Society for the Promotion of Science. The authors appreciate the support of the Tokyo University of Science and the Research Center for Nuclear Physics, Osaka University, for the use of the database server.
## References
* Akmal et al. (1998) Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. 1998, Phys. Rev. C, 58, 1804
* Antoniadis et al. (2013) Antoniadis, J. et al. 2013, Science, 340, 448
* Aoki et al. (2012) Aoki, S. et al. 2012, Prog. Theor. Expr. Phys., 01A105
* Ban et al. (2004) Ban, S. F., Li, J., Zhang, S. Q., Jia, H. Y., Sang, J. P., & Meng, J. 2004, Phys. Rev. C, 69, 045805
* Bauswein & Janka (2012) Bauswein, A. & Janka, H.-T. 2012, Physical Review Letters, 108, 011101
* Bauswein et al. (2012) Bauswein, A., Janka, H.-T., Hebeler, K., & Schwenk, A. 2012, Phys. Rev. D, 86, 063001
* Buyukcizmeci et al. (2013) Buyukcizmeci, N., et al. 2013, Nuclear Physics A, 907, 13
* Carlson et al. (2003) Carlson, J., Morales, J., Pandharipande, V. R., & Ravenhall, D. G. 2003, Phys. Rev. C, 68, 025802
* Chen et al. (2010) Chen, L.-W., Ko, C. M., Li, B.-A., & Xu, J. 2010, Phys. Rev. C, 82, 024321
* Cheoun et al. (2013) Cheoun, M.-K., Deliduman, C., Gungor, C., Keles, V., Ryu, C. Y., Kajino, T., & Mathews, G. J. 2013, JCAP, 10, 21
* Colo' & van Giai (2004) Colo', G. & van Giai, N. 2004, Nuclear Physics A, 731, 15
* Danielewicz (2002) Danielewicz, P. 2002, Ricerca Scientifica ed Educazione Permanente Supplemento N., 119, 66
* Demorest et al. (2010) Demorest, P. B., Pennucci, T., Ransom, S. M., Roberts, M. S. E., & Hessels, J. W. T. 2010, Nature, 467, 1081
* Friedman & Pandharipande (1981) Friedman, B. & Pandharipande, V. R. 1981, Nuclear Physics A, 361, 502
* Fuchs & Wolter (2006) Fuchs, C. & Wolter, H. H. 2006, European Physical Journal A, 30, 5
* Furusawa et al. (2011) Furusawa, S., Yamada, S., Sumiyoshi, K., & Suzuki, H. 2011, ApJ, 738, 178
* Gandolfi et al. (2009) Gandolfi, S., Illarionov, A. Y., Schmidt, K. E., Pederiva, F., & Fantoni, S. 2009, Phys. Rev. C, 79, 054005
* Gandolfi et al. (2007) Gandolfi, S., Pederiva, F., Fantoni, S., & Schmidt, K. E. 2007, Physical Review Letters, 98, 102503
* Gezerlis & Carlson (2010) Gezerlis, A. & Carlson, J. 2010, Phys. Rev. C, 81, 025803
* Gross-Boelting et al. (1999) Gross-Boelting, T., Fuchs, C., & Faessler, A. 1999, Nuclear Physics A, 648, 105
* Hempel & Schaffner-Bielich (2010) Hempel, M. & Schaffner-Bielich, J. 2010, Nuclear Physics A, 837, 210
* Hillebrandt et al. (1984) Hillebrandt, W., Nomoto, K., & Wolff, R. G. 1984, A&A, 133, 175
* Hotokezaka et al. (2013) Hotokezaka, K., Kiuchi, K., Kyutoku, K., Okawa, H., Sekiguchi, Y.-i., Shibata, M., & Taniguchi, K. 2013, Phys. Rev. D, 87, 024001
* Ishizuka et al. (2008) Ishizuka, C., Ohnishi, A., Tsubakihara, K., Sumiyoshi, K., & Yamada, S. 2008, Journal of Physics G Nuclear Physics, 35, 085201
* Kanzawa et al. (2009) Kanzawa, H., Takano, M., Oyamatsu, K., & Sumiyoshi, K. 2009, Progress of Theoretical Physics, 122, 673
* Krastev & Sammarruca (2006) Krastev, P. G. & Sammarruca, F. 2006, Phys. Rev. C, 74, 025808
* Lattimer & Douglas Swesty (1991) Lattimer, J. M. & Douglas Swesty, F. 1991, Nuclear Physics A, 535, 331
* Lattimer & Prakash (2010) Lattimer, J. M. & Prakash, M. 2010, ArXiv e-prints
* Maekawa et al. (2007) Maekawa, H., Tsubakihara, K., & Ohnishi, A. 2007, European Physical Journal A, 33, 269
* Masuda et al. (2013) Masuda, K., Hatsuda, T., & Takatsuka, T. 2013, ApJ, 764, 12
* Miyatsu et al. (2012) Miyatsu, T., Katayama, T., & Saito, K. 2012, Physics Letters B, 709, 242
* Newton et al. (2006) Newton, W. G., Stone, J. R., & Mezzacappa, A. 2006, Journal of Physics Conference Series, 46, 408
* Niksic et al. (2002) Niksic, T., Vretenar, D., & Ring, P. 2002, Phys. Rev. C, 66, 064302
* Ono (2002) Ono, A. 2002, Progress of Theoretical Physics Supplement, 146, 378
* Potekhin & Yakovlev (2012) Potekhin, A. Y. & Yakovlev, D. G. 2012, Phys. Rev. C, 85, 039801
* Schwenk (2013) Schwenk, A. 2013, Journal of Physics Conference Series, 445, 012009
* Schwenk & Pethick (2005) Schwenk, A. & Pethick, C. J. 2005, Physical Review Letters, 95, 160401
* Shen et al. (2011a) Shen, G., Horowitz, C. J., & Teige, S. 2011a, Phys. Rev. C, 83, 035802
* Shen et al. (1998) Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 1998, Nuclear Physics A, 637, 435
* Shen et al. (2011b) Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 2011b, ApJS, 197, 20
* Steiner & Gandolfi (2012) Steiner, A. W. & Gandolfi, S. 2012, Physical Review Letters, 108, 081102
* Steiner & Li (2005) Steiner, A. W. & Li, B.-A. 2005, Phys. Rev. C, 72, 041601
* Stone et al. (2003) Stone, J. R., Miller, J. C., Koncewicz, R., Stevenson, P. D., & Strayer, M. R. 2003, Phys. Rev. C, 68, 034324
* Suda et al. (2008) Suda, T. et al. 2008, PASJ, 60, 1159
* Suda et al. (2011) Suda, T., Yamada, S., Katsuta, Y., Komiya, Y., Ishizuka, C., Aoki, W., & Fujimoto, M. Y. 2011, MNRAS, 412, 843
* Sulaksono & Agrawal (2012) Sulaksono, A. & Agrawal, B. K. 2012, Nuclear Physics A, 895, 44
* Takano et al. (2010) Takano, M., Togashi, H., & Kanzawa, H. 2010, Progress of Theoretical Physics Supplement, 186, 63
* Timmes & Arnett (1999) Timmes, F. X. & Arnett, D. 1999, ApJS, 125, 277
* Tsang et al. (2012) Tsang, M. B. et al. 2012, Phys. Rev. C, 86, 015803
* Tsubakihara & Ohnishi (2013) Tsubakihara, K. & Ohnishi, A. 2013, Nuclear Physics A, 914, 438
* Typel et al. (2013) Typel, S., Oertel, M., & Klaehn, T. 2013, ArXiv e-prints
* Typel & Wolter (1999) Typel, S. & Wolter, H. H. 1999, Nuclear Physics A, 656, 331
* van Dalen et al. (2004) van Dalen, E. N. E., Fuchs, C., & Faessler, A. 2004, Nuclear Physics A, 744, 227
* Vautherin et al. (1970) Vautherin, D., Veneroni, M., & Brink, D. M. 1970, Physics Letters B, 33, 381
* Weissenborn et al. (2012) Weissenborn, S., Chatterjee, D., & Schaffner-Bielich, J. 2012, Phys. Rev. C, 85, 065802
* Zuo et al. (2002) Zuo, W., Lejeune, A., Lombardo, U., & Mathiot, J. F. 2002, Nuclear Physics A, 706, 418
## A Minimized Mutual Information retrieval for simultaneous atmospheric pressure and temperature
_Prabhat K. Koner and James R. Drummond_
### Introduction
In order for atmospheric temperature and pressure to be inferred from measurements from a Fourier Transform Spectrometer orbiting Mars, the source of emission must be a relatively abundant gas of known and uniform distribution. Otherwise, the uncertainty in the abundance of the gas will make the determination of temperature and pressure from the measurements ambiguous. Fortunately carbon dioxide is present in the Martian atmosphere in uniform abundance for altitudes below about 200 km, and has emission bands in spectral regions that are convenient for measurement.
There are many papers and books that review the retrieval theories developed over the past few decades [1-5]. The consensus is that retrieving an atmospheric profile of (for example) temperature and pressure from spectroscopic measurements from space is an ill-posed problem for which there is no unique solution because (a) the outgoing radiances arise from relatively deep layers of the atmosphere, (b) the radiances observed within various spectral channels come from overlapping layers of the atmosphere and are not vertically independent of each other, and (c) the measurements have errors. As a consequence a large number of approaches have been tried. They differ both in the procedure for solving the set of spectrally independent radiative transfer equations (e.g., matrix inversion, numerical iteration) and in the type of ancillary data used to constrain the solution to ensure a meaningful result (e.g., the use of atmospheric covariance statistics, a hydrostatic constraint, an a priori estimate of the profile structure, etc.). We have already reported in previous papers [6-8] that it is possible to obtain a unique solution under some circumstances with proper attention to the mathematics, proper experiment design and reasonable assumptions in the forward model. We show in this paper that pressure and temperature can be uniquely retrieved from spectroscopic measurements of the Martian atmosphere.
There are two main geometries for these types of measurement: nadir (downward-looking) and limb viewing. Nadir viewing can rely on atmospheric emission, surface emission or reflected solar radiation, depending upon the experiment design. Limb viewing uses atmospheric emission or a stellar or lunar source, usually the sun. Nadir techniques have the potential for good horizontal spatial localisation but worse vertical resolution, whereas limb techniques, particularly solar occultation, have the reverse characteristics: potentially high vertical resolution and poor horizontal localisation. The technique discussed in this paper is solar occultation, chosen for its high sensitivity and vertical resolution.
As mentioned above, there are many pressure and temperature retrieval algorithms [9-13] available for spectroscopic measurements in the Earth's atmosphere. They frequently use parameter reduction with the help of the hydrostatic equation or some other relationship, together with a-priori information. The accuracy of the tangent height - the lowest altitude of the limb path - plays an important role when the problem is constrained by the hydrostatic equation. In the case of Mars it will be more difficult to get good pointing knowledge relative to the surface because of the difficulty of knowing the orbit and the relevant topography precisely. However, the relative spacing of the tangent heights in a series of measurements through the satellite orbit is determined mainly by time and the satellite orbital velocity, both of which are very well known. Thus the relative tangent heights will be well known, but the absolute heights less so.
There are few papers on spectroscopic measurements of the Martian atmosphere. Recently a paper [14] reported on the retrieval of pressure and temperature of Mars from the Mars Climate Sounder (MCS) measurement on the Mars Reconnaissance Orbiter using the Chahine method to get an approximate retrieval. MCS is a nadir-viewing instrument. The paper reported that there is no unique solution for the problem and the results are heavily dependent on the initial guess/constrained profile.
An additional consideration is that dust opacity is significant in the Martian atmosphere to an extent not seen upon Earth and the atmospheric temperature profile of Mars can be strongly influenced by the suspended dust.
Finally, the a-priori information is very poor simply due to our ignorance of the Martian atmosphere.
This paper presents a methodology to solve the radiative transfer retrieval problem in a finite parametric space for an orbiting solar occultation spectrometer probing the Martian atmosphere considering pressure and temperature as two independent variables, no a-priori profiles and no hydrostatic constraint.
### Theoretical Background
The information content of a datum is a relative measure and has quantitative meaning only in a defined framework. The information content of data can be considered with respect to models if compatible models exist. Radiative transfer modelling for atmospheric problems using line-by-line (LBL) calculations is mature. The model consists of a set of targeted parameters embedded in a mathematical framework. A model containing N parameters can be described by a point in an N-dimensional space. The information gained in a measurement can be quantified by the reduction of the volume of the region in hyperspace [15]. The information, H\({}_{\mathrm{hs}}\), in bits, required to reduce region 1 of volume R\({}_{1}\) to region 2 of volume R\({}_{2}\) can be written
\\[H_{{}_{\\mathrm{hs}}}=\\log\\Biggl{(}\\frac{R_{{}_{1}}}{R_{{}_{2}}}\\Biggr{)} \\tag{1}\\]
Since the amount of information is necessarily finite, the regions in hyperspace defined by the model cannot be infinitely large or infinitely small and \"truth\" has to be found within the region.
The evaluation of the volume of R in a high-dimensional space is difficult. If we assume that the points are normally distributed, so that the surfaces of equal probability are hyper-ellipsoids [16], then
\\[\\mathbf{x}^{T}\\mathbf{V}^{-1}\\mathbf{x}=c \\tag{2}\\]
where \\(\\mathbf{x}\\) is any displacement vector, \\(\\mathbf{V}\\) is the covariance matrix for the parameters, and c is a constant that defines a particular equal-probability surface. The \"volume\" R of the region bounded by the surface defined by Eqn(2) is [15, 17]
\[R=\frac{(\pi c)^{N/2}}{\Gamma(N/2+1)}\sqrt{|\mathbf{V}|} \tag{3}\]
where \\(\\Gamma\\) is the gamma function, \\(\\left|\\right|\\) is the determinant of the matrix, and N is the dimensionality of the space or variables.
Combining Eqn(1) & Eqn(3), H\\({}_{\\mathrm{hs}}\\) can be written as
\[H_{{}_{\mathrm{hs}}}=\frac{1}{2}\log_{2}\frac{|\mathbf{V}_{1}|}{|\mathbf{V}_{2}|} \tag{4}\]
This is the same formula as that obtained directly from the formulation of Shannon [18] when applied to a distribution of estimators that is multivariate normal. Consider, for example, the measure of entropy for the parameter space \(\mathbf{x}\), i.e.,
\\[\\left\\langle\\log\\left[f(x)\\right]\\right\\rangle=\\int f(x)\\log\\left[f(x)\\right]dx \\tag{5}\\]
where \\(\\left\\langle.\\right\\rangle\\) is the expected value and \\(f(x)\\) is the probability density function for the model parameters. The information content of an experiment can then be defined as the entropy based on the prior distribution of x minus the entropy based on the posterior distribution (i.e., that based on applying Bayes' rule to the observed data and the prior distribution). Under multivariate normality, the entropy is a constant \\(+(1/2)\\) [log(det V)], where V is the covariance matrix for the distribution of x. Thus, if V\\({}_{1}\\) and V\\({}_{2}\\) are equivalent to the prior and posterior covariances, respectively, a result identical to Eqn (4) can be obtained.
For a forward model \(\mathbf{y=Kx+\delta}\), the covariance matrix V is usually computed as \(\mathbf{V=\delta^{2}(K^{T}K)^{-1}}\), where \(\mathbf{K}\) is the forward operator and \(\delta\) is the estimate of the standard deviation of the observation errors. \(\mathbf{K}\) is conventionally called the Jacobian matrix of the linear, vector-valued function/equation. Under the assumption that the error in all measurements is the same, H\({}_{\text{hs}}\) can also be defined in terms of K as
\\[H_{\\text{hs}}=\\frac{1}{2}\\log_{2}\\frac{\\left|\\mathbf{K}_{2}^{T}\\mathbf{K}_{2} \\right|}{\\left|\\mathbf{K}_{1}^{T}\\mathbf{K}_{1}\\right|} \\tag{6}\\]
The determinant of \(\mathbf{K}^{T}\mathbf{K}\) approaches zero when the condition number of \(\mathbf{K}^{T}\mathbf{K}\) is high, which may produce an inappropriate result. In such a situation, regularization is introduced. Thus, a better assumption for the covariance is the inverse of the regularized Jacobian under a least squares formulation, \(\mathbf{V=(K^{T}K+\alpha L^{T}L)^{-1}}\), where \(\mathbf{L}\) and \(\alpha\) are the regularization matrix and strength, respectively. The regularization strength can be calculated [19] as \(\alpha=\delta/C\), with \(\|\mathbf{Lx}\|\leq C\), where \(\|\cdot\|\) is the norm. Eqn(6) can then be modified for highly ill-conditioned Jacobians as
\\[H_{\\text{hs}}=\\frac{1}{2}\\log_{2}\\frac{\\left|\\mathbf{K}_{2}^{T}\\mathbf{K}_{2} +\\frac{\\delta}{C}\\mathbf{L}^{T}\\mathbf{L}\\right|}{\\left|\\mathbf{K}_{1}^{T} \\mathbf{K}_{1}+\\frac{\\delta}{C}\\mathbf{L}^{T}\\mathbf{L}\\right|} \\tag{7}\\]
A precise technique for calculating the regularization strength is required for a severely ill-conditioned Jacobian. The region of the hyperspace also depends on the nonlinearity of the problem.
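A minimal sketch of Eqn (7), assuming illustrative Jacobians, a first-difference regularization matrix \(\mathbf{L}\), and \(\alpha=\delta/C\) (all numbers are placeholders, not derived from any instrument):

```python
import numpy as np

def info_bits(K2, K1, delta, C, L):
    """H_hs of Eqn (7): relative information of Jacobian K2 against K1."""
    def reg_logdet(K):
        M = K.T @ K + (delta / C) * (L.T @ L)
        sign, logdet = np.linalg.slogdet(M)
        return logdet
    return 0.5 * (reg_logdet(K2) - reg_logdet(K1)) / np.log(2.0)

N = 16
rng = np.random.default_rng(1)
L = np.eye(N) - np.eye(N, k=1)            # first-difference regularization
L = L[:-1]                                # (N-1) x N operator

K1 = rng.standard_normal((N, N)) * 0.1    # weakly sensitive micro-window
K2 = rng.standard_normal((N, N))          # more sensitive micro-window
print(f"H_hs = {info_bits(K2, K1, delta=1e-2, C=1.0, L=L):.2f} bits")
```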
### Information Analysis
To calculate the information, a model is required. A representative Mars atmosphere is taken from [20]. A CO\({}_{2}\) abundance of 95.32% has been assumed. The HITRAN-2004 [21] spectroscopic database of the gases is used, and the extinction calculations for the gases have been made with the GENSPECT program [22]. We have taken the resolution of the Fourier Transform Spectrometer to be 0.02 cm\({}^{-1}\), and the geometry of the measurement is solar occultation. The signal-to-noise ratio (SNR) is 100, which is somewhat lower than could reasonably be achieved in practice. The effect of refraction, which is very small because the Martian atmosphere is less dense than Earth's, is not considered. We consider a prototype model from 4-50 km, which contains 16 grid points. The tangent points are [4, 6, 8, 10, 12, 14, 16, 18, 20, 22.5, 25.0, 27.5, 30.0, 32.5, 35.0 and 40.0] km.
The search space of the proposed problem is wide; for example, the atmospheric pressure varies from 0.1 Pa to 600 Pa and the temperature varies from 60 K to 300 K. If we consider a logarithmic scale for the state-space parameters, then the range of the search space is from -2.3 to 6.2 for pressure and from 4.1 to 5.7 for temperature. For this reason we perform the calculations in a logarithmic state space.
The length of the tangent-level sub-path is very large compared to the other sub-paths in a solar occultation measurement, which restricts the opportunity to extract any information from the other sub-paths using any retrieval scheme. To reduce the computational cost, we consider one signal value per solar occultation spectrum by integrating the transmittance of a micro-window as \(\sum\limits_{i=1}^{N_{\rm c}}(1-\tau_{i})\), where N\({}_{\rm c}\) is the number of channels in the micro-window and \(\tau\) is the transmittance. This also reduces the nonlinearity and increases the effective SNR to 800 (\(\approx 10^{3}\)), since uncorrelated channel noise averages down by a factor of \(\sqrt{N_{\rm c}}=8\), without altering the spectroscopic properties in the retrievals.
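The micro-window compression and the associated noise averaging can be sketched as follows; the channel transmittances here are synthetic placeholders, not GENSPECT output:

```python
import numpy as np

Nc, SNR = 64, 100.0                 # channels per micro-window, per-channel SNR
rng = np.random.default_rng(2)

tau = 0.9 * np.ones(Nc)             # synthetic channel transmittances (assumed)
noise = rng.standard_normal(Nc) / SNR

# One signal value per occultation spectrum: sum of (1 - tau_i) over the window.
signal = np.sum(1.0 - (tau + noise))

# Uncorrelated channel noise averages down by sqrt(Nc), so the effective
# SNR of the integrated quantity grows by a factor of 8 (100 -> 800).
print("signal:", signal, "effective SNR gain:", np.sqrt(Nc))
```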
We select 140 MWs with a width of 1.3 cm\({}^{-1}\) (64 channels at a resolution of 0.02 cm\({}^{-1}\)) from different parts of the spectrum to understand the problem. These windows are chosen intuitively from the weak-line areas of the CO\({}_{2}\) bands to produce reasonable signals for the present solar occultation measurements and are presented for illustration purposes only. The path length of the solar occultation geometry is very long (\(\sim\)100 km), and measurements at low tangent heights will saturate if the line strengths of the gas lines are high. First, we calculate the two separate Jacobians of pressure and temperature using two different pressure and temperature profiles. One of these profiles is the standard pressure and temperature profile of Mars referred to above. The second profile is a constant profile: T = 280 K, p = 80 Pa.
First we calculate the information content (\(H_{b}\)) using the most popular method, which is based on Bayes' theory [5]:
\[H_{b}=-\frac{1}{2}\ln\left|\mathbf{I}-\mathbf{A}\right| \tag{8}\]
where \(\mathbf{A}=(\mathbf{K}^{T}\mathbf{S}_{e}^{-1}\mathbf{K}+\mathbf{S}_{a}^{-1})^{-1}\mathbf{K}^{T}\mathbf{S}_{e}^{-1}\mathbf{K}\), and \(\mathbf{S}_{a}\) and \(\mathbf{S}_{e}\) are the a-priori and error covariance matrices, respectively. This inverse method is the so-called optimal estimation method (OEM).
Since we do not know the proper a-priori covariance, a 50% diagonal covariance is used for these calculations, with three different SNRs (100, 800, 2400) to understand the problem. Fig. 1 shows the values of \(H_{b}\) to be in the range of 20-80 bits (SNR=800) and 1-50 bits (SNR=100). These values are unreasonably large. According to Shannon [18], in the parameter space one parameter can produce one bit of information and N parameters N bits of information, since the total number of possible states is \(2^{N}\) and \(\log_{2}(2^{N})=N\). It should be mentioned that this is valid when the number of measurements is greater than or equal to the number of state space parameters. In our example N=16, so we expect a maximum of 16 bits of information, whereas the results in Fig. 1 show values up to 100.
Figure 1: Information content using Bayes theory for 140 different micro-windows: Tm100 stands for temperature information at SNR=100; similarly, Pr800 stands for pressure information at SNR=800.
An information content of 60 bits using the same theory has been reported by Dudhia et al. [23], where the problem is very similar to ours (a limb-viewing spectroscopic measurement with a set of 16 parameters), except that the SNR of that problem is low.
To understand the problem, we explore the basic formulation of this method [5], where the posterior covariance is based on the expected total error \\(\\left\\langle\\xi\\right\\rangle\\)
\\[\\left\\langle\\xi\\right\\rangle=(\\mathbf{y-Kx})^{T}\\mathbf{S}_{e}^{-1}(\\mathbf{y -Kx})+(\\mathbf{x-x}_{a})^{T}S_{a}^{-1}(\\mathbf{x-x}_{a}) \\tag{9}\\]
Where \\(\\mathbf{S}_{e}^{-1}and\\mathbf{S}_{a}^{-1}\\)are the error and a-priori covariance's, which are effectively scaling factor for the calculation. When the system is perfectly linear and we know the truth exactly, it is possible to approximate error as \\((\\mathbf{(y-Kx)}^{T}\\mathbf{S}_{e}^{-1}(\\mathbf{y-Kx}))\\). The problem is that these problems are not linear. In that case the measurement errors cannot be constructed by using \\((\\mathbf{y-Kx})\\) because the residual will contain terms from the nonlinear nature of the problem and will therefore not properly represent the measurement errors. Thus, an information calculation using this formulation for any nonlinear problem will produce wrong results. The error injection due to the nonlinearity will be proportional to the nonlinear contribution term multiplied by SNR. At this point we can mention the work of Ceccherini et al [31] where it is stated that \\(\\chi\\)-square does not exist for nonlinear functions.
Spectroscopic measurements generally deal with SNRs of \(10^{2}\)-\(10^{3}\); for the present problem it is \(\sim 10^{3}\) (corresponding to a single-point measurement SNR of \(\sim 10^{2}\)). The nonlinear contribution of the present problem from the term \((\mathbf{y}-\mathbf{Kx})\) is of the order of 0.2. The amount of error inserted into such a calculation is then \(\log_{e}(0.2\times\mathrm{SNR})=\log_{e}(200)\approx 5\). We observed in Fig. 1 that the calculated value of \(H_{b}\) (\(\sim\)80 at SNR=800) is approximately 5 times larger than the number of parameters (16). Thus the measurement error statistics, or the entropy in the measurement, obtained from this calculation are corrupted by the nonlinear contribution. This is the source of the problem. By the same argument, even in the linear case, if the residual due to the a-priori assumption is larger than the measurement noise, it will produce erroneous results. This is not seen in our problem alone: the same unreasonably high information content has been reported in many papers [24-30], which confirms that the radiative transfer problem is nonlinear.
The question arises at this point of why some successful retrievals have been produced using the OEM if the theory is not appropriate for the radiative transfer inverse problem. The answer is that a successful retrieval is possible when the choices of \(\mathbf{S}_{e}^{-1}\) and \(\mathbf{S}_{a}^{-1}\) optimally regularize the problem. To validate this statement, we analyse the OEM formulation in the framework of a regularized least squares paradigm. Under such conditions, the gain matrix (\(\mathbf{G}_{y}\)) in the OEM can be treated as the inverse of the regularized Jacobian. \(\mathbf{G}_{y}\) is then given by
\[\mathbf{G}_{y}=(\mathbf{K}^{T}\mathbf{S}_{e}^{-1}\mathbf{K}+\mathbf{S}_{a}^{-1})^{-1}\mathbf{K}^{T}\mathbf{S}_{e}^{-1} \tag{10}\]
Usually the error covariance is diagonal with identical individual terms. In such cases \(S_{e}\) is a scalar and \(\mathbf{G}_{y}\) can be rewritten as \(\mathbf{G}_{y}=(\mathbf{K}^{T}\mathbf{K}+S_{e}\mathbf{S}_{a}^{-1})^{-1}\mathbf{K}^{T}\). Under the regularized least squares assumption [32, 33] the information matrix is \((\mathbf{K}^{T}\mathbf{K}+S_{e}\mathbf{S}_{a}^{-1})\) and the posterior covariance is \(\mathbf{S}_{p}=(\mathbf{K}^{T}\mathbf{K}+S_{e}\mathbf{S}_{a}^{-1})^{-1}\). Using this formulation the information can be calculated as
\\[H_{bm}= -\\frac{1}{2}\\ln\\left|\\mathrm{\\bf S}_{p}\\mathrm{\\bf S}_{a}^{-1}\\right| \\tag{11}\\]
Where \\(\\;H_{bm}\\) is the modified information content using Bayes theory. The calculated \\(\\;H_{bm}\\) for our 140 MWs for the standard pressure and temperature profiles of Mars with three different 100%, 50% and 10% diagonal a-priori covariances for SNR\\(=\\)800 are presented in Fig. 2. Notice that the calculated information has dropped significantly, but is still high.
The modified calculation of information (H\({}_{\rm bm}\)) shown in Fig. 2 demonstrates that the values of H\({}_{\rm bm}\) can be within the domain of the number of parameters (16) for a small a-priori covariance of 10%, whereas even if we recomputed Fig. 1 with a 10% a-priori covariance instead of 50%, the values of H\({}_{\rm b}\) would still be in the range 50-140. This also confirms that the regularization paradigm can produce better results than the error-statistics estimation approach for a nonlinear problem. The values of H\({}_{\rm bm}\) depend strongly upon the choice of the a-priori covariance, and the calculated "information" increases with increasing a-priori covariance. This implies that the a-priori covariance is obscuring the information in the measurement. It is very difficult to partition the information between the measurement and the a-priori in such a formulation.
Fig. 2 shows that the calculated maximum information content of temperature-sensitive MWs can be up to double the maximum information content of pressure-sensitive MWs. This can be understood in terms of the functional nonlinearity. This problem is not linear and has complex functional behaviour. It is not easy, or perhaps not even possible, to determine the quantitative degree of nonlinearity. On the other hand, when the covariance calculation is based on the model, the functional relation of the model is important, and we have assumed a linear relation for the forward model in developing the covariance in the information content calculation of Eqn (6). If this assumption is very far from reality, the information content calculation will almost certainly lead to erroneous results.
We have made a simple trial to depict the degree of nonlinearity of the present problem. We first select, for illustration purposes, two MWs from Fig. 2: one for which the temperature information is very high (MW-79) and one for which the temperature information is low (MW-132). We then calculate the signal using the standard Mars atmospheric pressure (**P**) and temperature (**T**), which is plotted in Fig. 3; this implies **x** = [**T**;**P**]. We calculate the Jacobians of these MWs for temperature and pressure, **K\({}_{t1}\)**, **K\({}_{p1}\)**, **K\({}_{t2}\)** and **K\({}_{p2}\)**. Then, we plot four different functional relations as follows:
\([\mathbf{K}_{t1}\ \mathbf{K}_{p1}]\mathbf{x}\), \([\sqrt{\mathbf{K}_{t1}}\ \mathbf{K}_{p1}]\mathbf{x}\), \([\mathbf{K}_{t1}\ \sqrt{\mathbf{K}_{p1}}]\mathbf{x}\) and \([\sqrt{\mathbf{K}_{t1}}\ \sqrt{\mathbf{K}_{p1}}]\mathbf{x}\) in Fig. 3a;
\([\mathbf{K}_{t2}\ \mathbf{K}_{p2}]\mathbf{x}\), \([\sqrt{\mathbf{K}_{t2}}\ \mathbf{K}_{p2}]\mathbf{x}\), \([\mathbf{K}_{t2}\ \sqrt{\mathbf{K}_{p2}}]\mathbf{x}\) and \([\sqrt{\mathbf{K}_{t2}}\ \sqrt{\mathbf{K}_{p2}}]\mathbf{x}\) in Fig. 3b.
That is, we have compared a linear relationship and a square root (non-linear) relationship.
(The matrix notation is
\[[\mathbf{A}\ \mathbf{B}]=\begin{bmatrix}a_{11}&a_{12}&\cdots&b_{11}&b_{12}&\cdots\\ a_{21}&a_{22}&\cdots&b_{21}&b_{22}&\cdots\\ \vdots&&&\vdots&&\end{bmatrix}\]
where \(\mathbf{A}\) and \(\mathbf{B}\) are two different matrices with elements \(a_{11},a_{12},\ldots\) and \(b_{11},b_{12},\ldots\), respectively.)

Figure 2: The information H\({}_{\rm bm}\) for pressure and temperature under three different a-priori covariances (e.g. temp100 denotes a 100% a-priori covariance of temperature; similarly, pres50 denotes a 50% a-priori covariance of pressure).
If the functional relationship is correct, then the appropriate line in Fig. 3 should lie on top of the data, i.e. in that case \([\ ]\mathbf{x}=\mathbf{y}\). The further the functional line departs from the data, the further the assumed functional relationship departs from the actual one. We categorize the fits into four groups: good, reasonably good, poor, and very poor.
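The comparison can be automated as below; this is a sketch under the assumptions that the Jacobians are elementwise non-negative (so the square root is defined) and that the RMS departure from the data is an adequate proximity score:

```python
import numpy as np

def functional_fits(K_t, K_p, x, y):
    """RMS departure of the four assumed functional forms from the data y."""
    forms = {
        "[Kt Kp]x":             np.hstack([K_t, K_p]) @ x,
        "[sqrt(Kt) Kp]x":       np.hstack([np.sqrt(K_t), K_p]) @ x,
        "[Kt sqrt(Kp)]x":       np.hstack([K_t, np.sqrt(K_p)]) @ x,
        "[sqrt(Kt) sqrt(Kp)]x": np.hstack([np.sqrt(K_t), np.sqrt(K_p)]) @ x,
    }
    return {name: float(np.sqrt(np.mean((f - y) ** 2))) for name, f in forms.items()}

rng = np.random.default_rng(2)
K_t = rng.uniform(0.1, 1.0, size=(16, 16))
K_p = rng.uniform(0.1, 1.0, size=(16, 16))
x = np.ones(32)
y = np.hstack([np.sqrt(K_t), K_p]) @ x   # pretend the sqrt-T form is the truth
fits = functional_fits(K_t, K_p, x, y)
print(min(fits, key=fits.get))           # "[sqrt(Kt) Kp]x", cf. Table 1
```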
It is very difficult to develop a simple nonlinear relation for the present problem. The nonlinearity pattern is complex and highly dependent upon the MWs selected, since the nonlinearity patterns differ greatly between various parts of the spectrum and different tangent points. Table 1 shows that the square root of the temperature Jacobian combined with a linear pressure Jacobian produces the best overall relationship and can at least be used as a qualitative measure.
Figure 3: Verification of the degree of nonlinearity of the present model using linear and square-root Jacobians.
Table 1: Proximity of the functional relationship

\begin{tabular}{|l|l|l|} \hline & MW-79 & MW-132 \\ \hline \([K_{t}\ K_{p}]\) & poor & good \\ \hline \([\sqrt{K_{t}}\ K_{p}]\) & good & reasonably good \\ \hline \([K_{t}\ \sqrt{K_{p}}]\) & very poor & poor \\ \hline \([\sqrt{K_{t}}\ \sqrt{K_{p}}]\) & reasonably good & very poor \\ \hline \end{tabular}

We observed that the condition number of these Jacobians is O(\(10^{2}\)), which is not highly ill-conditioned, so we can use Eqn (6) for calculating the information using hyperspace. Since the present problem produces a triangular matrix, \(\left|\sqrt{\mathbf{K}}\right|=\sqrt{\left|\mathbf{K}\right|}\), and we can modify Eqn (6) as
\\[H_{hs}^{\\prime}=\\frac{1}{4}\\log_{2}\\frac{\\left|\\mathbf{K}_{2,}^{T}\\mathbf{K}_ {2,}\\right|}{\\left|\\mathbf{K}_{1,}^{T}\\mathbf{K}_{1,}\\right|}\\quad\\text{and} \\quad H_{hs}^{\\,p}=\\frac{1}{2}\\log_{2}\\frac{\\left|\\mathbf{K}_{2,}^{T}\\mathbf{K }_{2,}\\right|}{\\left|\\mathbf{K}_{1,}^{T}\\mathbf{K}_{1,}\\right|} \\tag{12}\\]
Where, \\(H_{hs}^{\\prime}\\)_and_\\(H_{hs}^{\\,p}\\) are the information content of temperature and pressure using hyperspace respectively. The notation of the Jacobian: \\(\\mathbf{K}_{u}\\)_and_\\(\\mathbf{K}_{rl}\\,\\)stands for the temperature Jacobian for different profiles and different MWs respectively.
Figure 4: Information content using hyperspace for 140 different micro-windows.
Using all of the above, we compute Fig. 4, realising that it is only a rough first-order approximation to a very complex problem. However, even with this crude calculation, the numbers finally become reasonable: the number of bits of information is of the same order as the number of parameters. The calculated information for the pressure component in Fig. 2 for the 5% a-priori case is quite similar in nature to what we obtain in Fig. 4 using the hyperspace formulation, because in that case the relationship is linear.
### Micro-Window (MW) Selection
There is no doubt that the pressure and temperature retrieval from this spectroscopic measurement is an ill-posed problem. Therefore it is impossible to solve the problem at an arbitrary location in the spectrum. However, there could be some micro-windows (MWs) available where the problem can be solved. Thus, we need to select appropriate MWs by considering both the physics and mathematics of the problem.
Consider the following: the signal is primarily a function of the number density of molecules of the species being considered at a level. The same density may be found for several sets of pressure and temperature for a constant volume mixing ratio, since they are linked via the ideal gas law. In such a situation, we have to be very careful to select MWs such that there is maximal information content for one set of parameters (e.g. the pressure/temperature profile points) together with minimal information for the other set of parameters (the temperature/pressure profile points). Alternatively stated, the mutual information between the two sets of parameters has to be minimized. This implies that there should be a large difference between the information for the two sets of parameters. Ideally, the information for one set of parameters should be zero and that for the other very high.
For example, if H\({}_{\rm hs}\) for temperature is very high while H\({}_{\rm hs}\) for pressure is close to zero, that MW is well suited to the temperature retrieval.
### Synthetic Data Generation
The most significant difference between the forward model for Mars and that for Earth is the dust. Dust scattering is the most important extinction process in the Martian atmosphere and must be considered in synthetic data preparation. The extinction coefficient due to dust at optical wavelength \(\lambda\), \(\sigma_{\lambda}\) (m\({}^{-1}\)), can be modelled as
\\[\\sigma_{\\lambda}=\\int_{0}^{\\infty}N(r,z)\\ k_{r,\\lambda}\\ dr \\tag{13}\\]
where k\\({}_{r,\\lambda}\\) is the effective scattering cross-section of a single dust particle of radius r. For spherical dust particles, the effective scattering cross-section for a single particle is given by
\\[k_{r,\\lambda}=\\pi r^{2}\\ Q_{r,\\lambda} \\tag{14}\\]
where \\(\\pi r^{2}\\) is the actual dust particle cross-sectional area, and Q\\({}_{r,\\lambda}\\) (dimensionless) is the single particle scattering efficiency. The scattering efficiency is a function of a complex index of refraction, the wavelength of the light and a size parameter. We have collected the extinction efficiency data for 0.3-4.15 \\(\\upmu\\)m from Ockert-Bell et al. [34] and 6\\(\\sim\\)16 \\(\\upmu\\)m from Forget [35]. No dependence on particle size is mentioned, but the effective diameter of particle is considered to be 1.85 \\(\\upmu\\)m. The optical depth (dimensionless) quantifies the scattering and absorption that occurs between the top of the atmosphere can be calculated as
\\[\\tau_{\\lambda}=\\int\\sigma_{z}dz \\tag{15}\\]
For \\(Q_{r,\\lambda}=Q_{ext}\\), independent of r, and thus of height, z, we can write
\\[\\tau_{\\lambda}=Q_{ext}\\ J(z)dz \\tag{16}\\]
where A(z) is the cross-sectional area per unit volume
\\[A(z)=\\int_{0}^{\\infty}\\pi r^{2}N(r,z)dr \\tag{17}\\]
We need a dust profile for our numerical simulation. We used a simplified dust profile with a scale height (H) of 10 km and a uniform particle distribution. Then we can write
\\[\\tau_{\\lambda}=Q_{ext}N_{0}\\int\\pi r^{2}e^{-\\frac{z}{H}}dz \\tag{18}\\]
Where N\\({}_{0}\\) is the dust concentration at the ground. The value of N\\({}_{0}\\) of 1.85 \\(\\upmu\\)m particles at 1075cm\\({}^{-1}\\) for a vertical optical depth of 0.5 is 1.68x10\\({}^{6}\\) m\\({}^{-3}\\). If we assume particle size of 1 \\(\\mu\\)m then it calculates N\\({}_{0}\\) of 5.68x10\\({}^{6}\\) m\\({}^{-3}\\) and the column particle integrated cross section 5.9x10\\({}^{10}\\) m\\({}^{-2}\\). These values appear reasonable as the Pheonix dust model [36] calculates a number density of 1.47x10\\({}^{7}\\) m\\({}^{-3}\\) and the column particle integrated crosssection of 5.4x10\\({}^{10}\\) m\\({}^{-2}\\). The dust on Mars is known to be highly variable, so these values should be treated as approximations.
Figs. 5 and 6 show the dust transmission for two different vertical optical depths, 0.06 and 0.5. It is observed that the proposed solar occultation instrument cannot make any measurement below 25 km when the vertical optical depth of the dust is as high as 0.5.
We will now present a prototype retrieval from 4 to 50 km, containing 16 grid points, with a dust vertical optical depth of 0.06. To simulate the data, we added Gaussian random noise of zero mean and standard deviation 0.01 to produce an SNR of 100 at 100% transmission. Simulated retrievals have been performed using a regularised total least squares (RTLS) method, which is described in refs. [7, 8]. Three runs of each retrieval have been done using three different realizations of the noise, to gain confidence in the retrieval scheme. We solve the problem by considering both sets of parameters as a single vector (e.g. [T;P]\({}^{\rm T}\) means [t\({}_{1}\) t\({}_{2}\) ... ; p\({}_{1}\) p\({}_{2}\) ...]\({}^{\rm T}\)). We constructed a synthetic "true" profile with a sinusoidal variation of amplitude 25% around the standard temperature profile and 50% around the standard pressure profile of Mars. We have performed our simulated retrieval from two different initial guess profiles:
Figure 5: Calculated dust transmission spectra for various tangent heights of a solar occultation measurement in low-dust conditions.
the standard pressure and temperature profile of Mars referred to above, and a constant profile of T = 280 K, p = 80 Pa. The present simulated retrieval assumes that the differences between the tangent heights of the individual spectra are known, but not their absolute values. This corresponds to the case of a spacecraft where the relationship between the spectra taken in a single occultation is well known from spacecraft and scanning parameters, but a fiducial value related to either the atmosphere or the planet has to be obtained from the data.
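The noise model for the synthetic data is a one-liner; the transmittance values below are hypothetical:

```python
import numpy as np

def add_noise(transmittance, sigma=0.01, seed=None):
    """Zero-mean Gaussian noise; sigma = 0.01 gives SNR = 100 at 100% transmission."""
    rng = np.random.default_rng(seed)
    return transmittance + rng.normal(0.0, sigma, size=transmittance.shape)

clean = np.linspace(0.2, 1.0, 16)                          # 16 tangent heights
noisy_runs = [add_noise(clean, seed=s) for s in range(3)]  # three realizations
```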
### Results and Discussions
The common belief is that the information is the measure of success for an ill-posed inversion. According to Fig. 1, the information (H\({}_{\rm b}\)) of both parameters is quite high (\(\sim\)160 bits) for MWs 1-15 and/or 40-50 as compared to the other MWs. Thus, one could expect a reasonable solution for any combination of these MWs. However, we did not obtain a reasonable solution using the OEM.
Figure 6: Calculated dust transmission spectra for various tangent heights of a solar occultation measurement in moderate-dust conditions.
This shows that the present problem cannot be solved using MWs rich in information for both sets of parameters, even if these MWs are supplemented with a-priori information and forced by a-priori constraints. The a-priori information and/or constraints are basically noise in terms of the true solution process, as is also discussed in refs. [6, 8]. The solution process is governed by the information in the measurement, the condition number of the Jacobian, the functional complexity, the prevention of multiple solutions, and the stabilizing criteria (which block noise propagation).
The retrieval results are shown in Fig. 7, demonstrating that there exists a unique solution for these parameters from the spectroscopic measurement using this methodology. A little oscillation is observed when both temperature and pressure are low; however, the error is less than 2%. The signal is very low at the low end of the temperature and pressure grid, and noise corrupts the signal there. This problem can be minimized by selecting several appropriate MWs for different ranges of P and T. It is often argued that regularization suppresses the information to smooth out the solution. To demonstrate otherwise, we have plotted the pressure and temperature as a single vector (as it is solved),
Figure 7: A composite retrieval of pressure and temperature from a spectroscopic measurement. Grid points 17 to 32 correspond to pressure grid points 1 to 16.
to emphasize that our regularization scheme does not do this. The solutions tend to the correct result even with the large difference between the last temperature point and the first pressure point. This is possible because the RTLS method implicitly decreases the regularization strength as the iterative process approaches the final solution. Thus the regularization effect close to the solution point is very low and all the information is retained in the solution.
To further verify the regularization effect in our retrieval schemes, we have assumed that the truth is not a smooth profile. The true profile is the standard profile modified at random by up to 25% in temperature and 50% in pressure. The results are shown in Fig. 8. The results of this study give assurance that the pressure and temperature can be uniquely solved using a spectroscopic measurement. In this case, the low-temperature points at high pressures do not produce any oscillation.
Figure 8: A composite retrieval of pressure and temperature for random profiles. Grid points 17 to 32 correspond to pressure grid points 1 to 16.
An issue can be raised that the information calculation using the hyperspace formulation is independent of the noise in the measurement. In considering this point, it can be argued that the noise in the measurement is not the limiting factor (i.e. our experiment can be achieved at SNR \(\sim\)500); the major issue is the ill-conditioning of the problem. We have examined this issue by calculating the information using Eqn (7) and Eqn (11); the difference due to the inclusion of the error terms in the calculation is very small. We have already discussed that the information calculation using the entropy method is a qualitative measure, whereas the problem is solved by an iterative method. The value of the degrees of freedom of retrieval (DFR), discussed in ref. [6], at the last iteration contains the meaningful information for the inversion. The basic idea behind the information analysis using hyperspace is the understanding of the physical problem under perfect measurement conditions in order to select the optimum MWs.
Yet another issue is why we use a high-resolution measurement when 64 spectral points are co-added to produce the "signal". The high-resolution measurement is necessary so that we can eliminate unwanted features from the spectrum using our select-channel model [7] before adding the spectral points. The advantages of adding spectral points are a reduction in the problem nonlinearity as well as in the measurement noise, without a corresponding disturbance of the spectral features and information.
There is no dependence upon the initial guess: our retrieval scheme produces the same solution for two very different initial guess profiles, as shown in Figs. 7 and 8. Our solution does not produce any error correlated with the error in the fiducial tangent height; an error of 5 km in tangent height will introduce only 7.3\(\times\)10\({}^{-2}\)% error in our solution. This happens because we do not use a hydrostatic constraint in our retrieval model. We do require knowledge of the relation between successive measurements, which is easier to determine from a satellite-based measurement, as discussed in the introduction.
Despite the fact that the proposed retrieval scheme gives a good result, there still remains the problem of the fidelity of our dust model. Our dust model is based on single scattering with silicate as the only scattering material, which is a reasonable assumption for Martian dust. We now present a variant of our retrieval scheme that avoids the ambiguity of the dust. We assume that the transmittance of the dust is fairly constant within the small portion of spectrum in a MW, and we scale the calculated transmittance (with noise) with respect to the maximum value in the MW before adding the spectral points. The advantage is that the scaled retrieval will remove any other broadband features. As we assume that the detector is dominated by Johnson noise in our simulated retrieval (a worst-case scenario; photon-noise-limited detectors would produce a better result), the noise is enhanced 4-5 times for lower tangent-height measurements due to the high absorption of the dust. It should be mentioned that the assumed SNR of 100 in this study is somewhat lower than could reasonably be achieved in practice; an SNR of 400 could probably be achieved with this measurement technique [12]. Thus, we have made the same retrieval using SNR=300 without the dust model in the retrieval scheme, whereas the synthetic data contain the dust transmittance. The optimized MWs change when the model is changed; in such a situation, we intuitively select the optimized MWs for this study (the details are not shown here).
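A sketch of the scaling step, assuming the dust transmittance is constant across the MW (the gas transmittances and the dust factor below are hypothetical):

```python
import numpy as np

def scale_microwindow(tau_measured):
    """Scale a MW's transmittance by its in-window maximum before co-adding.

    If the measured tau = t_dust * tau_gas with t_dust ~ constant across the
    MW, dividing by the maximum cancels the broadband dust factor.
    """
    return tau_measured / np.max(tau_measured)

tau_gas = np.linspace(0.90, 1.00, 64)      # hypothetical gas transmittance
t_dust = 0.4                                # broadband dust transmittance
scaled = scale_microwindow(t_dust * tau_gas)
print(np.allclose(scaled, tau_gas))         # True: dust factor removed
```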
The results shown in Fig. 9 confirm that the ambiguity of dust in the Martian atmosphere can be avoided in the pressure and temperature retrieval by using our model. The present retrieval scheme has stable solutions even under scaling. The solution contains more error when the noise in the input signal is high (the simulation at SNR=100 is not shown here).
We have also made a similar simulated study for a satellite-based solar occultation measurement of the Earth's atmosphere (results not shown here). The problem is easier to solve there, as dust is not present in the Earth's atmosphere. We find that the pressure and temperature of the Earth's atmosphere can be uniquely retrieved from such spectroscopic measurements using this method.
Figure 9: A composite retrieval of pressure and temperature without the dust model in the retrievals. Grid points 17 to 32 correspond to pressure grid points 1 to 16.
### Conclusions

We have demonstrated that pressure and temperature can be uniquely retrieved from solar occultation spectroscopic measurements using the RTLS method in a logarithmic state-space domain. Our solution is independent of the initial guess profile, a-priori information and tangent height.
A new methodology for the calculation of information in a measurement based on the model space parameters has been introduced, which helps in understanding the physical problem in an ill-posed inversion. We also include a new technique for micro-window selection for the two sets of parameters (pressure and temperature) using only the information content of the measurements.
The success of this retrieval is due to optimum MW selection using hyperspace information content analysis. It can be concluded that the information analysis using the Bayes formulation has its own inherent limitations: the information in bits is erroneous when the problem is nonlinear, and the a-priori information cannot be partitioned from the measurement. Information-rich MWs for both sets of parameters are not necessarily a good choice for achieving a reasonable solution when the physical problem produces multiple solutions. The mutual information between the parameters, where multiple solutions occur, has to be minimized in order to achieve reasonable retrievals.
We also showed that the effect of dust on Mars can be successfully eliminated from the pressure and temperature retrievals.
### Acknowledgement
We acknowledge the support of the Canadian Space Agency and ABB Bomen Inc. in carrying out this work.
## References
* [1] Houghton JT, Taylor FW, Rodgers CD. Remote sounding of atmospheres. Cambridge University Press, 1984.
* [2] Fleming HE, Smith WL. Inversion techniques for remote sensing of atmospheric temperature profiles. Reprint from Fifth Symposium on Temperature: Instrument Society of America 1971, p. 2239-50.
* [3] Fritz S, Wark DQ, Fleming HE, Smith WL, Jacobowitz H, Hilleary DT, Alishouse JC. Temperature sounding from satellites. NOAA Technical Report NESS 59. U.S. Department of Commerce: National Oceanic and Atmospheric Administration, National Environmental Satellite Service 1972.
* [4] Twomey S. An introduction to the mathematics of inversion in remote sensing and indirect measurements. New York: Elsevier, 1977.
* [5] Rodgers CD. Inverse methods for atmospheric soundings: theory and practice. Singapore: World Scientific, 2000.
* [6] Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008;109:514-26.
* [7] Koner PK, Drummond JR. Atmospheric trace gases retrievals using the nonlinear regularized total least squares method. JQSRT 2008;109:2045-59.
* [8] Koner PK, Battaglia A, Simmer C. A rain rate retrieval algorithm for attenuating radar measurement. J Appl Meteo Climatology 2010;49:381-93.
* [9] Kumer B, Mergenthaler JL. Pressure, temperature, and ozone profile retrieval from simulated atmospheric earthlimb infrared emission. Appl Opt 1991;30:1124-31.
* [10] Clarmann T, Glathor N, Grabowski U, Hopfner M, Kellmann S, Kiefer M, et al. Retrieval of temperature and tangent altitude pointing from limb emission spectra recorded from space by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). J Geophys Res 2003;108:ACH9 1-13.
* [11] Carlotti M, Ridolfi M. Derivation of temperature and pressure from submillimetric limb observations. Appl Opt 1999;38:2398-409.
* [12] Boone CD, Nassar R, Walker KA, Rochon Y, McLeod SD, Rinsland CP, et al. Retrievals for the atmospheric chemistry experiment Fourier-transform spectrometer. Appl Opt 2005;44:7218-31.
* [13] Nowlan CR, McElroy CT, Drummond JR. Measurements of the O2 A- and B-bands for determining temperature and pressure profiles from ACE-MAESTRO: Forward model and retrieval algorithm. JQSRT 2007;108:371-88.
* [14] Kleinbohl A, Schofield JT, Kass DM, Abdou WA, Backus CR, Sen B, et al. Mars climate sounder limb profile retrieval of atmospheric temperature, pressure, and dust and water ice opacity. J Geophys Res 2009;114:E10006 1-30.
* [15] Berman M, Van Eerdewegh P. Information content of data with respect to models. Am J Physiol Regulatory Integrative Comp Physiol 1983;245:620-623.
* [16] Anderson TW. An Introduction to Multivariate Statistical Analysis. New York: Wiley, 1958.
* [17] Zhu J, Ting K. Performance distribution analysis and robust design. ASME J Mech Design 2001;123:11-17.
* [18] Shannon CE. A Mathematical Theory of Communication. Bell System Technical Journal 1948;27:379-423, 623-56.
* [19] Poggio T, Koch C. Ill-posed problems in early vision: from computational theory to analogue. Proc R Soc Lond 1985;226:303-23.
* [20] Kieffer HH, Zent AP. Quasi-perodic climate change on Mars, Mars. University of Arizona Press, 1992.
* [21] Rothman LS, Jacquemart D, Barbe A, Benner DC, Birk M, Brown LR, et al. The HITRAN 2004 molecular spectroscopic database. JQSRT 2005;96:139-204.
* [22] Quine BM, Drummond JR, GENSPECT a line-by-line code with selectable interpolation error tolerance. JQSRT 2002;74:147-65.
* [23] Dudhia A, Jay VL, Rodgers CD. Microwindow selection for high-spectral-resolution sounders. Appl Opt 2002;41:3665-73.
* [24] Del Bianco S, Carli B, Cecchi-Pestellini C, Dinelli BM, Gaia M, Santurri L. Retrieval of minor constituents in a cloudy atmosphere with remote-sensing millimetre-wave measurements. Q J R Meteorol Soc 2007;133(S2):163-70.
* [25] Weidmann D, Reburn WJ, Smith KM. Retrieval of atmospheric ozone profiles from an infrared quantum cascade laser heterodyne radiometer: results and analysis. Appl Opt 2007;46:7162-71.
* [26] Rodgers CD. Information content and optimisation of high spectral resolution remote measurement. Adv Space Res 1998;21:361-67.
* [27] Tristan SL, Gabriel P, Leesman K, Cooper SJ, Stephens GL. Objective assessment of the information content of visible and infrared radiance measurements for cloud microphysical property retrievals over the global oceans. Part I: Liquid clouds. J Appl Meteo Climatology 2006;45:20-41.
* [28] Huang HL, Purser RJ. Objective Measures of the Information Density of Satellite Data. Meteorol Atmos Phys 1996;60:105-17.
* [29] Reuter M, Buchwitz M, Schneising O, Heymann J, Bovensmann H, Burrows JP. A method for improved SCIAMACHY CO2 retrieval in the presence of optically thin clouds. Atmos Meas Tech 2010;3:209-32.
* [30] Worden J, Kulawik SS, Shephard MW, Clough SA, Worden H, Bowman K, et al. Predicted errors of tropospheric emission spectrometer nadir retrievals from spectral window selection. J Geophys Res 2004;109:D09308, doi:10.1029/2004JD004522.
* [31] Ceccherini S, Ridolfi M. Variance-covariance matrix and averaging kernels for the Levenberg-Marquardt solution of the retrieval of atmospheric vertical profiles. Atmos Chem Phys Discuss 2009;9:25663-85.
* [32] Moonen M, Vandewalle J. A Square Root Covariance Algorithm for Constrained Recursive Least Squares Estimation. J VLSI Signal Processing 1991;3:163-72.
* [33] Yang H, Xu W, Zhao H, Chen X, Wang J. Information flow and controlling in regularization inversion of quantitative remote sensing. Science in China Ser. D Earth Sciences 2005;48:74-83.
* [34] Ockert-Bell ME, Bell-III JF, Pollack JB, McKay CP, Forget F. Absorption and scattering properties of the Martian dust in the solar wavelengths. J Geophys Res 1997;102:9039-50.
* [35] Forget F. Improved optical properties of the Martian atmospheric dust for radiative transfer calculations in the infrared. Geoph Res Let 1998;25:1105-09.
* [36] Taylor PA, Li PY, Michelangeli DV, Pathak J, Weng W. Modelling dust distributions in the atmospheric boundary layer on Mars. Boundary-Layer Meteorol 2007;125:305-28.